Good afternoon. This is more of a work-in-progress post than most. I am working on a large project to understand how (if at all) learning curves should be applied to public energy subsidies. For now, I would not be a hard ‘no’ on this question, but I believe much greater caution should be exercised in subsidizing learning-by-doing. Today, I will go into some reasons for this, but mostly I want to offer an introduction to what learning curves are and look at two major issues: how strong the evidence is that cumulative production causes price declines, and declining learning rates.
Learning curves start with a universal intuition. When you or I work repeatedly at some task, we get better at it. “Better” can mean we do it faster, or we can do it at less cost. This is why employers value years of experience at a job so highly. This improvement does not necessarily come about because of any significant new knowledge.
A similar phenomenon has been widely observed in industry, going back to a 1936 paper by Theodore Paul Wright, “Factors Affecting the Cost of Airplanes”. In this paper, Wright observes that the more airplanes industry produces, the cheaper they get. This can be expressed as follows:
$$Y_X = Y_0 \cdot X^{\log_2 b}$$
Here, Y_X is the cost of producing the X-th unit, and Y_0 is the cost of the first unit. X is the number of units produced, and b is the progress ratio: the factor by which unit cost is multiplied when cumulative production doubles. For example, if the progress ratio b is 0.8 (a fairly typical value), then each time cumulative production doubles, the unit cost is 80% of what it was before. A related term that will come up below is the learning rate, which is one minus the progress ratio: a progress ratio of 0.8 corresponds to a learning rate of 20%, meaning costs fall 20% per doubling.
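For concreteness, here is a minimal sketch in Python (the function name and the numbers are mine) that evaluates the formula:

```python
import math

def wrights_law_cost(y0: float, x: float, progress_ratio: float) -> float:
    """Cost of the X-th unit under Wright's Law: Y_X = Y_0 * X ** log2(b).

    Each doubling of cumulative production multiplies unit cost by b.
    """
    return y0 * x ** math.log2(progress_ratio)

# With b = 0.8: the 2nd unit costs 80% of the 1st, the 4th costs 64%,
# the 8th costs ~51%, and so on.
for x in (1, 2, 4, 8):
    print(x, round(wrights_law_cost(100.0, x, 0.8), 1))
```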
There are many variations of this formula, some of which we’ll look at here, but the basic formula continues to be very relevant nearly a century later. This is unfortunate in that the durability of the formula masks some of its weaknesses.
Wright’s Law is used across many industries, and it is perhaps used most heavily in the context of clean energy. Reducing carbon dioxide emissions is a high priority, but it is expected that doing so will be a lengthy and costly endeavor. How costly will it be, and how can we achieve decarbonization at the lowest cost? To answer this, Wright’s Law is invoked to model the cost trajectory of clean energy technologies. Furthermore, subsidies for clean energy are heavily predicated on the expectation that, as a result of learning curve effects, the cost of clean energy technologies will decline with increasing deployment. Therefore, to craft effective decarbonization policies, we need to understand how well this assumption works.
Despite some weaknesses, the existence of learning curve effects is firmly established across a wide range of industries. Here are just a few of the results.
In 1968, Bruce Henderson of the Boston Consulting Group published an extensive review of learning curves across industries. He finds that a learning rate of 20-30% (equivalently, a progress ratio b of 0.7-0.8 in the above equation) is typical. Henderson posits that these rates remain constant, even as overall volume grows very large, but in a companion piece he offers this bit of wiggle room:
However, these observed or inferred reductions in costs as volume increases are not necessarily automatic. They depend crucially on a competent management that seeks ways to force costs down as volume expands.
Using a database of price history of various technologies across industries, researchers from the Santa Fe Institute have found that Wright’s Law makes good forecasts of technology prices, though Moore’s Law is not far behind. More about this distinction below. Here is a good explainer on the subject. This piece explicitly analyzes direct air capture, a technology for removing carbon dioxide from the atmosphere, and Stripe’s plan to “buy down” the cost of DAC through early procurement so that it becomes a cost-effective tool for carbon mitigation.
Here is a paper that finds learning rates for various industrial technologies in Singapore.
As noted above, learning curves are now most studied in the context of energy. This paper examines oil shale in the United States and finds a learning rate, depending on the model, of around 3-4%. If I read this correctly, that corresponds to a progress ratio of 96-97%, so we should not expect much learning-by-doing cost reduction from shale oil relative to other technologies.
This paper finds a learning rate of 15% for wind and 24% for solar. The authors also argue that the levelized cost of electricity, rather than installed cost, is a better basis for estimating learning rates. LCOE, the constant price per unit of electricity that a plant operator would have to receive over the lifetime of the project to break even, is also a more relevant metric for society, though it doesn’t account for broader system costs like grid integration and the need for energy storage.
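For reference, the standard definition of LCOE discounts both costs and generation over the project’s lifetime, where C_t is the cost incurred in year t, E_t is the electricity generated in year t, r is the discount rate, and T is the project lifetime:

$$\mathrm{LCOE} = \frac{\sum_{t=0}^{T} C_t/(1+r)^t}{\sum_{t=0}^{T} E_t/(1+r)^t}$$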
This literature review shows learning rates for 11 electricity technologies. The ranges are large, but the learning rate is highest for solar photovoltaics. Some technologies, such as onshore wind, nuclear, and natural gas combined cycle, have ranges of learning rates that include negative values, which would mean that costs go up with deployment. Integrated gasification combined cycle (a type of coal plant) has rates ranging from 2.5% to 20%. Hydroelectricity is at a modest 1.4%.
As an aside, critics of nuclear power seize upon the supposed negative learning rate as an indicator that nuclear power can never be financially viable. To my mind, negative learning rates tell us that there is something wrong with the learning curve model, as there is no plausible causal mechanism by which building more of something would cause its price to rise. What must really be happening is that factors other than the volume of deployment are causing prices to rise. Someone who is opposed to nuclear power for ideological reasons will be incurious as to what those other factors may be.
Here is another paper that estimates learning rates for various energy technologies and connects those learning rates to technological diffusion. And this paper looks at various emerging energy efficiency technologies in iron and steel production and finds that, while they do not appear to be feasible today (“today” being 2015, when the paper was published), they might be feasible in the future with learning-by-doing cost reductions.
Learning rates can be used to forecast the future evolution of the energy system and the implications of transitioning to an energy system dominated by low-carbon sources. This paper argues that, due to learning effects, deployment of a low-carbon energy system dominated by solar PV, wind, batteries, and hydrogen electrolyzers will be a financial positive. This 2022 paper estimates that if learning rates for clean energy technologies persist for a decade, then a clean energy system should be achievable in 25 years. This paper estimates that by midcentury, cheaper electricity, driven by learning curves, should result in an energy system that provides 66% of final energy as electricity. That number is about 20% now.
That’s enough for now. Many of these papers themselves discuss other results in the well-studied area of learning curves. But now I want to turn attention to some of their limitations.
The first challenge with using learning curves relates to causality. Most results are obtained by tracking an item’s cumulative production over time alongside its unit cost and fitting a curve between the two. But how do we know that cumulative production causes price declines? When a technology shows increasing production and declining prices, we can plausibly tell a story where causality goes the other way. Maybe prices decrease for unrelated reasons, such as some other technological innovation, and the lower price causes production to go up. Causality can be quite difficult to prove.
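To make the fitting procedure concrete, here is a sketch of the standard approach: an ordinary least squares fit in log/log space. The data below are made up for illustration, not taken from any of the papers above.

```python
import numpy as np

# Made-up illustrative data: cumulative production and unit cost.
cumulative = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5])
unit_cost = np.array([95.0, 68.0, 47.0, 34.0, 24.0, 17.0])

# Wright's Law is linear in log/log space:
#   log2(Y) = log2(Y_0) + log2(b) * log2(X)
slope, intercept = np.polyfit(np.log2(cumulative), np.log2(unit_cost), 1)

progress_ratio = 2.0 ** slope         # cost multiplier per doubling
learning_rate = 1.0 - progress_ratio  # fractional cost decline per doubling
print(f"progress ratio ~ {progress_ratio:.2f}, learning rate ~ {learning_rate:.0%}")

# Caveat: a good fit demonstrates correlation between production and cost,
# not that production causes the cost decline.
```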
Moore’s Law is a well-known model, attributed to Gordon Moore, cofounder and later CEO of Intel, that the number of transistors on a computer chip tends to double every 18 months. There is debate as to whether the “law”, which is really more of an empirical observation than a law, still holds, but the famous recent paper by Bloom et al. finds that ever greater amounts of research are required to keep this progress going. What exactly drives Moore’s Law can be debated, but here I am using it as a stand-in for cost reductions that are driven by factors other than cumulative production.
Matt Clancy deals with the issue here. One paper Clancy discusses is this one by Lafond et al. which hindcasts the cost of solar photovoltaics using two models: the learning curve model discussed above; and a Moore’s Law model, in which the cost of solar panels decreases due to exogenous technological change. It turns out that the two models yield very similar forecasts for PV prices, which means that it is difficult to know which is right.
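A toy calculation (my own, not from the Lafond et al. paper) shows why the two models are so hard to tell apart: when cumulative production grows exponentially in time, as it has for solar PV, a power law in cumulative production and an exponential decline in calendar time trace out exactly the same curve. All parameters below are made up.

```python
import numpy as np

years = np.arange(0, 30)

# Suppose cumulative production grows exponentially, ~25% per year.
cumulative = 1e3 * 1.25 ** years

# Wright's Law: cost falls as a power of cumulative production (b = 0.8).
wright = 100.0 * (cumulative / cumulative[0]) ** np.log2(0.8)

# Moore's-Law-style model: cost falls at a fixed rate per calendar year.
# Choosing the rate 0.8 ** log2(1.25) makes the two curves identical.
moore = 100.0 * (0.8 ** np.log2(1.25)) ** years

print(np.max(np.abs(wright - moore)))  # ~0, up to floating-point error
```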
Another paper Clancy discusses is this one related to American military production around World War II. This is fairly close to a natural experiment, in which production suddenly shot up for reasons unrelated to the cost of manufactured goods. The authors find that, of the cost (labor and financial) of war production, about half can be explained by learning curves and about half by other factors. In other words, we have roughly split the difference between the Wright’s Law and the Moore’s Law explanations for production costs.
Although we should be careful about extrapolating these findings too widely, Clancy does cite some other research that offers comparable conclusions. We should conclude that, while learning-by-doing is very real, it is not as strong as naive comparisons of the cost curve and production curve would suggest.
The other issue I want to discuss today is that of declining learning rates. As a mathematician, I find the terminology a bit confusing, so I will try to clarify it as best I understand it. If you take the derivative of Y_X with respect to X in Wright’s Law formula, the derivative converges to 0 as X approaches infinity. This will be true under any reasonable model: price cannot go below zero, so it must converge to some value, and that means the rate of decline goes to zero.
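To spell out the calculus, using the formula as written above (with 0 < b < 1, so that the exponent log2(b) is negative):

$$\frac{dY}{dX} = Y_0 \log_2(b)\, X^{\log_2(b) - 1} \longrightarrow 0 \quad \text{as } X \to \infty,$$

since the exponent log2(b) - 1 is negative.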
Instead, on a log/log plot, cost is a linear function of cumulative production with a negative slope (equal to log2 b). It is that slope that Wright’s Law says will be constant, and when we talk about a declining learning rate, we mean that the slope decreases in magnitude as production accumulates.
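Here is a small synthetic illustration of the difference. The quadratic term in log/log space is an arbitrary way of making the slope flatten, chosen for simplicity; it is not the model from the papers discussed below.

```python
import numpy as np

log_x = np.linspace(0, 20, 200)  # log2 of cumulative production

# Constant learning rate: log-cost is a straight line, slope log2(0.8).
log_y_const = 10 + np.log2(0.8) * log_x

# Declining learning rate: the slope shrinks in magnitude as log_x grows.
log_y_decl = 10 + np.log2(0.8) * log_x + 0.005 * log_x ** 2

for name, log_y in [("constant", log_y_const), ("declining", log_y_decl)]:
    early = np.polyfit(log_x[:100], log_y[:100], 1)[0]
    late = np.polyfit(log_x[100:], log_y[100:], 1)[0]
    print(f"{name}: early slope {early:.3f}, late slope {late:.3f}")
```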
There are several lines of evidence for a declining learning rate. This paper by Evan Boone looks at various goods that the U.S. Air Force procures and finds that a model with a declining learning rate fits the data better than the traditional Wright’s Law model. A follow-up paper by Brandon Johnson makes a similar finding. I find these results to be plausible, though—without going into the mathematical details—one weakness is that the models don’t seem to me to have a very strong theoretical basis.
There is also the idea of a two-factor learning curve. This modifies Wright’s Law by introducing a second term: cumulative research and development spending. This gets back to the issue of decomposing a technology’s cost trajectory into a Wright effect and a Moore effect, and it implies that a strategy that pursues deployment and research simultaneously for a new technology will be more effective than a strategy that focuses on just one of them.
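A common way to write the two-factor model (notation varies across papers; the exponent labels here are mine) is

$$Y = Y_0 \, X^{-a} \, R^{-c},$$

where X is cumulative production, R is cumulative R&D spending, and a and c capture learning-by-doing and learning-by-researching respectively.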
For highly complex manufactured goods—which is the norm—it may be better to think of learning curves as composed of multiple sub-systems, each of which has its own learning rate and a cumulative deployment which may be separate from the final good. This paper considers such a model in the context of small modular reactors. In such a scenario, the initial learning rate of the final product may be high, as the product’s costs are dominated by sub-systems with high learning rates. However, as cumulative production increases, those sub-systems with higher learning rates become a smaller share of overall costs, which means that the learning rate for the final product should converge to whatever the smallest learning rate is among sub-systems. For those familiar, the phenomenon is analogous to the Baumol effect in the broader economy.
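Here is a sketch of that composition effect with two hypothetical sub-systems. All parameters are made up, and sub-system production is assumed, for simplicity, to track production of the final product.

```python
import numpy as np

doublings = np.arange(0, 30)

# Two hypothetical sub-systems: a fast learner (b = 0.75) that initially
# dominates costs, and a slow learner (b = 0.98) with a small initial share.
fast = 80.0 * 0.75 ** doublings
slow = 20.0 * 0.98 ** doublings
total = fast + slow

# Effective progress ratio of the whole product, per doubling:
effective_b = total[1:] / total[:-1]
print(round(effective_b[0], 3))   # ~0.80 early: fast learner dominates
print(round(effective_b[-1], 3))  # ~0.98 late: pinned to the slow learner
```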
I have much more to say about learning curves, and I intend to come back to this subject, but for now I will offer some tentative concluding thoughts.
First, learning curves are very real and are an important tool for understanding decarbonization pathways. However, there is a tendency to apply them naively and craft policies that assume overly optimistic learning rates. Understanding the issues of causality and declining learning rates is important for crafting more rational policies. There are other considerations, such as whether learning-by-doing is really an external benefit rather than one firms capture themselves, that I wasn’t able to get to today but are also important.
Second, from the perspective of long-term economic growth, learning is an important and under-appreciated factor. This is a major reason why looming population declines in most of the world are threats to economic growth: smaller markets diminish the potential for the scale that makes learning-by-doing a reality.
Quick Hits
Next Thursday, we will mark the 80th anniversary of D-Day, the June 6, 1944 Normandy landings that began the liberation of France in World War II. If you know any living World War II veterans, now would be the time to ask about their experiences, as they won’t be with us much longer. The National World War II Museum estimates that, of the 16 million Americans who served in World War II, about 120,000 were still alive as of 2023. Those that remain are at least in their upper 90s.
As for much less weighty anniversaries, May 27 was Dragon Quest Day, marking the release of Dragon Quest in Japan on May 27, 1986. Those games were initially localized in the United States as Dragon Warrior, as the Dragon Quest trademark was taken. The series was once my second favorite game series after Final Fantasy, and Dragon Warrior for the Nintendo Entertainment System was my first role-playing game, but over time I have lost interest in the direction that the series has gone.
A recent paper by Daron Acemoglu finds that artificial intelligence brought about a 0.66% increase in total factor productivity over the last 10 years, or 0.066% per year, and it projects an increase over the next 10 years of at most 0.53%, or 0.053% per year. While significant, these figures contrast with annual increases that are typically 2-4% per year in the United States since 2007.
The search for technosignatures of alien life has uncovered several M-dwarf stars that emit anomalously high levels of infrared radiation, which could be a sign of a Dyson swarm or Dyson sphere. These are, respectively, very large collections of satellites or shells around a star that harvest a substantial portion of stellar energy, which in turn must be re-emitted in the infrared spectrum. M-dwarf (red dwarf) stars may be inhospitable to the development of intelligent life, and so either life found a way to develop to the Dyson swarm level regardless; the civilizations that built the megastructures migrated from other star systems; or, what I think is most likely, there is an explanation for this radiation that does not involve astroengineering. This episode reminds me of Tabby’s Star, named for the astronomer Tabetha Boyajian, which also showed a suspicious light pattern. Tabby’s Star is an F-type main-sequence star, which is more like the Sun than M-dwarf stars are and may be more amenable to the development of life, but nevertheless the hypothesis of astroengineering around that star is not favored by astronomers.
I suspect the 'multi-component' structure of learning curves is the main driver of the slowing rates. Most of the media follows the learning rate of the mass-manufactured components, rather than the total installed cost, which makes the learning look faster than the realized cost improvements actually are...
The next biggest factor is outright physical limits. We can do no better than Carnot efficiency, for example, which puts a hard cap on one avenue of improvement.