Most central banks around the world have a price stability mandate, and since the international monetary system regained its footing on a fiat basis after the inflations of the 1970s, that has mostly been understood to mean a low inflation target. Over the past couple of decades, though, a growing number of economists have instead suggested an NGDP (nominal GDP) target. In George Selgin’s classic formulation, just as we expect the prices of particular goods to fall when that industry becomes more productive, the price level should track changes in aggregate productivity, because this minimizes the total number of prices that have to change.
What I want to argue here is that not only is the price level the wrong target for monetary policy, it’s also a needlessly fuzzy target compared to feasible alternatives like NGDP.
Since Covid, we’ve all become familiar with the two ways firms can deal with rising nominal costs: the usual way – raising the price on the same container of yogurt – or “shrinkflation” – reducing the quantity of yogurt at a given price.
The same is true in reverse, when nominal costs fall. Firms can cut the price of a given item. Or they can improve the item.
If it’s not clear that product improvements are just shrinkflation in reverse, think of the “quantity” of something, not as the number of units that get bought, but as the satisfaction you get from the services it provides. I paid more for my current OnePlus phone than I did for my first smartphone, a Nexus 5, back in 2013. But while they both might be thought of as “one” phone, the OnePlus provides much higher quantities of things I actually care about: communication services, entertainment services, emergency services, and so on. Even though the price of a phone has risen since 2013, the quantity of services packaged inside it has plausibly risen even more, meaning that the price of these things I care about has fallen, not risen, even in nominal terms.
A price index, from which we measure inflation, should track the price over time of meeting these needs, not the price of discrete goods, which may or may not be comparable over time in terms of the needs they meet. So any inflation index that looks at the latter instead of the former will severely overstate inflation.
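To make the phone example concrete, here’s a quick back-of-the-envelope sketch. All the numbers are invented for illustration, including the assumption that the newer phone delivers 2.5× the services of the old one – but the arithmetic shows how the sticker price of “a phone” can rise while the price per unit of the services it bundles falls.

```python
# Hypothetical illustration: nominal phone prices rise, but the price per
# unit of the services the phone delivers falls. All numbers are invented.

nexus_5 = {"price": 349.0, "service_units": 100.0}   # 2013 baseline
oneplus = {"price": 499.0, "service_units": 250.0}   # today; 2.5x services assumed

# Change in the price of "a phone" as a discrete good
nominal_change = oneplus["price"] / nexus_5["price"] - 1

# Change in the price per unit of the services inside the phone
service_price_change = (oneplus["price"] / oneplus["service_units"]) / (
    nexus_5["price"] / nexus_5["service_units"]) - 1

print(f"Sticker-price inflation: {nominal_change:+.1%}")        # → +43.0%
print(f"Service-price inflation: {service_price_change:+.1%}")  # → -42.8%
```

An index that tracks the discrete good records the +43%; an index that tracked the services would record the −43%.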
But the argument here is broader than just a productivity norm. Think about what you’re looking for when buying the latest summer fashions. It’s not just fabric to cover your body. It’s stylishness services. And last year’s fashions provide a lower quantity of those services this year than they did last year, despite being physically the exact same item. Productivity is just a special case of the point that similar goods might package different bundles of services over time.
And this makes it tricky to decompose price changes of particular goods into quality changes (that is, changes in the quantity of the services it provides), versus actual changes in the price of those services.
Given that we can only observe things like “the price of phones” and “the price of shirts”, and not things like “the price of communication services” and “the price of stylishness services”, the Consumer Price Index Handbook suggests a few different ways of dealing with this problem:

- Direct comparison: treat the replacement good as equivalent to the old one and ignore the quality difference entirely.
- Overlap pricing: use the relative prices of the old and new goods in a period when both are on sale, implicitly valuing the quality difference at the market’s price difference.
- Imputation: assume the missing good’s price moved like the prices of similar goods that remained on the shelf.
- Explicit quality adjustment: estimate the value of the changed characteristics directly, for example with a hedonic regression of price on observable features.
Actual price indices turn out to be very sensitive to the particular methods used here. Index and aggregation theory are, on the whole, well-developed and rigorous in thinking through how to interpret actual data in terms of subjective meaning. I’m on record defending imputed rental equivalence, one of the more misunderstood and controversial aspects of price index calculations. But the fact is, there is simply no satisfying way to account for quality differences over time in a price index.
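To see why even the most explicit technique – hedonic regression – leaves so much room for judgment, here’s a minimal time-dummy sketch on invented data: regress log price on an observable characteristic plus a period dummy, and read the dummy’s coefficient as the quality-adjusted (“pure”) price change. Everything here is an assumption – the data, the choice of storage as the lone quality characteristic, the linear functional form – and the estimate moves with each of those choices, which is the point.

```python
import numpy as np

# Time-dummy hedonic regression sketch on invented phone data.
# Columns of X: constant, storage in GB (proxy for quality),
# period dummy (1 = year two).
X = np.array([
    [1.0,  16, 0],
    [1.0,  32, 0],
    [1.0,  64, 0],
    [1.0,  64, 1],
    [1.0, 128, 1],
    [1.0, 256, 1],
])
log_price = np.log(np.array([300, 350, 420, 430, 500, 610.0]))

# OLS: log_price = a + b*storage + c*period_dummy
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# The period effect, holding the measured characteristic fixed,
# is the "pure" price change; the rest is attributed to quality.
pure_inflation = np.exp(beta[2]) - 1
print(f"Quality-adjusted price change: {pure_inflation:+.1%}")
```

Swap storage for a different characteristic, or log-transform it, and the split between “quality” and “inflation” shifts – with nothing in the data to adjudicate.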
While most statistical agencies are savvy enough not to ignore quality change entirely, in practice, each of these methods is employed conservatively enough that we can be confident reported inflation is systematically overstated. One might even think of a 2% inflation target as, implicitly, just compensating for the systematic mismeasurement of inflation.
But can we do better?
George Selgin, Scott Sumner, David Beckworth, and many others have made convincing cases that NGDP targeting is better for financial and macroeconomic stability than inflation targeting. But beyond that, it also has the advantage of sidestepping the need to decompose the price changes of goods into changes in the quantity of the services they provide versus changes in the price of those services themselves.
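A toy calculation (with hypothetical numbers) makes the asymmetry clear: NGDP growth is observable from nominal spending alone, while splitting it into real growth and inflation requires a deflator – which is exactly where quality adjustment bites.

```python
# Hypothetical numbers: NGDP growth is observed directly from nominal
# spending, while the real/inflation split depends on a deflator.

ngdp_growth = 0.05          # total nominal spending grew 5% (directly observable)
measured_inflation = 0.03   # deflator-based, sensitive to quality-adjustment choices

implied_real_growth = (1 + ngdp_growth) / (1 + measured_inflation) - 1
print(f"Implied real growth: {implied_real_growth:.2%}")   # → 1.94%

# If measured inflation overstates true inflation by one percentage point
# (an assumed bias), implied real growth shifts by about the same amount --
# but the NGDP number itself is untouched by the decomposition.
true_real_growth = (1 + ngdp_growth) / (1 + (measured_inflation - 0.01)) - 1
print(f"Real growth, corrected deflator: {true_real_growth:.2%}")  # → 2.94%
```

The 5% NGDP figure is the same under either deflator; only the interpretation of it moves.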
No doubt there is still a place for price indices, imperfect as they are, in important policy questions like keeping the purchasing power of benefit payments roughly stable, or in macroeconomic questions like growth accounting (how much of NGDP growth represents real growth versus inflation?). But for monetary policy, relying on indicators that correspond more directly to something economically meaningful – indicators like nominal GDP, which simply tally up nominal spending, rather than price levels, which demand complicated hedonic adjustments to be economically meaningful – has a number of benefits in addition to financial stability:
- An NGDP target improves accountability. Fuzzy as the price level is, the success of monetary policy cannot be evaluated solely on the behavior of the price level, even ex post. A single, clearer benchmark reduces the scope for excuses for monetary policy failure (and, by the same token, makes it clearer when monetary policy is not at fault).
This argument doesn’t uniquely suggest NGDP as a target. Indeed, the money supply might work just as well, provided we measure it correctly. But it does suggest that, in an economy where tastes and technology change from year to year, inflation in particular is much too fuzzy to be an appropriate target for monetary policy.
Thomas L Hutcheson
Jan 09, 2025 at 18:04

“the price level should track changes in aggregate productivity, because this minimizes the total number of prices that have to change.”
What?! Why would aggregate productivity affect the number/importance of changes in relative prices?
“So any inflation index that looks at the latter (quantity) instead of the former (utility), will severely overstate inflation.”
Price indices do that as well as you can be sure they do not.
“George Selgin, Scott Sumner, David Beckworth, and many others have made convincing cases that NGDP targeting is better for financial and macroeconomic stability than inflation targeting”
They have made claims for the superiority of NGDPLT over FAIT, but I have not seen a head to head analysis of how the two criteria compare in the four cases of supply/demand, positive/negative shock.
“An NGDP target improves accountability. Fuzzy as the price level is, the success of monetary policy cannot be evaluated solely on the behavior of the price level, even ex post. A single, clearer benchmark reduces the scope for excuses for monetary policy failure (and, by the same token, makes it clearer when monetary policy is not at fault).”
This exactitude is achieved by definition. There is nothing “fuzzy” about the PCE as measured as a target.
If we did adopt an NGDPLT regime, would the NGDP targeted be one that in accordance with Selgin’s “Less than Zero” paper aimed for the implicit inflation target to be zero or less than zero? If not, what should the implicit target be?
John
Jan 12, 2025 at 8:56

At this point I believe that Central Banks all refuse to hit targets when it is politically incorrect. Wrong inflation measures were not the reason countries didn’t loosen enough in the 2008 crisis, nor respond quickly to rising inflation in 2020. NGDP targeting would work better indeed but will require scenario responses that are not intuitively appealing, and hence lead to failure.
Cameron Harwick
Jan 12, 2025 at 17:45

The 2008 response was plausibly caused by delayed inflation measures (they were worried about inflation right in the middle of the worst deflation in decades), and the fact that the fastest ones have an upward bias probably made it worse. I agree with you on 2020, but in both cases, at least removing plausible excuses (“Guess it’s the supply chain, what can ya do ¯\_(ツ)_/¯”) would be a step in the right direction.