Voxplainer on Scott Sumner & market monetarism

I have to admit that I usually dislike Vox. The Twitter parody account Vaux News gets it about right, in my opinion: Vox can turn anything into a centre-left talking point, and from the very beginning it has traded on its supposedly neutral image to write unbelievably loaded “explainer” articles in many areas. It has also published complete nonsense.

But they have some really smart and talented authors, and one of those is Timothy B. Lee, who has just written an explainer covering market monetarism, Prof. Scott Sumner, and nominal GDP targeting. Blog readers may remember that only a few weeks ago Scott gave a barnstorming Adam Smith Lecture (see it on YouTube here). Readers may also know that I am rather obsessed with this particular issue myself.*

So I’m extremely happy to say that the article is great. Some excerpts:

Market monetarism builds on monetarism, a school of thought that emerged in the 20th century. Its most famous advocate was Nobel prize winner Milton Friedman. Market monetarists and classic monetarists agree that monetary policy is extremely powerful. Friedman famously argued that excessively tight monetary policy caused the Great Depression. Sumner makes the same argument about the Great Recession. Market monetarists have borrowed many monetarist ideas and see themselves as heirs to the monetarist tradition.

But Sumner placed a much greater emphasis than Friedman on the importance of market expectations — the “market” part of market monetarism. Friedman thought central banks should expand the money supply at a pre-determined rate and do little else. In contrast, Sumner and other market monetarists argue that the Fed should set a target for long-term growth of national output and commit to do whatever it takes to keep the economy on that trajectory. In Sumner’s view, what a central bank says about its future actions is just as important as what it does.

And:

In 2011, the concept of nominal GDP targeting attracted a wave of influential endorsements:

Michael Woodford, a widely respected monetary economist who wrote a leading monetary economics textbook, endorsed NGDP targeting at a monetary policy conference in September.

The next month, Christina Romer wrote a New York Times op-ed calling for the Fed to “begin targeting the path of nominal gross domestic product.” Romer is widely respected in the economics profession and chaired President Obama’s Council of Economic Advisors during the first two years of his administration.

Also in October, Jan Hatzius, the chief economist of Goldman Sachs, endorsed NGDP targeting. He wrote that the effectiveness of the policy “depends critically on the credibility of the Fed’s commitment” — a key part of Sumner’s argument.

But read the whole thing, as they say.

*[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16]

An unpublished letter to the LRB on high-frequency trading

Lanchester, John. “Scalpers Inc.” Review of Flash Boys: Cracking the Money Code, by Michael Lewis. London Review of Books 36, no. 11 (2014): 7–9. http://www.lrb.co.uk/v36/n11/john-lanchester/scalpers-inc

Dear Sir,

It is striking that John Lanchester claims that those who believe high-frequency trading is a net benefit to finance (and by extension, society) “offer no data to support” their views. Quite apart from the fact that he presents such views as akin to climate-change denial, rather than as a perfectly respectable mainstream position in financial economics, it doesn’t seem that he has gone looking for any data himself!

In fact there is a large literature on the costs and benefits of HFT, much of it very recent. While Lanchester (apparently following Lewis) dismisses the claim that HFT provides liquidity as essentially apologia, a 2014 paper in The Financial Review finds that “HFT continuously provides liquidity in most situations” and “resolves temporal imbalances in order flow by providing liquidity where the public supply is insufficient, and provide a valuable service during periods of market uncertainty”. [1]

And looking more broadly, a widely cited 2013 review paper, which surveys studies that isolate and analyse the impact of adding more HFT to markets, finds that “virtually every time a market structure change results in more HFT, liquidity and market quality have improved because liquidity suppliers are better able to adjust their quotes in response to new information.” [2]

There is nary a mention of price discovery in Lanchester’s piece—yet economists consider this basically the whole point of markets. And many high quality studies, including a 2013 European Central Bank paper [3], find that “HFTs facilitate price efficiency by trading in the direction of permanent price changes and in the opposite direction of transitory pricing errors, both on average and on the highest volatility days”.

It is also well established that HFT narrows spreads. For example, a 2013 paper found that the introduction of a regulation limiting algorithmic trading in Canada in April 2012 drove the bid-ask spread up by 9%. [4] This, the authors say, mainly harmed retail investors.

The evidence is out there, and easy to find—but not always easy to fit into the narrative of a financial thriller.

Ben Southwood
London

[1] http://student.bus.olemiss.edu/files/VanNessR/Financial%20Review/Issues/May%202014%20special%20issue/Jarnecic/HFT-LSE-liquidity-provision-2014-01-09-final.docx
[2] http://pages.stern.nyu.edu/~jhasbrou/Teaching/2014%20Winter%20Markets/Readings/HFT0324.pdf
[3] http://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp1602.pdf
[4] http://qed.econ.queensu.ca/pub/faculty/milne/322/IIROC_FeeChange_submission_KM_AP3.pdf

Is Uber worth $18bn?

James Ball, at The Guardian, thinks that Uber’s implicit $18bn valuation is “a nadir in tech insanity”. His case is that tech firms are overvalued, and that investors buy in anyway because they assume there will always be other “suckers” onto whom they can palm off their securities. That is, they think the other guys are “behavioural” (falling prey to the sorts of biases detailed in behavioural economics and behavioural finance) while they themselves are rational. Ball is responsible for some very good and important work, but I think this particular piece would benefit from the application of some financial economics.

It’s always possible that prices are irrational. And because we can never test investors’ risk preferences separately from the efficient markets hypothesis (the idea that markets accurately reflect preferences and expected outcomes), it’s very hard to work out whether prices are off, or just incorporating some other factor (usually risk). This is called the joint hypothesis problem. But when there are two alternatives, there is a reason economists put rational expectations in their models: it’s the simpler, better explanation. Finding truly suggestive evidence of irrational price bubbles is the sort of thing that wins you a Nobel Prize, not something that a casual onlooker could easily and confidently observe.

Ball might say that even if irrational pricing is rare because of the strong incentives against it in a normal market, there have certainly been episodes of it in the past. Quoting a line commonly attributed to J.M. Keynes, he might say that “markets can remain irrational much longer than you or I can remain liquid”. He might point to the 1999-2000 peak of what’s commonly described as the “dot-com bubble”. But I urge Ball to consider a point raised in this email exchange between Ivo Welch and Eugene Fama:

How many Microsofts among Internet firms would it have taken to justify the high prices of 1999-2000?  I think there were reasonable beliefs at the time that the internet would revolutionize business and there would be many Microsoft-like success stories based on first-mover advantages in different industries.

Loughran and Ritter (2002, Why has IPO pricing changed over time) report that during 1999-2000 there are 803 IPOs with an average market cap of $1.46bn (Table 1).  576 of the IPOs are tech and internet-related (Table 2). I infer that their total market cap is about $840 billion, or about twice Microsoft’s valuation at that time.  Given expectations at that time about high tech and the business revolution to be generated by the internet, is it unreasonable that the equivalent of two Microsofts would eventually emerge from the tech and internet-related IPOs?
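
For what it’s worth, Fama’s back-of-the-envelope arithmetic checks out. Here is a minimal sketch; the figures are those quoted above, and applying the Table 1 average market cap to the 576 tech IPOs is my reading of his inference:

```python
# Checking the arithmetic in Fama's reply. Figures are as quoted from
# Loughran and Ritter above; nothing here is new data.
n_tech_ipos = 576          # tech and internet-related IPOs, 1999-2000 (Table 2)
avg_market_cap_bn = 1.46   # average market cap per IPO, in $bn (Table 1)

total_cap_bn = n_tech_ipos * avg_market_cap_bn
print(f"Total tech IPO market cap: ~${total_cap_bn:.0f}bn")  # ~$841bn
# Fama's point: this is roughly twice Microsoft's valuation at the time,
# so the sector was priced as if about two Microsoft-scale winners
# would eventually emerge from it.
```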

Has not the second wave of cyber firm success (Facebook, Google, arguably Apple) been even more impressive than the first wave? It may well be only 25% or 10% likely that Uber turns out to be one of these behemoth firms, through network effects, first-mover advantages, name-recognition or whatever. But even if the chance is small, the potential rewards are huge.

But Ball may point out that even if this is true, in the (putatively) 90%-likely scenario of Uber being a failure, all this capital is being wasted. It could be put into the projects he prefers: “green energy, modern manufacturing, or even staid-but-solid sectors like retail”. Even if rational expectations (the idea that outcomes do not differ systematically, i.e. predictably, from predictions) and the efficient markets hypothesis are not violated, and risk-adjusted expected (private) returns are equal across industries, it might be that the social returns from these staid-but-solid sectors are higher; after all, a lot of capital is apparently being wasted when so much goes to Uber.

This does not obtain: from the perspective of society, Uber could deliver huge welfare gains. If it does turn out that Uber has enough in the way of network effects to generate returns justifying its price tag (or more), then it would have to create lots of value by saving taxi-consumers serious money. If it is using fewer resources to create the same amount of goods, then it is making society better off. Since society is big and diversified, it can afford to be relatively risk neutral (at least compared to an individual), and take even 9-to-1 punts on the chance that one memorable, semi-established network might be a particularly good way of running a taxi market.
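
To make the risk-neutral arithmetic concrete, here is a minimal sketch. The probability and payoffs are hypothetical illustrations of a 9-to-1 punt, not estimates of Uber’s actual prospects:

```python
# Hypothetical, illustrative numbers: the risk-neutral value of a
# venture-style punt on Uber becoming a behemoth network firm.
p_success = 0.10          # assumed chance Uber wins its market
value_if_success = 200.0  # assumed value in $bn in that scenario
value_if_failure = 2.0    # assumed residual value in $bn otherwise

expected_value = p_success * value_if_success + (1 - p_success) * value_if_failure
print(f"Expected value: ${expected_value:.1f}bn")  # $21.8bn, above the $18bn tag
# A big, diversified society holds many such bets at once, so it can be
# roughly indifferent to the risk on any single one of them.
```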

The eurozone is in dire need of nominal income targeting

It may well be that, in the US and UK, nominal GDP is growing in line with long-term market expectations.* It may well be that, though we will not bring aggregate demand back to its pre-recession trend, most of the big costs of that choice have already been paid. And so it may be that my pet policy, nominal income/GDP targeting, is only a small improvement over the current framework here in the UK or in the US. But there is one place that direly needs my medicine.

As a whole, the Eurozone is currently seeing very low inflation, but plenty of periphery countries are already suffering from deflation. And this is not the Good Deflation of productivity improvements (identifiable because it comes alongside real output growth) but the Bad Deflation of demand dislocation. The European Central Bank could deal with a lot of these problems simply by adopting a nominal GDP target.

When it comes to macroeconomics, the best analysis we really have comes from complicated econometric models on the one side and highly stylised theoretical models on the other. Both are useful, and both can tell us something, but they rely on suspending a substantial amount of disbelief and making a lot of simplifying assumptions. You lose a lot of people on the way to a detailed theoretical argument, while the empirical evidence we have is insufficient to conclusively answer the sort of questions I’m posing.

In general, I think that very complex models help us make sense of detailed specifics, but that “workhorse” basic theoretical models can essentially tell us what’s going on here. Unemployment is a real variable, not one directly controlled by a central bank, and a bad thing for the central bank to target. But in the absence of major changes in exogenous productivity, labour regulation, cultural norms around labour, migration and so on, there is a pretty strong relationship between aggregate demand and unemployment. Demand dislocation is almost always the reason for short-run employment fluctuations.
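
The textbook way to state that relationship is an Okun-style rule, in which the change in unemployment moves inversely with the shortfall of demand growth from trend. This is a stylised illustration, not a formula from any particular paper:

$$ \Delta u \approx -\beta \,(g - g^{*}) $$

where $g$ is aggregate demand growth, $g^{*}$ its trend rate, and $\beta > 0$ an empirical coefficient (output-based estimates for the US are typically in the region of 0.3-0.5).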

Unemployment rose everywhere in 2008-9. But post-crisis it nudged down only marginally in the Eurozone, whereas in the UK and US it soon began to fall steadily back toward its pre-crisis rate. In the meantime the Eurozone rate has risen to 12%. This is not at all surprising, given the almost complete flattening off of aggregate demand in the Eurozone; it means a constantly widening gap with the pre-recession trend (something like 20% below it now).
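
To see how that gap compounds, here is a minimal sketch with assumed round numbers (a roughly 4.5% pre-crisis nominal trend and near-flat actual demand; the true Eurozone figures will differ):

```python
# Minimal sketch: how a level gap widens when nominal GDP flatlines
# while the pre-recession trend keeps compounding. Growth rates are
# assumed round figures for illustration, not actual Eurozone data.
trend_growth = 0.045   # assumed pre-2008 nominal demand trend
actual_growth = 0.005  # assumed near-flat post-crisis growth

ngdp_trend = ngdp_actual = 100.0
for year in range(2009, 2015):
    ngdp_trend *= 1 + trend_growth
    ngdp_actual *= 1 + actual_growth
    gap = 1 - ngdp_actual / ngdp_trend
    print(f"{year}: {gap:.1%} below trend")
# By 2014 the shortfall is roughly 20%, and under a growth-rate target
# (as opposed to a level target) it is never made up.
```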

Although intuitively we’d expect expectations to steadily adjust to the new likely schedule, three factors mean this takes a while: firstly the ECB is very unclear about what it is going to do (and perhaps unsure itself), secondly some plans are set over long horizons, and thirdly the lacklustre central-bank response to the 2007-8 financial crisis is unprecedented in the post-war period.

1. We have a huge literature on the costs of policy uncertainty—the variance of expected outcomes has an effect on firms’ willingness to hire, invest, produce, independent of the mean expected outcome.

2. Many firms invest over long horizons. It may have become clear at some point in 2011, when the ECB raised interest rates despite the ongoing stagnation and weak recovery, that the macro planners, in their wisdom, were aiming for a lower overall growth path and perhaps a lower overall growth rate in nominal variables. And so, after 2011, firm plans started to adjust to this new reality. But many plans will have been predicated on an entirely different 2009, 2010, 2011, 2012, 2013, 2014, and so on. And as mentioned before, the gulf between what was expected for the mid-2010s back in 2007 and what actually happened is still widening.

3. Finally, the period 2008-2010 is unprecedented and will have slowed down firm adjustment substantially. As mentioned above, even if firms set plans with a fairly short-term horizon (a few years), they wouldn’t have been able to adjust to the new normal in 2008, 2009 and 2010 unless they really expected the ECB not only not to return to the trend level, but not even to return to the trend rate!

All three of these issues are convincingly resolved by nominal income targeting. It is very certain: indeed, the best version would have some sort of very-hard-to-stop computer doing it. It promises to return to the trend level, making up any shortfall. And it is very stable over long horizons.
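
As a sketch of what that very-hard-to-stop computer might actually run, consider the following. The function names, target path, and response coefficient are hypothetical illustrations, not any central bank’s actual rule:

```python
# Hypothetical sketch of a mechanical NGDP level-targeting rule.
# Nothing here is the ECB's framework; the parameters are assumptions.

def ngdp_target_path(base_ngdp: float, years_since_base: int,
                     trend_growth: float = 0.045) -> float:
    """The committed target path: base-year NGDP compounded at trend."""
    return base_ngdp * (1 + trend_growth) ** years_since_base

def policy_stance(ngdp_now: float, target_now: float,
                  response_coeff: float = 2.0) -> float:
    """Ease in proportion to the percentage gap below the path (and
    tighten symmetrically above it). Because the target is a level,
    past shortfalls are never forgiven: bygones are not bygones."""
    gap = (target_now - ngdp_now) / target_now
    return response_coeff * gap

# Example: five years on, nominal GDP sits 15% below the committed path.
target = ngdp_target_path(100.0, 5)
print(policy_stance(target * 0.85, target))  # ~0.3, i.e. ease aggressively
```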

Recent evidence reinforces the view, implicit in our models, that (unconventional) monetary policy is highly effective at the zero lower bound, even through the real interest rate channel(!). All the ECB needs to do is announce a nominal income target.

*This reminds me: isn’t it about time we had an NGDP futures market so we could make claims here with any kind of confidence?