Switching mobile networks is easier than switching governments

Unlike lots of people on the right, I like Owen Jones. He’s good-natured, he often challenges orthodoxy on his own side, and he’s a thought-provoking writer.

Having said that, I usually disagree with what he writes on economics. His Guardian piece this week, which called for the nationalisation of the UK’s mobile network operators, was a good example. It’s tempting to dismiss it as clickbait, but it represents a train of thought that is increasing in popularity. And if nothing else it may shift the Overton Window.

Jones starts by pointing out that nationalisation of big industries is very popular among the public at large. “While our political overlords are besotted with Milton Friedman, the public seem to be lodged somewhere between John Maynard Keynes and Karl Marx.” 

A fair point. He might also have noted that the public disagrees with him about lots of other things: the obvious example is hanging, where the public is somewhere between Roger Helmer and Oswald Mosley, but there’s also immigration, which 55% of people want reduced ‘a lot’ (and another 21% want reduced ‘a little’). The Great British public thinks the benefits system is too generous by a 2-to-1 margin, and that ‘politicians need to do more to reduce the amount of money paid out in benefits’ by a 3-to-1 margin. And so on. On these issues, and presumably many others, I assume Jones thinks the public needs further persuasion.

It isn’t necessarily that the public really is bloodthirsty or xenophobic or anti-poor or quasi-Marxist; it’s that the public is extremely uninformed about most things. How could you judge whether we needed more or less immigration if you thought we had more than twice as much immigration as we actually do? How could you judge whether the railways should be nationalised or not if you did not know that passenger numbers had doubled since privatisation, after decades of decline under the state?

Jones claims that mobile phone networks are an inefficient natural monopoly, without giving any real reasons. The claim is untrue. The UK has four competing mobile networks (Vodafone, O2, Three and EE, which was formed by the merger of T-Mobile and Orange) and dozens of “mobile virtual network operators” that lease wireless spectrum from those four networks (GiffGaff and Tesco Mobile are two popular examples). None of these networks is unusually profitable, and all spend enormous amounts on marketing. Try spending a day in a city without seeing at least one advert for each company. This is not the behaviour of a monopolistic industry!

(There are a couple of other frustrating errors in the piece. For instance, a typical £32-a-month, 24-month contract can get you an iPhone worth £550, not a device worth £200 as Jones claims.)
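The arithmetic behind that correction is easy to check; a minimal sketch, using only the £32, 24-month and £550 figures from the text above:

```python
# Total paid over a typical 24-month contract, using the figures cited above.
monthly_fee = 32        # pounds per month
months = 24
handset_value = 550     # retail value of the iPhone, per the text

total_paid = monthly_fee * months   # 768 pounds over the life of the contract

# The handset accounts for roughly 70% of the total payments, so a 550-pound
# phone bundled into a 32-a-month contract is entirely plausible.
print(total_paid, round(handset_value / total_paid, 2))
```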

Yes, signal blackspots are annoying. (Take it from someone who spent his teenage life having to walk into the garden to send a text message.) And mobile networks’ customer service really does suck sometimes! But Jones is comparing reality with an ideal where resources are infinite. Since resources are not infinite, we have to have some way of deciding what imperfections are tolerable. 

For example, as annoying as blackspots are, the optimal amount of coverage is obviously less than 100%. The phone networks reckon they cover around 99% of the population, and as frustrating as it is to be in that last 1%, the marginal cost of reaching it rises dramatically. We could cover everyone at great cost, but that would leave less money to spend on other important things elsewhere. The question is one of priorities.

Ultimately, the important question that Jones does not answer (or ask) is, compared to what? Private sector firms might be irritating sometimes. Unless you can show that nationalised firms would be less irritating and better overall, that doesn’t tell us anything about what we should do. 

There are lots of examples of nationalised firms that were absolutely terrible. Tim remembers waiting three months for a landline when the GPO ran the phones; and then there is the huge drop-off in rail passenger numbers under British Rail, followed by an equally huge recovery after privatisation:

[Figure: GB rail passengers by year]

The fact that the state funded some of the scientific research that led to the iPhone doesn’t mean that we’d have better phones if we nationalised Apple. (It might be a case for state funding for scientific research that is released into the public domain, though.) As Tim says, “The State can be just as good as the market at invention, the creation of really cool new technologies. But it’s terrible compared to the market at innovation, the getting of that new technology into people’s hands so that they can do cool and interesting new things with it.”

Economies of scale exist, as Jones suggests, but so do diseconomies of scale. Firms can be too big. And when you have a single network (whether it’s privately or publicly owned), customers lose all ability to ‘exit’ a firm that is giving them a bad service, so the only recourse they have is at the ballot box. 

Which brings us back to the first problem with Jones’s piece: politics is a complicated business about which we know little. If we don’t like what we’ve got, we have to hope that a majority of other voters agrees with us – and even if we’re right, they may not be informed enough to agree with us. 

It’s a lot easier to switch mobile phone providers than it is to switch governments. Ultimately, it’s that pluralism and freedom of exit that drives improvements in markets, and tends to make governments relatively bad at doing things. For all the mobile network industry’s problems, the question is: compared to what?

A bankers’ ethics oath risks being seen as empty posturing

The suggestion put forward yesterday by the ResPublica think-tank, that we can restore consumer trust and confidence in the financial system, or prevent the next crisis, by requiring bankers to swear an oath, seems excessively naïve.

Such a pledge trivialises the ethical issues that banks and their employees face in the real world. It gives a false sense of confidence, implying that a few lines of moral platitudes will equip bankers to resist the temptations of short-term gain and rent-seeking behaviour that are present in the financial services industry.

In fairness to ResPublica’s report on “Virtuous Banking”, the bankers’ oath is just one of many otherwise quite reasonable proposals to address the moral decay that seems prevalent in some sections of the banking industry.

I don’t for a moment suggest that banking, or any other business for that matter, should not be governed by the highest moral and ethical standards. Indeed, the ResPublica report is written from an Aristotelian ‘virtue theory’ perspective that could serve as a resource for reforming the culture of the banking industry. ‘Virtue theory’ recognises that people’s needs differ, and virtue in banking would mean meeting the diverse needs of all, not just the needs of the few.

The main contribution of the “Virtuous Banking” report is to bring the concepts of morality and ethical frameworks into public discourse. Such discourse is laudable, but we should be under no illusion: changing the culture of the financial services industry will be a long process. Taking an oath will not change an individual’s moral and ethical worldview or behaviour. Ethical and moral conduct can only be reintroduced into the banking sector if the people who work in the industry hold themselves intrinsically to the highest ethical and moral standards.

Bankers operate within tight regulatory frameworks; the quickest way to drive behavioural change is therefore through regulatory interventions. However, banking is already one of the most regulated industries in existence, and regulation has not produced any sustainable change in banks’ conduct. One of the key problems with prevailing regulatory paradigms is that regulation limits managerial choice in order to reduce risk in the banking system, rather than focusing on the drivers of managerial decision-making.

Market-based regulations that do not punish excellence, but instead incentivise bankers to think seriously through the risk-return implications of their business decisions, would be good for the financial services industry and the economy as a whole. A regulatory approach that makes banks and bankers liable for their decisions and actions, through mechanisms such as bonus claw-back clauses, will do more to reduce moral hazard at the systemic level and improve individual accountability at the micro level than taking a “Hippocratic” bankers’ oath.

An unpublished letter to the LRB on high frequency trading

Lanchester, John. “Scalpers Inc.” Review of Flash Boys: Cracking the Money Code, by Michael Lewis. London Review of Books 36 no. 11 (2014): 7-9, http://www.lrb.co.uk/v36/n11/john-lanchester/scalpers-inc

Dear Sir,

It is striking that John Lanchester claims that those who believe high-frequency trading is a net benefit to finance (and by extension, society) “offer no data to support” their views. Quite apart from the fact that he presents such views as akin to climate-change denial, rather than as a perfectly respectable mainstream position in financial economics, it doesn’t seem that he has gone looking for any data himself!

In fact there is a wide literature on the costs and benefits of HFT, much of it very recent. While Lanchester (apparently following Lewis) dismisses the claim that HFT provides liquidity as essentially apologia, a 2014 paper in The Financial Review finds that “HFT continuously provides liquidity in most situations” and “resolves temporal imbalances in order flow by providing liquidity where the public supply is insufficient, and provide a valuable service during periods of market uncertainty”. [1]

And looking more broadly, a widely-cited 2013 review paper, which looks at studies that isolate and analyse the impacts of adding more HFT to markets, found that “virtually every time a market structure change results in more HFT, liquidity and market quality have improved because liquidity suppliers are better able to adjust their quotes in response to new information.” [2]

There is nary a mention of price discovery in Lanchester’s piece—yet economists consider this basically the whole point of markets. And many high-quality studies, including a 2013 European Central Bank paper [3], find that “HFTs facilitate price efficiency by trading in the direction of permanent price changes and in the opposite direction of transitory pricing errors, both on average and on the highest volatility days”.

Of course, we should all know that HFT narrows spreads. For example, a 2013 paper found that the introduction of an algorithmic-trade-limiting regulation in Canada in April 2012 drove the bid-ask spread up by 9%. [4] This, the authors say, mainly harms retail investors.

The evidence is out there, and easy to find—but not always easy to fit into the narrative of a financial thriller.

Ben Southwood
London

[1] http://student.bus.olemiss.edu/files/VanNessR/Financial%20Review/Issues/May%202014%20special%20issue/Jarnecic/HFT-LSE-liquidity-provision-2014-01-09-final.docx
[2] http://pages.stern.nyu.edu/~jhasbrou/Teaching/2014%20Winter%20Markets/Readings/HFT0324.pdf
[3] http://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp1602.pdf
[4] http://qed.econ.queensu.ca/pub/faculty/milne/322/IIROC_FeeChange_submission_KM_AP3.pdf

Is Uber worth $18bn?

James Ball, at The Guardian, thinks that Uber’s implicit $18bn valuation is “a nadir in tech insanity”. His case is that tech firms are overvalued, and that investors buy in anyway because they assume there are other “suckers” they can palm their securities off on. That is, they think the other guys are “behavioural” (falling prey to the sorts of biases detailed in behavioural economics and behavioural finance) while they themselves are rational. Ball is responsible for some very good and important work, but I think this particular piece would benefit from the application of some financial economics.

It’s always possible that prices are irrational. But because we can never test investors’ risk preferences separately from the efficient markets hypothesis (the idea that markets accurately reflect preferences and expected outcomes), it’s very hard to work out whether prices are off, or just incorporating some other factor (usually risk). This is called the joint hypothesis problem. When there are two alternative explanations, there is a reason economists put rational expectations in their models—it’s the simpler, better explanation. Finding truly suggestive evidence of irrational price bubbles is the sort of thing that wins you a Nobel Prize, not something that a casual onlooker could easily and confidently observe.

Ball might say that even if irrational pricing is rare because of the strong incentives against it in a normal market, there have certainly been episodes of it in the past. Quoting J.M. Keynes, he might say “markets can remain irrational much longer than you or I can remain liquid”. He might point to the 1999-2000 peak of what’s commonly described as the “dot com bubble”. But I urge Ball to consider a point raised in this email exchange between Ivo Welch and Eugene Fama:

How many Microsofts among Internet firms would it have taken to justify the high prices of 1999-2000?  I think there were reasonable beliefs at the time that the internet would revolutionize business and there would be many Microsoft-like success stories based on first-mover advantages in different industries.

Loughran and Ritter (2002, Why has IPO pricing changed over time) report that during 1999-2000 there are 803 IPOs with an average market cap of $1.46bn (Table 1).  576 of the IPOs are tech and internet-related (Table 2). I infer that their total market cap is about $840 billion, or about twice Microsoft’s valuation at that time.  Given expectations at that time about high tech and the business revolution to be generated by the internet, is it unreasonable that the equivalent of two Microsofts would eventually emerge from the tech and internet-related IPOs?
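Fama’s inference can be reproduced with back-of-the-envelope arithmetic; a sketch, assuming (as he does) that the tech and internet-related IPOs had the same average market cap as the overall sample:

```python
# Figures from Loughran and Ritter (2002), as quoted above.
total_ipos = 803
avg_market_cap_bn = 1.46    # average market cap per IPO, in $bn (Table 1)
tech_ipos = 576             # tech and internet-related IPOs (Table 2)

# Assume tech IPOs match the overall average market cap.
tech_total_bn = tech_ipos * avg_market_cap_bn
print(round(tech_total_bn))   # roughly 840, i.e. "about $840 billion"
```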

Has not the second wave of cyber-firm success (Facebook, Google, arguably Apple) been even more impressive than the first? It may well be only 25% or 10% likely that Uber turns out to be one of these behemoth firms, through network effects, first-mover advantages, name recognition or whatever—but even if the chance is small, the potential rewards are huge.

But Ball may point out that even if this is true, in the (putatively) 90%-likely scenario of Uber being a failure, all this capital is being wasted. It could be put into the projects he prefers: “green energy, modern manufacturing, or even staid-but-solid sectors like retail”. Even if rational expectations—the idea that outcomes do not differ systematically (i.e. predictably) from predictions—and the efficient markets hypothesis are not violated, and risk-adjusted expected (private) returns are equal across industries, it might be that the social returns from these staid-but-solid sectors are higher—after all, a lot of capital is apparently being wasted when so much goes to Uber.

This does not follow: from the perspective of society, Uber could deliver huge welfare gains. If it turns out that Uber has enough in the way of network effects to generate returns justifying its price tag (or more), then it will have created a lot of value by saving taxi-consumers serious money. If it is using fewer resources to create the same amount of goods, then it is making society better off. Since society is big and diversified, it can afford to be relatively risk-neutral (at least compared to an individual), and to take even 9-1 punts on the chance that one memorable, semi-established network might be a particularly good way of running a taxi market.
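The risk-neutral logic of a 9-1 punt can be made concrete with a toy expected-value calculation. The 10% probability and the $180bn success value below are hypothetical numbers chosen for illustration, not figures from the article:

```python
# Toy expected-value calculation for a 9-1 punt. Both inputs are hypothetical.
p_success = 0.10            # assumed chance Uber becomes a "behemoth"
success_value_bn = 180.0    # assumed value in that scenario, in $bn
failure_value_bn = 0.0      # assume the stake is lost otherwise

expected_value_bn = (p_success * success_value_bn
                     + (1 - p_success) * failure_value_bn)

# An expected value of about $18bn is consistent with the $18bn valuation,
# even though failure is by far the most likely outcome.
print(expected_value_bn)
```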

Markets do set rates: A reply to Julien Noizet

Financial analyst and blogger Julien Noizet has replied to my article on mortgage rates on his blog. It is a good piece, worth reading, but I still think I am right. Perhaps Noizet is right too, because my claim was really very modest: in aggregate, mortgage interest rates do not mechanically vary with the Bank of England’s base rate; we can show this because the spread between them and the base rate varies widely; and since we have strong independent reasons to expect that market forces largely drive rate moves, that should be our preferred explanation. The implication I was interested in was that a hike in Bank Rate wouldn’t necessarily drive effective rates up to a point that would substantially increase the cost of servicing a mortgage, and hence compress the demand for (London) housing.

Even if the first graph in Noizet’s blog post did appear to support his narrative that effective market rates follow Bank Rate moves, I’m not sure why these disaggregated numbers matter, given that the spread between Bank Rate and the overall effective rates on both new and existing mortgages varied so widely. If specific mortgage types varied closely with Bank Rate but the overall picture did not, then markets still control effective rates; they just do it via a changing composition of mortgages, not by changing the rates on particular products. The effect is the same—and it is the effect we see in the Bank’s main series for effective rates secured on dwellings. In any case the graph, to me, looks a lot like mine, despite the effect of new reporting standards: mortgage rates sit about a percentage point from the base rate until 2008, then they don’t fall nearly as far as the base rate in 2008, and they stay that way until today. If other Bank schemes, like Funding for Lending or quantitative easing, were overwhelming the market, we’d expect the spread to be lower than usual, not much higher.

His second big point, that the spread between Bank Rate and the rates banks charge couldn’t narrow any further from 2009 onwards, perplexes me. On the one hand, it is effectively an illustration of my general principle that markets set rates: rates are being determined by banks’ considerations about their bottom line, not by Bank Rate moves. On the other hand, it seems internally inconsistent. If banks make the money they need (i.e. the money to cover the fixed costs Julien mentions) on the spread between Bank Rate and mortgage rates—that is, if Bank Rate is important in determining rates, rather than market moves—then the absolute levels of the rates are irrelevant; it’s the spread that counts. But the whole point of my post was to demonstrate that the spread varies widely, and none of Julien’s evidence seems to me to contradict that claim. Indeed, Noizet’s very good posts on MMT, which stress how deposit rates are a much more important funding cost for private banks than discount rates, seem at odds with what he has written here. Supporting this story is the fact that the spread between rates on deposits (both time and sight) and mortgage rates varies much less. If we do a rough-and-ready average of time and sight deposit rates on one side and existing and new mortgage rates on the other, the spread goes no higher than 2.3 percentage points and no lower than 1.48.
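The rough-and-ready averaging described above amounts to the following; a sketch using made-up illustrative rates, not the Bank of England’s actual series:

```python
# Spread between average mortgage rates and average deposit rates, in
# percentage points. The function mirrors the rough averaging in the text.
def deposit_mortgage_spread(time_dep, sight_dep, new_mort, existing_mort):
    deposit_avg = (time_dep + sight_dep) / 2
    mortgage_avg = (new_mort + existing_mort) / 2
    return mortgage_avg - deposit_avg

# Illustrative (made-up) rates: 2.0% time deposits, 0.5% sight deposits,
# 3.4% new mortgages, 3.6% existing mortgages.
print(deposit_mortgage_spread(2.0, 0.5, 3.4, 3.6))   # about 2.25 points
```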

In general, I don’t feel I understand the mechanisms Noizet is relying on; perhaps I’m misunderstanding him, but the implications of his claims regularly seem to contradict our basic models of markets. For example, he says that a rate rise would lead banks to try to rebuild their margins and profitability. But I can’t see any reason why banks wouldn’t always be doing that. The mortgage market is fairly competitive, at least measured by the number of packages on offer and the relatively small differences between their prices. I don’t think Julien has presented any mechanism to explain why banks would suddenly want to maximise profit after a rate rise but wouldn’t beforehand—or why they’d suddenly be able to ignore their competitors but couldn’t beforehand. It’s possible there is one, but I can’t see that he’s explained it. Overall I suspect I’ve missed something crucial, so I welcome any further comments Julien has on the issue.