Are Bank Capital Requirements Really Ten Times Higher than Before the Crisis?

A major theme in Bank of England speeches over the last three years has been the ‘Ten Times’ story: bank capital requirements are now 10 times higher than they were at or before the time of the Global Financial Crisis (GFC). Here are some examples:

  • “Capital requirements for banks are much higher … In all, new capital requirements are at least seven times the pre-crisis standards for most banks. For globally systemic banks, they are more than ten times.” (Mark Carney, 2014) [1]
  •  “ … the capital requirements of our largest banks are now ten times higher than before the crisis.” (Mark Carney, 2015) [2]
  • “Common equity requirements are seven times the pre-crisis standard for most banks. For global systemically important banks (G-SIBs), they are more than ten times higher.” (Mark Carney, 2016) [3]
  • “The largest banks are required to have as much as ten times more of the highest quality capital than before the crisis …” (Mark Carney, 2017, his emphasis) [4]

This latter claim is particularly significant because Governor Carney is referring to the largest banks in the world and was writing in his capacity as chairman of the Financial Stability Board (i.e., as the world’s most senior financial regulator) to the leaders of the G20 countries. He could hardly have chosen a more conspicuous forum in which to make his point.

At first sight, these claims are very reassuring. After all, if bank capital requirements are now ten times greater than they were before the GFC, that must mean that our banks are now much more resilient, right?

Wrong.

Let’s consider the evidence. 

The evidence for Governor Carney’s claims would appear to be the capital requirements in the following Table (Table B.2) from the Bank’s July 2016 Financial Stability Report:

Notes to Table B.2:

(a)   Expressed as a proportion of risk-weighted assets. An additional 1.5% of risk-weighted assets must be held in AT1 [Additional Tier 1 capital] as part of the Basel III Pillar 1 requirement. UK banks are also subject to Pillar 2A requirements.

(b)   See Caruana, J. (2012) ‘Building a resilient financial system’, available at www.bis.org/speeches/sp120208.pdf.

(c)   in a standard environment.

This Table indicates that the minimum Basel II core Tier 1 (CT1) capital requirement was 1 percent using Basel III definitions. The lines below show the additional requirements for the ratio of Common Equity Tier 1 (CET1) capital to Risk-Weighted Assets (RWAs), which sum to 9 to 11.5 percent depending on the settings for the systemic and countercyclical capital buffers. The systemic buffer is likely to have an impact of no more than 0.5 percent of RWAs, however.[5] As for the countercyclical buffer, the Bank of England announced on June 27th that this buffer would be raised from 0 percent to 0.5 percent. Therefore the actual value of the ‘Basel III CET1 minimum with buffers’ term at the bottom of the Table should be no more than 8 percent, so let’s call it 8 percent to be on the generous side.
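The arithmetic behind that figure can be laid out explicitly. A minimal sketch: the 4.5 percent CET1 minimum and 2.5 percent conservation buffer are the standard Basel III figures (Table B.2 itself is not reproduced here), while the other two settings are those discussed above.

```python
# CET1 requirement stack as a percentage of risk-weighted assets (RWAs).
# The 4.5% minimum and 2.5% conservation buffer are the standard Basel III
# figures; the systemic and countercyclical settings are those argued above.
cet1_minimum = 4.5               # Basel III Pillar 1 CET1 minimum
conservation_buffer = 2.5        # capital conservation buffer
systemic_buffer = 0.5            # likely impact, per the Vickers analysis
countercyclical_buffer = 0.5     # BoE setting announced on June 27th

cet1_with_buffers = (cet1_minimum + conservation_buffer
                     + systemic_buffer + countercyclical_buffer)
basel2_ct1_minimum = 1.0         # Basel II core Tier 1, Basel III definitions

print(cet1_with_buffers)                       # 8.0 (percent of RWAs)
print(cet1_with_buffers / basel2_ct1_minimum)  # 8.0 times the old minimum
```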

One can then say that CET1 Pillar 1 capital requirements involving RWAs are currently 8 times their Basel II counterparts. One can also say that the system envisages the potential for CET1 ratio capital requirements to be 11.5 times their Basel II counterparts – and even higher if one takes account of higher systemic buffers or a higher countercyclical capital buffer or the Pillar 2A requirements mentioned in Note (a) of the Table. 

At first sight, such an increase in capital requirements might appear impressive. But consider the starting base. Under Basel II, RWAs could be a hundred times bank capital. When calculating its supplementary leverage ratio buffers the Bank uses a working assumption that the ratio of RWAs to total assets is 35 percent,[6] and 35 percent is also a rough approximation of the empirical ratio of RWAs to total assets across the UK banking system. Applying this ratio, total assets might have been 100 ÷ 35% = 285.7 times capital: banks could be leveraged by a factor not far short of 300 under the old rules. Given that UK bank CET1 ratio capital requirements are currently 8 times what they were before the crisis, current requirements would still allow banks to be leveraged by a factor of 285.7 ÷ 8 = 35.7. This is a high level of leverage and high leverage was a major contributor to the severity of the crisis. 
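The leverage arithmetic above can be sketched in a few lines, using the Bank's 35 percent RWA-to-assets working assumption:

```python
# Maximum leverage (total assets / capital) implied by a CET1/RWA requirement.
rwa_to_assets = 0.35             # Bank of England working assumption

def max_leverage(cet1_requirement_pct):
    """Assets-to-capital multiple allowed by a given CET1/RWA requirement."""
    rwa_per_unit_capital = 100 / cet1_requirement_pct
    return rwa_per_unit_capital / rwa_to_assets

print(round(max_leverage(1.0), 1))  # 285.7 -- under Basel II's 1% minimum
print(round(max_leverage(8.0), 1))  # 35.7  -- under the current 8% requirement
```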

And I have not taken account of how UK banks could increase their leverage further by switching into assets with lower risk weights or by moving positions from their banking books to their trading books.

The bottom line is that a large percentage increase in capital requirements does not represent a large absolute increase in capital requirements if the base is low to start with.

And why was the base so low? Because Basel II imposed extremely low minimum capital requirements. Correctly interpreted, Governor Carney’s ’10 times’ narrative (or to use the more accurate figure, an ‘8 times’ narrative) does not imply that banks now face high capital requirements; it is, instead, a damning indictment of the inadequacy of Basel II.

One can also look at this issue another way. The capital ratios that matter are not those based on the highly unreliable RWA measure: the ratios that matter are the leverage ratios. Basel II had no minimum required leverage ratio and Basel III introduced a minimum required leverage ratio of 3 percent. But this 3 percent minimum required leverage ratio is specified with Tier 1 capital as the numerator[7] and the leverage exposure as denominator. When one converts this leverage ratio into the ratio of CET1 to total assets using Basel rules and recent data for UK banks, the minimum ratio of CET1 to total assets is about 2.4 percent, allowing for a leverage factor of over 40.
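One plausible reconstruction of that 2.4 percent figure, purely as an illustration: the 75 percent CET1 share of Tier 1 follows from the Basel minimums of 4.5 percent CET1 within 6 percent Tier 1, while the exposure-to-assets ratio of 1.07 is an assumed figure, not one taken from the Bank's data.

```python
# Converting the 3% Tier 1 leverage ratio into a CET1/total-assets ratio.
tier1_leverage_min = 3.0          # percent of leverage exposure
cet1_share_of_tier1 = 4.5 / 6.0   # = 0.75, from the Basel minimums
exposure_to_assets = 1.07         # assumed ratio for UK banks (illustrative)

cet1_to_assets = tier1_leverage_min * cet1_share_of_tier1 * exposure_to_assets
print(round(cet1_to_assets, 1))     # 2.4 (percent of total assets)
print(round(100 / cet1_to_assets))  # 42 -- a leverage factor of over 40
```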

Therefore, one can say that when it comes to the leverage ratio, the Basel III requirements are not 8 times or 10 times or even 20 times what they were: they are infinitely greater than what they were. Even so, they are still too low.

It is not for nothing that Martin Wolf has described Basel III as the mouse that did not roar.[8]

Slinging around multiples of capital ratios is great fun, but there is a serious side too. The question one must ask is: why does the Bank choose to emphasise this 10 times narrative to make its point that UK banks are now strong again, when the underlying facts on the ground – the empirical leverage ratios (see here or here) – do not support that narrative?

To illustrate, consider the following chart (Chart B.2) from the BoE’s November 2016 Financial Stability Report:

Major UK Banks’ Leverage Ratios

Sources: PRA regulatory returns, published accounts and Bank calculations.

Notes to Chart B.2:

(a)   Prior to 2012, data are based on the simple leverage ratio defined as the ratio of shareholders’ claims to total assets based on banks’ published accounts (note a discontinuity due to introduction of IFRS accounting standards in 2005, which tends to reduce leverage ratios thereafter).

(b)   Weighted by total exposures.

(c)  The Basel III leverage ratio corresponds to aggregate peer group Tier 1 capital over aggregate leverage ratio exposure. Up to 2013, Tier 1 capital includes grandfathered capital instruments and the exposure measure is based on the Basel 2010 definition. From 2014 H1, Tier 1 capital excludes grandfathered capital instruments and the exposure measure is based on the Basel 2014 definition.

This chart shows that in terms of actual capital-to-asset ratios, we are roughly back to 2002 levels and about 1.5 (not ten) times higher than in 2006–7, the eve of the crisis. And these are book-value figures. In terms of market values, the Bank of England’s own data suggest that UK banks’ capital ratios are well below what they were on the eve of the crisis.[9]

Ten times capital requirements or no, UK banks are still far from resilient. One can only hope that they will not have to go through another major stress any time soon. 

 

End Notes

* I thank Anat Admati, Tim Bush, Jim Dorn, James Ferguson, Gordon Kerr and Sir John Vickers for helpful discussions on this topic.

[1] M. Carney, “The future of financial reform,” 17 November 2014, p. 4.

[2] M. Carney, public statement made on the morning of June 24th 2015 shortly after the result of the Brexit vote was announced.

[3] M. Carney, “Redeeming an unforgiving world,” 26 February 2016, p. 8.

[4] M. Carney, Letter to G20 leaders, 3 July 2017, p. 1.  As an aside, I have a bone to pick with Governor Carney’s statement that banks “have” capital. To say that banks “have” capital (or that they “hold” or “hoard” capital, etc., other common errors of the same nature) is to suggest that capital is an asset to a bank and thereby subscribe to the ‘capital is a rainy day fund’ fallacy that has been debunked by Admati and Hellwig. Capital is a source of finance to a bank, not an asset. Banks do not “have” capital; they issue it. Dr. Carney confuses what banks invest in with how they finance themselves, and it is important to get these things right.

[5] The situation is however quite complicated and I won’t attempt to summarise it here. Instead, I would refer the reader to J. Vickers, “The Systemic Risk Buffer for UK Banks: A Response to the Bank of England’s Consultation Paper,” Journal of Financial Regulation, Volume 2, Number 2, pp. 264–282.

[6] See Bank of England (2015) “Supplement to the December 2015 Financial Stability Report: The framework of capital requirements for UK banks,” box 1.

[7] This Tier 1 capital measure is Basel III specific and is not to be confused with core Tier 1 or Tier 1 as defined under Basel II!

[8] M. Wolf, “Basel: the mouse that did not roar,” Financial Times, 10 September 2010.

[9] To illustrate, the market-value Simple Leverage Ratio, the ratio of Shareholder Equity to Total Assets, fell from 8.0 percent going into 2006 to 5.28 percent in November 2015, representing a decrease of 34 percent. See K. Dowd, "The Fiction of the 'Great Capital Rebuild'," Adam Smith Institute blog, July 6th 2017.

Nothing like making political capital out of a tragedy, eh?

Dawn Foster tells us that Grenfell Tower is all because of, well, umm, markets? Fatcher? Neoliberalism perhaps?

After the fire, as details emerged about the intricacies of how the blaze progressed, the focus zoned in on such things as cladding and the provision of sprinklers. But survivors are clear that the inferno was not just a freak accident but the result of decades of neglect and poor policymaking; an indictment of how Britain houses its poorest people.

Across the UK, many others are suffering similar effects of the housing crisis. It has never been a crisis purely of supply and demand, but of shifts in legal tenure, the erosion of housing rights, the decimation of legal aid, the mass sell-off of social housing, and a growing callousness in attitudes towards vulnerable people.

Something did quite obviously go wrong and we'd love to find out what it was. Which we will and then we'll know and it won't be done again. But this isn't in the slightest about the general structure of Britain's housing market:

For Grenfell Tower survivors, empathy was plentiful at first, as the donations and flood of volunteers rushing to west London showed. Then came the chiding calls not to politicise the tragedy, as survivors themselves stated publicly that their ordeal was political, and the snide backlash when the City of London corporation announced they had set aside 68 flats in a Kensington development for survivors.

Let's remind ourselves of the background here. This is social housing, the social rather than market asset that it is claimed we should be having. Grenfell was exactly that social asset: it was social housing, at non-market prices. When it did burn down, equal if not better housing was found a mile or so away, right in the centre of one of the world's most expensive cities. People will be on the same tenure terms as they were.

Again, there has undoubtedly been a failure: that building going up like a torch has shown that. But this isn't a failure of Britain's housing market in the slightest. Nor has the response to it been badly handled. It's actually worth remarking upon quite how well that provision of non-market housing has worked.

There is another point we should make from the evidence Ms. Foster presents us with:

The 80 people who died in the disaster and those who escaped the fire are at the extreme of the spectrum, but currently, there are almost 120,000 children homeless or living in temporary accommodation in England.

There are 120,000 children classified as homeless, not 120,000 children actually homeless. This distinction matters. For we have a system to find homes for those who don't have them. Entry into the system requires that one be declared to be homeless first. Thus the classification of homelessness is a necessary precondition of the system which deals with the problem swinging into action. This is of course Worstall's Fallacy all over again, looking at the size of the original problem, not the effectiveness (or not) of the system we have to deal with it.

The residual problem is those sleeping rough. Of whom there are some 4,000 nationally (no, not children, total) at any one time. And, as Ms. Foster tells us, this is a rather different problem:

The footballer turned property developer Gary Neville admitted to the Guardian this week how difficult it was to provide the kind of help rough sleepers need – 40 of the group Neville and Ryan Giggs allowed to stay in their vacant Manchester hotel after it was squatted were rehoused, with 20 ending up back on the streets.

As a little prompting will elicit from anyone involved with aiding rough sleepers, the prevalence of addictions and mental health problems (variously estimated at up to 75% of this population) means that the difficulty is in keeping them in accommodation, not finding it for them.

There are undoubtedly problems in Britain's housing markets. The Town and Country Planning Act comes to mind. Fire regulations equally. But we have a social housing system, one that really works rather well. We can see that from the residual of the problem after it swings into action. We have many classified as homeless, we have many at risk of it, and absent drug, alcohol and mental health issues we have just about no one who is actually homeless.

That's actually pretty good for government work.

Universities need skin in the game

Derided at the time (and by many today), the Conservative-Liberal Democrat decision to treble tuition fees is significantly underrated. As I pointed out, Jeremy Corbyn was wrong to state that £9k fees had put working-class applicants off a university education. In fact, high-tuition England has done better than tuition-free Scotland at getting disadvantaged students to apply.

But the system is far from optimal.

Applicants often make poor degree choices, let down by poor quality information on employability. Indeed, there are 23 universities where graduates actually earn less than non-grads.

But that’s not the only problem. Income contingent loans mean that higher earners pay back more than lower earners. This is what makes this system ‘progressive’. Student Loans are underwritten by the taxpayer and in four fifths of cases the taxpayer ends up writing off some debt. The IFS estimates that around 43% of the total debt goes unpaid.

And there’s a massive difference between the winners and losers. If you end up in the bottom 20% of graduate earners, it’s likely that on average you’ll never make an annual repayment of more than £500. Meanwhile, the top 20% of graduate earners can expect to pay as much as £5,000 in a single year in their 40s.
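Those figures follow from the income-contingent formula then in force for post-2012 English loans: 9 percent of income above a £21,000 threshold. A minimal sketch (the example incomes are illustrative):

```python
def annual_repayment(income, threshold=21_000, rate=0.09):
    """Income-contingent repayment: 9% of income above the 21,000 threshold
    (the post-2012 English terms in force at the time of writing)."""
    return max(0.0, income - threshold) * rate

print(round(annual_repayment(26_500)))  # 495  -- a low earner stays under 500
print(round(annual_repayment(76_500)))  # 4995 -- a high earner pays close to 5,000
print(annual_repayment(18_000))         # 0.0  -- below the threshold, nothing is due
```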

This creates a dangerous moral hazard. By reducing the risk to graduates whose degree choices fail to pay off and shrinking the reward to the graduates who chose wisely, income contingent loans create an implicit subsidy to courses that offer lower economic returns.

This is not just a problem with the student’s incentives. From the perspective of a university £9k a year from a Media Studies student who’ll graduate to earn less than £21k a year is worth just as much as £9k a year from a Physics and Engineering graduate who’ll go on to found the next Tesla. In fact, because science courses on average cost more to run than humanities courses, the Media Studies student is actually a more profitable option for the university.

IFS data shows that while average university funding is up by 25%, there’s a big divergence in where it’s going. The highest cost courses have seen only a modest 6% increase, while lower cost courses have seen a 47% increase. The Government can and does mitigate this effect with funding grants, but those grants have shrunk across the board since tuition fees were trebled.

As student numbers are no longer capped, the unintended consequence of a progressive repayment system is that universities are incentivised to push low-cost, low-return arts courses at the expense of high-cost, high-return STEM courses.

Beyond alumni donations and looking good in league tables, universities fail to capture the upside of investing in the employability of their students. This needs to change.

One suggestion from Ryan Bourne resurrects a relatively untried proposal from Milton Friedman. Rather than students taking out loans, universities would buy shares in a student’s future income. If you’re a bright student with 3 A Stars at A-Level then Cambridge might offer to provide a full education in return for a 3% stake in your future earnings. Cambridge could then tap into that income stream right away by selling it on the open market.

In some ways, this model already exists. App Academy, a 12-week intensive coding bootcamp, doesn’t charge fees but instead requires that students pay App Academy 22% of their salary for the 12 months after graduating (provided they get a job as a software engineer). This gives them a strong incentive to link up with employers, assist in the job search and most importantly, teach coding well.
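The App Academy terms translate into a simple formula. A sketch under the figures quoted above (the example salary is hypothetical):

```python
def deferred_tuition(annual_salary, share=0.22, months=12):
    """App Academy-style deferred tuition: a share of salary (22%) for a
    fixed number of months (12) after the graduate lands a software job."""
    return annual_salary * (months / 12) * share

# A hypothetical graduate hired at a $90,000 engineering salary:
print(round(deferred_tuition(90_000)))  # 19800
```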

Moving to the App Academy model aligns the incentives of graduates, taxpayers, and universities. It would disincentivise institutions from admitting students to low-return courses and incentivise investment in more expensive STEM courses provided they paid off for graduates.

Data like the IFS’s, which simply compare large blocs of people, don’t take account of pre-existing skills and abilities: perhaps some graduates would have done better if they’d never gone to university at all. This system would give students an incentive to only do degrees that enhance their earning power—since they will have to tithe to their institution.

Unfortunately, this policy may be too radical; too big of a change overnight. But discussing and studying the ideal is useful because it can reveal the direction in which policy should move in the short-term.

Here’s a more modest step.

Students should make payments direct to universities not the Student Loan Company.

Under the status quo, student loan repayments end up going to the same place regardless of whether a student went to Oxford or London Metropolitan. In effect, once you graduate, universities have no stake in your succeeding or failing.

We should change that. Rather than each graduate making repayments to the Student Loan Company, they should pay the university directly under the same repayment terms. Universities should then be allowed to sell off that future income stream for cash today.

This would make a difference for two reasons.

First, it’d create a strong market incentive to accurately estimate graduate earnings. In order to maximise revenue from selling off the rights to future repayments, the university will need to provide accurate, evidence-based estimates of graduate earnings. Similarly, financial institutions will rigorously test each university's numbers, ensuring that they’re accurate.
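A buyer of such a stream would price it by discounting the projected repayments. A minimal sketch (the repayment projections and the 5 percent discount rate are illustrative assumptions, not figures from the text):

```python
def stream_price(projected_repayments, discount_rate=0.05):
    """Present value of a projected repayment stream -- roughly what a
    university could raise by selling the rights to it today."""
    return sum(payment / (1 + discount_rate) ** year
               for year, payment in enumerate(projected_repayments, start=1))

# A course whose graduates repay well sells its stream for far more:
high_earners = [2_000] * 10   # ten years of 2,000/yr repayments
low_earners = [400] * 10
print(round(stream_price(high_earners)))  # 15443
print(round(stream_price(low_earners)))   # 3089
```

This pricing step is what forces universities to be honest about graduate earnings: inflated projections would simply be discounted by the buyer.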

Second, it would give universities skin in the game: incentivising them to invest in employability, shut down courses that don’t deliver for students, and shift resources to high-payoff STEM courses.

The 'loan' could still be written off after 30 years, but it would be the university not the government writing it off. This would amount to a cut in funding for universities, but importantly it would be a cut that hit the least useful courses and institutions hardest and incentivised expanding the courses that pay off the most. Cutting university funding would be no bad thing either. Subsidised tuition encourages universities to over-invest in new infrastructure and prevents them from making the same efficiency savings that other similar sized organisations might make.

For instance, there’s no incentive to introduce shorter courses; university terms are still based on pre-industrial revolution agricultural seasons.

There is a risk that the courses cut will be disproportionately ones that attract pupils from disadvantaged backgrounds. But we shouldn’t be focused on inputs (i.e. number of disadvantaged students attending university) but outputs (i.e. will they get a good graduate job). The courses that get cut will look good if you simply measure inputs but bad if you measure outputs. There's nothing progressive about saddling a young person with massive debts to pay for a university education that does little to nothing to improve their life chances.

Shifting the burden of funding university tuition fully onto universities and students will free up additional public money that can be used to address educational inequalities earlier on - an approach that could be more fruitful.

If we want universities to serve their students better, they need to have skin in the game.

The Coming Moral Panic About Sex Robots

Last week, The Salvation Army released a statement expressing concern that sex robots could increase demand for sex work and sex trafficking. This particular moral panic seems a little premature; according to Nature, “just four companies, all located in the United States, currently produce [extremely basic] sex robots.” But this hasn’t stopped some social conservatives and feminists uniting in opposition to the potential spread of this emerging technology.

Unsurprisingly, rigorous academic studies into the effects of sex robots are extremely hard to come by. But the battle lines have already been drawn—anyone familiar with other debates relating to the sex industry (e.g. sex work and pornography) knows that this research area is plagued by motivated reasoning, blind speculation, and emotive anecdotes. Sadly, I do not think that the coming debate on sex robots will be any different.

British opposition to sex robots is led by the Campaign Against Sex Robots, spearheaded by robotics researcher Dr. Kathleen Richardson. In the position paper that launched the campaign, Dr. Richardson wrote that:

“The arguments that sex robots will provide artificial sexual substitutes and reduce the purchase of sex by buyers is not borne out by evidence. There are numerous sexual artificial substitutes already available, RealDolls, vibrators, blow-up dolls etc., If an artificial substitute reduced the need to buy sex, there would be a reduction in prostitution but no such correlation is found.”

But if there is no correlation between the availability of artificial sex substitutes and the amount of sex purchased, then this also rules out the possibility that sex robots will increase demand for the purchase of sex! For the moment, the arguments on both sides are speculative. Richardson states that “new technology supports and contributes to the expansion of the sex industry,” citing the growth of the sex industry spurred by the expansion of the internet. To me, this seems like a very weak argument; there is an obvious, meaningful difference between the internet’s effects on human sexual commerce and sex robots’ potential effects on human sexual commerce.

My prediction is that, as with pornography and sexual violence, the substitution effect will dominate. I also predict that this substitution effect will be larger for sex buyers who aren’t as interested in the mutuality aspect of commercial sex: arguably a more problematic group of sex buyers.

Let’s say I’m wrong, and that sex robots turn out to be a complement to buying sex. Would an increase in demand for sex work brought about by sex robots necessarily be a net harm to society? The whole ‘End Demand’ approach to sex work is fundamentally flawed, and any potential harms must also be weighed against potential benefits such as using sex robots to alleviate loneliness and assist in sexual therapy.

It’s also worth noting that whilst men are almost certainly going to be the primary market for sex robots, they aren’t the only group involved. Dr. Richardson briefly highlights this in her position paper:

“But the development of sex robots is not confined to adult females, adult males are also a potential market for homosexual males.”

Women of all sexualities are also likely to comprise a non-trivial proportion of sex robot owners. In the few surveys conducted into attitudes towards sex robots, women “answered positively about half as often” as men. The idea that some women may purchase sex robots as they become more widely available is not that farfetched.

In the coming years, the debate over the legal and societal approaches we should take towards sex robots will become more prominent. That conversation must include voices that emphasize their potential positive impacts and call out evidence-free scaremongering.

Review: The Political Spectrum by Thomas Hazlett

The Political Spectrum: The Tumultuous Liberation of Wireless Technology from Herbert Hoover to the Smartphone by Prof Thomas Winslow Hazlett. Yale University Press

Prof Thomas W. Hazlett, who recently spoke at the ASI, has accomplished something remarkable with The Political Spectrum. He's written a history of electromagnetic spectrum regulation that’s entertaining, inspiring, and has massive implications for the technologies of the future like driverless cars and drone delivery.

Hazlett, who served as the Federal Communications Commission’s chief economist in the early 90s, traces the history of the electromagnetic spectrum from AM Radio to the iPhone.

He starts by busting the founding myth of spectrum regulation: that strict regulatory management was necessary to save radio from itself. Contrary to the established view, before the Federal Radio Commission (the FCC’s predecessor) existed, the radio spectrum was not in chaos with a cacophony of radio stations blasting signals that drowned out rival broadcasts. In fact, there was a burgeoning market for AM radio, with hundreds of stations in operation resolving disputes and interference with nothing more than the principle of priority-in-use and the common law. It was Herbert Hoover, one of the book’s many villains, who put an end to that, refusing to enforce property rights in order to create a justification for political control of the airwaves.

In place of a competitive, innovative market that served consumers and where government's sole role was to enforce and define property rights came the Federal Radio Commission and ‘Mother May I’. Innovators could no longer enter the market and many stations were booted off the air.

Rather than a system of tradable property rights, broadcasters instead had to get a license from the regulator and were forced to serve the ‘public interest’. Political broadcasters could get kicked off the air for offending the wrong politician, indeed the book notes that Nixon sought to use license renewals to punish the Washington Post’s parent company during the Watergate scandal.

Frequently the public’s interest was betrayed by regulators and lobbyists uniting to block innovations. The most tragic case is FM radio. Developed in the 1930s by Prof. Edwin Howard Armstrong, it delivered unprecedented sound quality. After eventually being granted permission to use the 42–50 MHz band, FM radio became a must-have gadget. But Armstrong was hamstrung by nefarious lobbying from NBC and CBS.

They advanced the absurd view that FM radio ought to be booted off its assigned frequency and relocated higher up the dial in order to prevent ‘ionospheric interference’ from sunspots. Leading radio frequency scientists rejected this proposal, citing its thin technical evidence. If anyone should have been concerned about sunspot interference, it would have been Armstrong – after all, he had a substantial fortune riding on FM radio being a better product. The FCC, however, didn’t see it that way and pushed FM up the dial.

As a result, existing FM equipment, from transmitters owned by stations to receivers owned by listeners, was made obsolete overnight. It took Armstrong two years to develop receivers for the new bands; by the time he was finished, few consumers wanted to invest in an expensive FM radio that could be made useless again.

Only when the FCC approved stereo broadcasting for FM in 1960 (26 years after Armstrong had demonstrated it was technically feasible) did FM win out, attracting audiophiles with high-fidelity listening. Tragically, Armstrong didn’t make it that far. The failure of his technology and an acrimonious lawsuit led to him walking out of his 13th-floor apartment window. Overbearing regulators and greedy lobbyists denied the world one of its greatest inventors.

Hazlett’s book is full of similar stories where innovators are blocked by the FCC’s onerous approvals process. It wasn’t until 1959, when the book’s hero, Ronald Coase, wrote a paper on the FCC, that things slowly began to change. At the time, his paper was ridiculed, and editors criticised him for failing to address the problem of externalities (his 1960 paper addressing that problem is the most cited law review paper in history).

Coase thought the solution was simple. Rather than relying on bureaucrats to approve technologies and broadcasters, simply let the market work. Coase proposed that the FCC should auction off the spectrum to the highest bidder, incentivising the spectrum to be used in the most productive way possible. Innovators would no longer have to rely on lengthy approval processes; they would simply have to purchase the rights to use spectrum on the secondary market. Mocked at the time, this idea ultimately won out: Hazlett’s book tells the story of how one British economist overhauled the way spectrum was managed, allowing consumers to benefit from new innovations as soon as they were discovered (unlike the 26-year wait for FM radio).

While many legacy uses still rely on strict regulatory oversight, today most countries have auctioned off swathes of the spectrum, enabling the rapid adoption of smartphones. It’s a true victory for free-market economics, but the fight’s not over. The public sector still hoards large sections of spectrum that it is too slow to clear, and legacy users still benefit from overgenerous allocations of spectrum.

Prof. Hazlett’s book offers lessons with clear implications for the future beyond spectrum. Brent Skorup and Melody Calkins have taken the insights of The Political Spectrum and applied them to the issue of drones and flying cars, two technologies that could revolutionise modern life. The open-access standard, under which anyone can use low-altitude airspace, will come under threat as new uses multiply.

Skorup and Calkins fear that the open-access standard will be replaced with the FCC-style regulation that Hazlett conclusively demonstrates retards innovation:

1. First movers and the politically powerful acquire de facto control of low-altitude airspace
2. Incumbents and regulators exclude and inhibit newcomers and innovators
3. Rent-seeking and resource waste become unendurable for lawmakers
4. Market-based reforms are slowly and haphazardly introduced

They suggest a Coasean alternative: auction off airspace and then allow a secondary market to develop. Parcels of airspace could be combined, split up, subleased and sold off, allowing innovators to enter and leave the market, with the best uses winning out.

Today, Skorup and Calkins’ idea might be ridiculed – but then again, so was Coase’s. It wasn’t until he was well into old age that Coase’s market in spectrum was vindicated (he received the Nobel Memorial Prize aged 81). Let’s hope that we don’t have to wait so long for a market in airspace.

If economics is just a religion then it's worth getting the tenets right

The Guardian has a long read about how economics is just a religion, doncha' know, and therefore something or other. The thing is that if you want to claim something is a religion then it's worth working out what its tenets actually are. We do not comment upon Methodism, for example: we do not know what split the Primitive and Wesleyan types, nor what prompted them to become United, and so we do not wade into such debates.

This has not stopped John Rapley of course:

Although Britain has an established church, few of us today pay it much mind. We follow an even more powerful religion, around which we have oriented our lives: economics. Think about it. Economics offers a comprehensive doctrine with a moral code promising adherents salvation in this world; an ideology so compelling that the faithful remake whole societies to conform to its demands. It has its gnostics, mystics and magicians who conjure money out of thin air, using spells such as “derivative” or “structured investment vehicle”. And, like the old religions it has displaced, it has its prophets, reformists, moralists and above all, its high priests who uphold orthodoxy in the face of heresy.

There is no moral code in economics; it is a positive endeavour, not a normative one. We can thus rather throw the concept out at the beginning. But there is also that ill-knowledge of what economics does in fact say:

Once a principle is established as orthodox, its observance is enforced in much the same way that a religious doctrine maintains its integrity: by repressing or simply eschewing heresies. In Purity and Danger, the anthropologist Mary Douglas observed the way taboos functioned to help humans impose order on a seemingly disordered, chaotic world. The premises of conventional economics haven’t functioned all that differently. Robert Lucas once noted approvingly that by the late 20th century, economics had so effectively purged itself of Keynesianism that “the audience start(ed) to whisper and giggle to one another” when anyone expressed a Keynesian idea at a seminar. Such responses served to remind practitioners of the taboos of economics: a gentle nudge to a young academic that such shibboleths might not sound so good before a tenure committee.

Which is why absolutely every government, Treasury and central bank model works on broadly New Keynesian principles. 

The data used by economists, however, is much more disputed. When, for example, Robert Lucas insisted that Eugene Fama’s efficient-markets hypothesis – which maintains that since a free market collates all available information to traders, the prices it yields can never be wrong – held true despite “a flood of criticism”, he did so with as much conviction and supporting evidence as his fellow economist Robert Shiller had mustered in rejecting the hypothesis. When the Swedish central bank had to decide who would win the 2013 Nobel prize in economics, it was torn between Shiller’s claim that markets frequently got the price wrong and Fama’s insistence that markets always got the price right. Thus it opted to split the difference and gave both men the medal – a bit of Solomonic wisdom that would have elicited howls of laughter had it been a science prize.

The EMH does not insist that market prices cannot be wrong. It says that, given the information available, prices will be right given the information available. Yes, it is tautologous. Further, Shiller's claim is that incomplete markets will not incorporate all known information, while complete ones will. Thus his insistence that a futures market allowing one to go short on housing would have tempered the US housing bubble. And yes, Shiller is a very useful extension to Fama; that's why the joint prize.

This amuses:

For example, Milton Friedman was one of the most influential economists of the late 20th century. But he had been around for decades before he got much of a hearing. He might well have remained a marginal figure had it not been that politicians such as Margaret Thatcher and Ronald Reagan were sold on his belief in the virtue of a free market. They sold that idea to the public, got elected, then remade society according to those designs. An economist who gets a following gets a pulpit. Although scientists, in contrast, might appeal to public opinion to boost their careers or attract research funds, outside of pseudo-sciences, they don’t win support for their theories in this way.

Friedman got the Nobel in 1976, rather before either of those two was elected to national office.

At which point:

The 2008 crash was no different. Five years earlier, on 4 January 2003, the Nobel laureate Robert Lucas had delivered a triumphal presidential address to the American Economic Association. Reminding his colleagues that macroeconomics had been born in the depression precisely to try to prevent another such disaster ever recurring, he declared that he and his colleagues had reached their own end of history: “Macroeconomics in this original sense has succeeded,” he instructed the conclave. “Its central problem of depression prevention has been solved.”

No sooner do we persuade ourselves that the economic priesthood has finally broken the old curse than it comes back to haunt us all: pride always goes before a fall. 

Well, yeeees, we did in fact prevent a depression (usually defined as a 20% fall in GDP) by noting that the first one was caused by the actions of the Federal Reserve. So we didn't do that again; instead we did QE and so on, our solution coming from the work of Milton Friedman and Anna Schwartz in their Monetary History of the United States, published in 1963. Which, umm, sounds like a solution to us: a problem met and solved.

There are undoubtedly religious beliefs in certain forms of economics, the magic money tree coming to mind, the labour theory of value and so on. But to be able to critique the tenets one should actually know them, something not greatly in evidence from this new book.

The EU-Japan trade deal means we can be a bit more zen about Brexit

In 2013 the European Union and Japan began talks aimed at a free trade agreement. On Monday both parties said they’d made strong progress, with a number of key obstacles on tariffs and protections resolved.

At last! After 18 rounds of talks, the political will to finalise the deal had been found and success was in sight.

Across the world headlines heralded what Japanese Prime Minister Shinzo Abe described as “the birth of the world’s largest, free, industrialised economic zone,” and what the European Union described as “the most important bilateral trade agreement ever concluded by the EU”.

A deal this big is a big deal; it’s one that the UK has been integrally involved in as part of the EU and which it has strongly pursued. Some really large barriers that have been making our lives more expensive are being brought down, and while we're party to the agreement we should celebrate that.

It’s also, crucially, a hint about what the UK can get from our own negotiations. Not because the UK wants a deal like Japan’s - it’s nowhere near the level of access that we have now. But it shows the EU is looking to promote free trade and strike bespoke deals.

There’s also a nice omission that might just work to our advantage later on.

Some of the important parts of this agreement:

•    Tariffs on more than 90% of EU exports to Japan will be removed. This means that, in the end, 97% of all goods duties will have been removed, with all other tariff lines subject to partial liberalisation (through quota growth or tariff reductions).
•    Japanese and EU made cars will now have safety and environmental standards that mean certification and testing for exported cars will not be needed. This simplifies exports massively.
•    Medical devices in Japan will be subject to the Quality Management System international standard which will speed up licensing.
•    The most advanced provisions on movement of people for business that the EU has negotiated so far. They cover all traditional categories of intra-corporate transferees, business visitors for investment purposes, contractual services suppliers and independent professionals, plus new categories of short-term business visitors and investors.
•    EU Commissioner for Jobs, Growth and Investment – Jyrki Katainen – has said that investment may be left out of the deal due to no agreement around arbitration.

This last point is really important. While I would normally complain about the lack of agreement on investment arbitration, in this case it shows flexibility on the EU’s part – as well as an understanding that their preferred long-term solution of a global multilateral court is probably off the table for the time being.

This is actually really beneficial for the UK; our arbitration courts (including the London Court of International Arbitration) are global leaders. An attempt by the EU to shoehorn continental control of arbitration into international treaties could potentially threaten London’s pre-eminent position for arbitration as we leave the EU. While the EU commits to pushing for it in future, its current failure to get it into any large deal means it’s quite possible for the UK to refuse it in any future Brexit deal too.

Barriers to trade coming down is always something to celebrate and it reminds us that our largest trading partner - the EU - is out there looking to liberalise trade. It tells us also that there's appetite among our allies like Japan to have new trade deals too. We can use this in future talks.

There’s the willpower to have deals, the wherewithal to be flexible and the ability to have bespoke agreements. The UK should be confident walking into the Brexit negotiations and beyond.

The IFS introduces us to the blindingly obvious

We're told that children in single earner families are more likely to be in poverty than those in dual earner families:

Families who rely on a father’s earnings alone are at greater risk of poverty than other households, with average incomes stagnant for the past 15 years, according to analysis by the Institute for Fiscal Studies.

The IFS said that because the father works in most single breadwinner households, those families have not benefited from the relatively large increases in women’s earnings since the mid-1990s.

That all seems blindingly obvious, doesn't it? 

No? Allow us to explain. The modal couple-household arrangement is both partners working full time; 66% of couple households have both working at least part time.

Poverty here is being measured as relative poverty: less than 60% of median household income, adjusted for household size (with possible variations such as disposable income after taxes, before or after housing costs, and so on).
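As a minimal sketch of how that measure works (with entirely made-up household incomes, and assuming the OECD-modified equivalence scale commonly used for this adjustment):

```python
from statistics import median

# OECD-modified equivalence scale: 1.0 for the first adult,
# 0.5 for each further adult, 0.3 for each child.
def equivalised(income, adults, children):
    scale = 1.0 + 0.5 * (adults - 1) + 0.3 * children
    return income / scale

# Illustrative households: (annual income, adults, children)
households = [
    (52_000, 2, 2),  # dual-earner couple, two children
    (30_000, 2, 2),  # single-earner couple, two children
    (28_000, 1, 0),  # single adult
    (60_000, 2, 0),  # dual-earner couple, no children
    (40_000, 2, 1),  # couple, one child
]

eq_incomes = [equivalised(i, a, c) for i, a, c in households]

# Relative poverty line: 60% of the median equivalised income.
threshold = 0.6 * median(eq_incomes)
in_poverty = [inc < threshold for inc in eq_incomes]
```

On these made-up numbers the single-earner family of four falls below the line while the dual-earner family of the same size does not, which is exactly the mechanical point being made here.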

The norm, therefore, is for one-and-a-bit to two earners in a household. Those with only the one earner are going to earn relatively less, and we measure poverty as relative income - who is surprised at this finding?

Note that this is what is driving this finding. The connection with fathers is just because we Brits are so traditional, where only one works it tends to be the man.

It's also worth noting that as long as we measure poverty both by household income and in relative terms there's no real cure for this. Unless we want to go back to those bad old days of fathers being given a pay rise just because, well, you know, they're fathers you see, they have to provide?

Economic possibilities for our grandchildren, video games version

Famously, John Maynard Keynes predicted in 1930 that in a few generations time people would only work 15 hours a week: productivity would have risen so much that higher living standards would be possible with less work.

He thought that people would use higher productivity (and the resultant higher pay per hour) to work much less and consume much more leisure. But that didn't quite happen: labour hours did fall, but much more slowly than he expected, despite productivity growing about as much as he thought it would. People wanted more consumer goods, as well as more services, than he thought likely.

He (and his followers, the Skidelskys) thought it was an appreciation for the higher pleasures—like contemplation and philosophy—that would eventually take up leisure time.

But it seems that it is artificial reality, in the form of ever higher quality video games, that is the first use of leisure tempting enough to really stop men working, at least according to the work of Erik Hurst & collaborators. Here's the abstract of their new paper:

Younger men, ages 21 to 30, exhibited a larger decline in work hours over the last fifteen years than older men or women. Since 2004, time-use data show that younger men distinctly shifted their leisure to video gaming and other recreational computer activities.

We show that total leisure demand is especially sensitive to innovations in leisure luxuries, that is, activities that display a disproportionate response to changes in total leisure time. We estimate that gaming/recreational computer use is distinctly a leisure luxury for younger men.

Moreover, we calculate that innovations to gaming/recreational computing since 2004 explain on the order of half the increase in leisure for younger men, and predict a decline in market hours of 1.5 to 3.0 percent, which is 38 and 79 percent of the differential decline relative to older men.

See also two Marginal Revolution posts on the phenomenon, from Tyler Cowen and Alex Tabarrok. I seem to remember a whole load of folks attempting to ridicule them for believing in this at the time, but the evidence does seem to be getting stronger and stronger.

Sorry Adonis, but there's no tuition fee cartel

On Friday, Lord Adonis had an op-ed published in The Guardian in which he performs a volte-face concerning tuition fees. Specifically, although he was responsible for increasing tuition fees from £1,000 per year to £3,000 per year, he now believes that they should be scrapped entirely on the (actually spurious) grounds that graduating students now leave university with about £50,000 of debt.

Notwithstanding the lack of a cogent argument for scrapping tuition fees, Adonis makes the highly-charged allegation that universities are running a cartel because a large number of them set fees at or close to the £9,000 maximum that is permitted. This allegation shows a shocking lack of understanding of the meaning of the term “cartel” and the jurisprudence underlying previous instances in which cartels were found to be operating.

In short, a cartel (as per Article 101 of the Treaty on the Functioning of the European Union) is defined as a group of firms that restrict or distort competition in a market by, for example (but not limited to), directly or indirectly fixing prices, limiting or sharing production / output, and so on. As such, Adonis is claiming that the universities are fixing prices when they all set their fees close to £9,000 and claims that he has asked the Competition and Markets Authority (CMA) to investigate this matter.

However, Adonis ignores the established and standard guidance for determining whether or not a cartel is actually in operation – simply taking the fact that a number of universities are pricing at a focal point is not evidence in and of itself. Instead, as per the guidance set out by the then Court of First Instance in Airtours/First Choice, three conditions are necessary for a cartel finding.

  1. The market must be transparent – i.e. the area (such as prices, volumes etc.) over which a cartel agreement is made must be able to be monitored easily and (relatively) costlessly by the members of the cartel. In the case of university pricing, this condition probably is satisfied, as a university’s fees are published on its website and therefore can be monitored by other universities. However, it is the only one of the three conditions that is satisfied.
     
  2. More importantly, the members of the cartel must be able to ”punish” any cartel member that does not adhere to the cartel agreement. This is one area where Adonis’ claim falls down – there is no punishment mechanism available for universities to punish those that deviate from a cartel agreement. To see this, suppose that there was an agreement between universities to maintain fees at £9,000. Now suppose that one university that had made this agreement instead decided to cut its fees to £4,000 – this “deviating” firm might have a strong incentive to do so since that could enable it to get more students applying to it, enabling the university to have a wider range of students from which it could select the best candidates.

    In this scenario, how would the non-deviating universities be able to punish the deviator? They couldn’t decrease their own fees because 1) those fees are set some time in advance so cutting them as a rapid response is not really feasible; and 2) that would simply decrease those universities’ own revenues anyway, without the likely prospect of recouping that loss in future, such that doing so would be cutting off their nose to spite their face. Moreover, since the impact of the punishment would arise a year later (i.e. when the next set of students were applying to university), the punishment itself would not provide a strong disincentive to prevent the deviator from cutting fees in the first place.
     
  3. Any “external” competition (i.e. options other than going to the universities in the supposed cartel) must be weak. Again, Adonis’ argument fails on this criterion too - although the vast majority of UK universities are charging fees at the upper limit, the fact is that there are multiple outside options that provide a competitive constraint: 1) private universities are increasing in size and coverage and are often a viable alternative for students; 2) prospective students can choose not to go to university, but instead attend a technical college or other institution; and 3) future students can choose to study in foreign universities and are doing so in ever greater numbers. In other words, there are external options that constrain the ability of UK universities to cartelise fees.
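The deviation incentive in the second condition can be put into numbers. Every figure below - the fees, student counts and assumed demand response - is purely hypothetical, chosen only to illustrate the logic:

```python
# Hypothetical 'cartel' fee vs a deviating fee (in pounds).
fee_cartel, fee_deviate = 9_000, 4_000

# Assumed demand response: the cheaper university attracts far more applicants.
students_cartel, students_deviate = 10_000, 25_000

revenue_cartel = fee_cartel * students_cartel      # revenue if everyone holds the line
revenue_deviate = fee_deviate * students_deviate   # revenue for the fee-cutter

# Any 'punishment' arrives a year later at the earliest,
# so the deviator banks this gain immediately.
gain_from_deviating = revenue_deviate - revenue_cartel
```

On these assumptions the deviator is better off straight away; with no timely way to make deviation unprofitable, the supposed agreement cannot be sustained, which is the point of the second condition.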

Therefore, universities in the UK do not seem to satisfy two of the three conditions required for a cartel to be present. Hence, it is highly likely that Lord Adonis’ claim that UK universities are operating a cartel is baseless and will be given short shrift by the CMA.