Regional inequality: also a housing supply problem

Over the past few years compelling evidence has emerged suggesting that housing policy errors are behind slow productivity growth, wealth inequality, and the cost of living crisis. But a raft of new work suggests housing policy errors might also be a key explanation for rising regional inequality in the West, and more specifically the rise of England's South East and the relative decline of the Midlands and the North.

A speculative and controversial new working paper from Greg Clark and his collaborator Neil Cummins looks far back to determine why the North, the site of the UK's industrial revolution, has declined relative to much of the country, especially London. They look at a genealogy of 78,000 people and find that people with distinctively Northern surnames actually have not lost out relative to those with distinctively Southern surnames.


But many of the best and brightest have moved to London, and this has accelerated in recent years, even into the late 2000s, just as the productivity gap has opened up. Clark and Cummins link these two phenomena: being unable to keep its smartest people has hampered the success of Northern commerce and industry. Large fiscal transfers make relatively little long-term difference if they don't attract people. And one-size-fits-all policies (like national pay bargaining) that might be thought to help poorer areas actually worsen the problem, incentivising people to queue for the far more generous pay that the public sector offers in poorer areas.


What Clark's work lacks is an explanation. Why has the migration South been skill-biased? Why has it sped up so much recently? A recent literature suggests that restrictive housing supply, which raises rents (and house prices), deters low-skilled movers much more than high-skilled movers, raising the average talent in the host city and cutting it in the sending city. This compounds itself, because the migration in turn affects how desirable each place is to live in, especially for the high-skilled.


The data comes from the USA, but the situation is the same. Between 1880 and 1980 per capita incomes across US states converged steadily and significantly. If you heard that things were better somewhere else, you upped sticks and moved. It's the point of America! But convergence fell from nearly two percent per year to under half of that in the last two decades. And a new NBER working paper points the finger directly at falling migration.

The mechanism we propose for explaining the decline in income convergence can be understood through an example. Through most of the twentieth century, both janitors and lawyers earned considerably more in the tri-state New York area (NY, NJ, CT) than their colleagues in the Deep South (AL, AR, GA, MS, SC). This was true in both nominal terms, and after adjusting for differences in housing prices. Migration responded to these differences, and this labor reallocation reduced income gaps over time.

Today, though nominal premiums to being in the New York area are large for these two occupations, the high costs of housing in the New York area have changed this calculus. Lawyers continue to earn much more in the New York area in both nominal terms and net of housing costs, but janitors now earn less in the New York area after subtracting housing costs than they do in the Deep South.

This sharp difference arises in part because for lawyers in the New York area, housing costs are equal to 21% of their income, while housing costs are equal to 52% of income for New York area janitors. While it may still be worth it for lawyers to move to New York, high housing prices offset the nominal wage gains for janitors.
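The calculus the paper describes can be sketched with illustrative figures. The wage numbers below are hypothetical (only the two housing-cost shares, 21% and 52%, come from the passage above, and the Southern housing share is an assumption):

```python
# Illustrative sketch of the skill-biased migration calculus.
# The wage figures and the 30% Southern housing share are hypothetical;
# only the 21%/52% housing-cost shares come from the quoted paper.

def net_of_housing(nominal_wage, housing_share):
    """Income left over after housing, given housing's share of income."""
    return nominal_wage * (1 - housing_share)

# Hypothetical nominal wages: New York area vs Deep South
lawyer_ny, lawyer_south = 150_000, 100_000
janitor_ny, janitor_south = 33_000, 25_000

# Gain from moving to New York, net of housing costs
lawyer_gain = net_of_housing(lawyer_ny, 0.21) - net_of_housing(lawyer_south, 0.30)
janitor_gain = net_of_housing(janitor_ny, 0.52) - net_of_housing(janitor_south, 0.30)

print(lawyer_gain)   # positive: still worth the lawyer moving
print(janitor_gain)  # negative: housing wipes out the janitor's nominal premium
```

With numbers in these ranges the lawyer's move remains profitable while the janitor's nominal premium is more than consumed by housing, which is the selection mechanism the paper proposes.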

This effect only appears in places that regulate land use and housing supply tightly. Another paper, by Andrii Parkhomenko, puts 23% of the rise in wage dispersion down to housing issues. In the forty years from 1940 to 1980, both regional and overall income inequality fell, with migration explaining a huge chunk of this. The opposite has happened more recently in both the USA and the UK, as the incentive and ability to move have crashed.

The world of 2017 has a lot of big problems. Clearly some of them (dealing with new technology, North Korea, and terrorism) are things that reforming housing has little to do with. But many of the most important issues today are almost entirely driven by housing problems. At Berlin or Tokyo rents per square metre my life would be completely transformed. I could go on an extra holiday a year, save up for a house, put more into my pension, and go out for dinner more – all at the same time.

If this flow were driven by inevitable socio-economic factors then maybe the infamous paper would be right: we should just abandon declining places and move their residents. But it looks like regional decline is a policy choice, just as the productivity puzzle, wealth inequality, and high rents are policy choices. We absolutely can do something about them.

Margins and monopoly

Last week, a new working paper appeared purporting to provide evidence that overall market power exhibited by firms in the US has increased over the past 30-40 years. This was then picked up by a few blogs and media outlets. In short, the paper claims that there has been an increase in overall profit margins (or mark-ups) since 1980 and that, therefore, overall competition between firms has decreased and antitrust enforcement is not working.

The initial basis for the paper’s argument is that under 'perfect competition', firms are supposed to charge a price equal to their cost. In standard economic theory, 'perfect competition' is a rather idealised scenario in which there are many buyers and sellers, each of which is a price-taker (i.e. no individual buyer or seller has any influence over the price of the product). Under these, and a few other conditions, in a 'perfectly competitive' market the price of a unit ends up being equal to the cost of producing that unit – i.e. each producer in such a market makes zero economic profit (the 'economic' nature of the profit, as compared to accounting profit, is a crucial distinction). The paper then compares this to the standard economic theory of monopoly pricing, in which one firm is the sole producer of a product and can therefore charge a price above cost – i.e. the monopolist obtains a positive profit margin on its sales. The paper then claims that the fact that its data indicate an increase in margins over time means that the US economy has (in the aggregate) moved away from the 'perfectly competitive' scenario and towards the monopolistic scenario, thereby implying a reduction in the overall level of competition in the US economy.

Unfortunately the paper fails to take a number of factors into account. For example, and at a rather basic level that the paper’s authors should really be getting right, the “costs” in the theoretical perfectly competitive market do not coincide with the measures of cost calculated in companies’ accounts. In particular, economic costs include an 'opportunity cost' of using the resources for the next best option – i.e. economic costs include an additional element, beyond the balance-sheet cost of using or purchasing the input, that is not usually picked up in accounting measures of cost. Under the perfectly competitive model, although there is no difference between the price of a product and the economic cost of producing it, there is likely to be a difference between that price and an accounting measure of cost – in other words, even under the perfectly competitive model, individual firms are likely to make some positive accounting profit.

Despite this, the paper goes ahead and calculates margins (and draws inferences from them) using accounting costs – specifically, the accounts of publicly-traded firms in the US over the period 1950-2014. In other words, the paper automatically fails to measure the true margin as relevant to economic theory. Hence, any attempt to link an increase in the margins in this paper to the competitive landscape as described by economic theory is flawed.

Even if the observation that margins have increased were valid (i.e. the margin were calculated appropriately, including the economic cost of production), that would still be insufficient to support the paper’s claim that overall competition has decreased. In essence, by making such a claim based on the path of margins, the paper is claiming that the entirety (or, at least, the vast majority) of the increase in margins was due to a decrease in competition, thereby ignoring any other factors that could have increased margins over time. Although the paper looks at one other factor that could explain the change in margins (changes in the average size of firms), it ignores factors such as changes in the types and nature of industries over time (e.g. some industries have much higher up-front costs and lower marginal costs, and if those types of industries grew over time, then that could explain the increase in the average margin without any change in competition levels).

On a more technical note, margins can also be related to price elasticities of demand via the “Lerner Condition” – this is a mathematical relationship stating that a firm’s margin is inversely proportional to its own price elasticity of demand. Obviously, different industries can have different demand elasticities regardless of the level of competition in each industry and, as such, margins can differ due to that reason as well. This is particularly relevant if, as seems likely, the composition of the economy (in terms of which industries are most prevalent) has changed over time.
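The Lerner relationship is easy to sketch numerically. Under the standard monopoly first-order condition, the margin (P − MC)/P equals 1/|ε|; the prices and costs below are illustrative, not drawn from the paper:

```python
# Sketch of the Lerner Condition: a profit-maximising firm's margin
# (P - MC) / P equals 1 / |own-price elasticity of demand|.
# All numbers are illustrative.

def lerner_margin(price, marginal_cost):
    """The Lerner index: (P - MC) / P."""
    return (price - marginal_cost) / price

def implied_elasticity(price, marginal_cost):
    """Inverting the Lerner Condition: |e| = 1 / margin."""
    return 1 / lerner_margin(price, marginal_cost)

# Two industries with different demand elasticities:
# relatively inelastic demand (|e| = 2) supports a 50% margin ...
assert abs(lerner_margin(price=10, marginal_cost=5) - 0.5) < 1e-9
# ... while elastic demand (|e| = 5) supports only a 20% margin.
assert abs(lerner_margin(price=10, marginal_cost=8) - 0.2) < 1e-9

# So a shift in the economy's industry mix toward inelastic-demand
# sectors raises the average margin with no change in competition.
```

This is the composition point in the paragraph above: average margins can rise purely because which industries dominate the sample has changed.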

Worse still, the paper’s result of margins increasing over time is likely to be affected by a “survivorship bias”. Specifically, as the paper tracks firms over time, clearly the more successful firms (the ones that survive and earn higher profits) will remain in business, while the less successful firms (the ones that go bust due to making lower profits) will exit the market. Consider a stylised example: suppose at the start, an industry consists of six firms of equal size in terms of revenues, but five firms each make a margin of 30%, and the sixth firm makes a margin of 1%. At the start, the average margin would be about 25%. Now suppose that the owner of the firm obtaining a margin of just 1% decides that they can do better in another industry, and so shuts down – the average margin would then increase to 30% despite there being no real decrease in the level of competition in the industry (as the five remaining firms would still compete against each other).
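That stylised arithmetic is easy to check directly (a sketch of the selection effect, nothing more):

```python
# Replicating the stylised survivorship example: six equal-sized firms,
# five with a 30% margin and one with a 1% margin.

margins = [0.30] * 5 + [0.01]
avg_before = sum(margins) / len(margins)      # roughly 25% before any exit

# The 1%-margin firm exits; only the survivors remain in the sample.
survivors = [m for m in margins if m > 0.01]
avg_after = sum(survivors) / len(survivors)   # 30% after the exit

print(round(avg_before, 3), round(avg_after, 3))
```

The measured average margin rises by around five percentage points even though no surviving firm changed its behaviour at all.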

Hence, over time, one would expect the sample of firms over which the margin has been calculated to contain mainly successful firms and to lose the less successful firms, thereby resulting in an increase in average margin over time. The paper does not seem to have tried to account for this. (Note, too, that firms going bust is a sign of healthy competition – a more efficient firm is able to outcompete a less efficient firm such that the less efficient one stops trading.)

Overall, therefore, although the paper claims that 1) there has been an increase in margins over time; and 2) this implies that industries in the US have become more monopolistic over time, those claims do not stand up to scrutiny. Indeed, the paper’s approach to demonstrating such claims is flawed at the most basic level.



No wonder The Guardian is a little odd on economics, tax and business

Phillip Inman is the economics editor of the Observer and an economics writer for The Guardian. From his latest piece we can get an inkling of why those newspapers can be quite so odd when discussing economics, business and tax.

He is right in part, of course: there are indeed some 1,000 or more various tax breaks out there, many of them somewhere between silly and risible. We can, and probably should, pare many of them back. So far so good.

He then suggests that government could thereby raise lots more tax revenue. We disagree there: we think the revenue from removing those special privileges should be used to lower tax rates more generally. But that's a difference of opinion. It is this next bit which perhaps explains the general oddity:

Companies would drop plans to buy the equipment they need without tax relief. Likewise, private landlords would let their flats go unmaintained if the cost were not tax-deductible.

Private landlords are taxed as if they are a business on the useful grounds that it is a business. And businesses are taxed upon their net income: the revenues from being in business minus the costs incurred in generating that revenue. There is no other logical manner of taxing business.

Where the confusion comes in is that there are indeed various "tax reliefs" here. But they are rules not stating that you may deduct your costs, as you must be able to do. Instead, they're rules about how quickly you may deduct your costs. That is, you may not deduct them when you've spent the money but rather as what you've spent it upon starts to wear out, depreciation rules and all that.

If the taxation of business simply worked on a cash basis then we wouldn't have those reliefs and so on at all. There wouldn't even be any difference in the amount of tax paid, only in when it is paid.
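The timing point can be sketched with a toy example (all figures hypothetical: a flat 20% tax rate, four years of equal profits, one equipment purchase):

```python
# Sketch of the point above: depreciation rules change *when* tax is
# paid, not *how much*. All figures are hypothetical.

RATE = 0.20          # flat tax rate
profits = [2000] * 4 # profit before the equipment purchase, per year
capex = 1000         # equipment bought in year one

# Cash basis: deduct the full cost in the year the money is spent
cash_taxable = [profits[0] - capex] + profits[1:]
# Depreciation rules: deduct a quarter of the cost in each of four years
dep_taxable = [p - capex / 4 for p in profits]

cash_tax = [t * RATE for t in cash_taxable]
dep_tax = [t * RATE for t in dep_taxable]

print(sum(cash_tax), sum(dep_tax))  # identical totals: 1400.0 1400.0
print(cash_tax[0], dep_tax[0])      # but timing differs: 200.0 350.0
```

Total tax over the life of the asset is identical either way; the "relief" only determines how early the deduction arrives.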

As we say, no wonder we see economic, business and tax howlers in those newspapers if this is the supposed adult in the building, the one who is meant to know these things among the assembled snowflakes and arts graduates.

Poland Spring and the perils of the regulatory state

Some things do indeed need regulating – others can be regulated but should not be. The difficulty, of course, is in deciding which must be and which should not be. A general rule of thumb comes from game theory.

We humans are pretty good at dealing with things like the prisoners' dilemma if they come in games of multiple iterations. The ultimatum game, for example – the explanation of why people will punish "unfair" distributions – arises from the manner in which life itself is indeed a series of repeated interactions. Perhaps these days with different people, but in the environment that created humans it was with the same small group over the years. That is, we train people to be fair by punishing them when they are not – a process that only works across those multiple iterations.

From this we can extract our rules about regulation. Where the decision to purchase, say, is something done multiple times with many choices, we can leave regulation to the market. Flavours of toothpaste, for example: we do not need to define what these may be made of, as consumer pressure will regulate matters just fine. Something which is done once or twice in a lifetime – the purchase of a pension, or a mortgage – will probably require more oversight.

At which point the claims against Poland Spring water:

Eleven consumers filed a class action lawsuit this week against Nestlé Waters North America, Inc., alleging the company's Poland Spring Bottled Water is "a colossal fraud."


The group argues Poland Spring water has been "a colossal fraud perpetuated against American consumers" since it was first bottled in 1993. They claim that "not one drop" of the water complies with the Food and Drug Administration's definition of what constitutes spring water, and instead is ground water.


No, we don't know either. But we do have the regulation defining what spring water is, the one that must be met:

The name of water derived from an underground formation from which water flows naturally to the surface of the earth may be “spring water.” Spring water shall be collected only at the spring or through a bore hole tapping the underground formation feeding the spring. There shall be a natural force causing the water to flow to the surface through a natural orifice. The location of the spring shall be identified. Spring water collected with the use of an external force shall be from the same underground stratum as the spring, as shown by a measurable hydraulic connection using a hydrogeologically valid method between the bore hole and the natural spring, and shall have all the physical properties, before treatment, and be of the same composition and quality, as the water that flows naturally to the surface of the earth. If spring water is collected with the use of an external force, water must continue to flow naturally to the surface of the earth through the spring's natural orifice. Plants shall demonstrate, on request, to appropriate regulatory officials, using a hydrogeologically valid method, that an appropriate hydraulic connection exists between the natural orifice of the spring and the bore hole.

We would assume that, since these are the regulations, Nestle meets them – although obviously, given the suit, perhaps not. Our point, though, is that this isn't something that requires such regulation. Bottled water is a many-times purchase, something that can thus be left to consumer pressure to regulate.

It's water. In a bottle. Want some?

That would seem to cover it for us.

No, we simply do not believe this scare story, sorry Polly

More normally Polly Toynbee tells us that we should just put up with the fact that the NHS won't treat everything, won't buy every possible medicine and cure, on the grounds that every health care system does ration and that, yes, rationing by price is the sensible way to do this. Those treatments which gain more quality life years per pound should be funded over those which gain fewer; that's how we maximise the effect of the scarce resources available for health care.

Sadly, Polly seems not to have the courage of her convictions as she presents this scare story to us:

Little by little services vanish. Prof Azeem Majeed, head of primary care and public health at Imperial College and a Lambeth GP, has just blown the whistle in the British Medical Journal on the latest withdrawal of a service: many clinical commissioning groups (CCGs), including his own, are banning GPs from prescribing anything that can be bought over the counter. Bristol, Lincolnshire, Dudley, Telford and Essex are among many others issuing the same edict.

At first glance it makes sense not to prescribe what most people can get for themselves, until you consider poorer patients who can’t afford the 22 drugs now banned for prescribing.

Well, yes, it does make sense for the NHS not to try to provide those things which are easily available elsewhere. As with that Scottish nonsense of a Government Tampon Distribution Service, it's a grossly expensive method of doing something which can be done very much more cheaply in another manner.

And now consider what is being complained about:

Majeed says “Low-income families often can’t afford ibuprofen, or gluten-free products for coeliac disease sufferers. A single mother on low pay with two children can’t afford the £10 it would cost for nit treatment.” 

We don't know about nit treatment but gluten free? As the BMJ itself has pointed out:

On the other hand, James Cave, a general practitioner, argues that “it’s ludicrous for the NHS to be treating a food product as a drug and to require GPs and pharmacists to behave as grocers.” He says the “complex rules” imposed by the NHS governing what can be prescribed and how often are stressful for people with coeliac disease and their GPs. “It’s a time-consuming rigmarole and, for the NHS, a very expensive one,” he argues. “The eight basic gluten-free staples advised for people with coeliac disease are all cheaper from a supermarket than the NHS price,” he explains. “This is a scandal.”

The NHS pays up to £6.73 for 500g of pasta, yet 500g of gluten free pasta will cost £1.20 at a supermarket. Additionally, there is a dispensing fee which is charged on top of all prescriptions. “If we stopped prescribing gluten-free products tomorrow GPs would shout for joy and the NHS would stop being ripped off,” he says.

NHS distribution of such things is a method of wasting scarce resources, nothing more. And as for the ibuprofen: Boots Online will sell you 16 200mg tablets of the stuff for 35p. Yes, 35 pennies, less than the cost of a packet of crisps these days. This is not something which needs the might of the state to distribute in an efficient manner.

Sorry, we just don't believe you, Polly, we just don't believe you.

A different way of thinking about Net Neutrality

"Net Neutrality" is the idea that internet service providers (ISPs) should not be able to prioritise some traffic over others even if they are willing to pay more. For example, Netflix cannot be charged more for its highly-demanded content than, say, BBC4. Or, looked at another way, people who watch a lot of Netflix cannot be charged more for the additional network congestion that creates than people who watch one programme a month.

ISPs can and do charge for total data downloaded – Net Neutrality refers to the bandwidth a particular provider and consumer use. With Net Neutrality, ISPs would not be allowed to create special 'fast lanes' for content like Netflix that users or Netflix itself could pay more for, with cheaper 'slow lanes' for content that is less bandwidth- and speed-sensitive (like this humble website).  

Many large internet content providers are pro-Net Neutrality, especially the ones that provide high-volume content. Many users are too, with websites like Save the Internet claiming that "The internet without Net Neutrality isn’t really the internet". 

I find this all quite odd – in my experience, absent the histrionics about 'saving the internet', most normal people are surprised that anyone objects to the idea of charging heavy users more.

Net Neutrality might best be understood as a redistribution from light users to heavy users. An enjoyable new paper from Keith N. Hylton attempts to explain the mechanics, and potential side-effects, of Net Neutrality by way of analogy – a toll bridge used by commuter traffic and heavy lorries. 

If the lorries add more congestion and do more damage, it would be efficient to charge them more so that they bear those costs. But what might happen if we banned that kind of discrimination?

Charging cars and trucks different prices would permit the bridge owner to internalize to truck owners the additional costs imposed by the trucks. This, in turn, would discourage the trucks from excessive use – for example, from imposing a marginal cost of $1 on the bridge owner and other users when the marginal benefit to the truck owner from the particular use is only $.50. A charge that varied with the intensity of the use would encourage truck owners to consider the congestion costs and the miles of wear and tear imposed in each relevant time period. The higher charge would also induce some truck owners to avoid the bridge in favor of another route. Over time, charges might encourage technological innovation toward trucks that carry the same freight while imposing lower congestion and depreciation costs.

Charging separate prices allows the bridge owner to reduce congestion and depreciation costs, and pass those cost savings on to consumers in the form of lower general prices (for an equivalent unit of service) for use of the bridge, which, in turn, would increase the total consumption of the services offered by the bridge.

Admittedly, in some cases the bridge owner might choose not to charge differential prices. Perhaps the differences in service costs are minor, and the administrative costs of differential pricing exceed the efficiency gains. Alternatively, perhaps trucks provide the greatest source of demand for new bridge capacity. Foresighted bridge owners would therefore be reluctant to tax a major source of industrial capacity growth. In these cases, the bridge owner may choose not to impose differential pricing even if completely free to do so.

The bridge analogy seems to apply straightforwardly to the net neutrality problem. Net neutrality is equivalent to prohibiting the bridge owner from using differential pricing, and generates similar costs. Some providers of internet content, such as Netflix, impose extraordinary congestion costs as a result of the internal subsidy from consumers of other internet services.

Hence, permitting the network owner to price differentially can and probably would enhance consumer welfare. To the extent that heavy use of the service has a depreciation effect (electrical components suffer wear and tear from use), similar costs are imposed.
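Hylton's bridge logic can be sketched as a toy model. All the numbers below are hypothetical except the $1-cost, $0.50-benefit truck taken from the quoted passage; the point is that a uniform ("neutral") toll lets inefficient crossings through, while a cost-based toll screens them out:

```python
# Toy version of the toll-bridge analogy. A crossing is socially
# worthwhile only if the user's benefit exceeds the congestion and
# wear cost that user imposes. All figures are hypothetical.

CAR_COST, TRUCK_COST = 0.10, 1.00  # cost imposed per crossing

def crossings(users, toll):
    """Users cross whenever their private benefit covers the toll."""
    return [u for u in users if u["benefit"] >= toll(u)]

users = [
    {"kind": "car",   "benefit": 0.50},
    {"kind": "truck", "benefit": 0.50},  # the $0.50-benefit truck above
    {"kind": "truck", "benefit": 2.00},
]

# Uniform ("neutral") toll: everyone pays the same 0.40
uniform = crossings(users, toll=lambda u: 0.40)

# Differential toll: each user pays the cost they actually impose
differential = crossings(
    users, toll=lambda u: TRUCK_COST if u["kind"] == "truck" else CAR_COST
)

# Under the uniform toll the low-benefit truck crosses (0.50 >= 0.40)
# despite imposing a cost of 1.00. Differential pricing screens it out,
# while the car and the high-benefit truck still cross.
```

Every crossing that survives differential pricing generates more benefit than cost; the one it removes is exactly the $0.50-benefit, $1-cost crossing the quoted passage describes.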

The whole paper is very readable and enlightening, and you can read it here.

At some point we're going to stop this delusion – why not now?

As Charles Mackay famously pointed out, we humans are prone to sweeps of delusions and madnesses, especially as a crowd. This is often taken merely as a warning about financial markets and investment schemes, but that is to take too narrow a view. Society can be swept along on such currents in other directions too; the same human fallibility affects other fields.

Such as this:

Supermarkets, restaurants and takeaways will be asked to shrink thousands of products or find other ways to cut their calorie content as part of a Government crackdown on junk foods.

Pizzas, ready meals, crisps and burgers are being targeted by health officials in a national plan to combat obesity.

Manufacturers will be set sweeping targets in a bid to reduce the daily calorie intake of millions of consumers, and tackle Britain’s growing weight problem.

The specific recommendations on calories have yet to be drawn up, but are likely to be modelled on existing targets agreed by manufacturers to cut sugar in cakes, biscuits and chocolate by 20 per cent by 2020.

Health officials said they will now work with the food industry to agree plans to tackle “excess calorie consumption” in a host of savoury foods - especially those regularly consumed by children.

What started out as a complaint about sugar in soft drinks morphed into sugar more generally and now they're coming after savouries.

As Chris Snowdon at the IEA has conclusively proved, and as the general records show, we do not eat more calories now than our forefathers did; quite the contrary, we eat fewer. But we expend fewer still, and that is the cause of the weight gains.

At which point we've people trying to violate Hayek's maxim by micromanaging the food supply. Yea, even unto the size and composition of a slice of pizza. This never will work, as there can never be enough information at the centre to enable the planning. And this is before we even dream of that other thing about humans: that they will change their behaviour in the face of changed incentives. You know, have two instead of one of those smaller slices.

This is all another of Mackay's extraordinary delusions and, like all madnesses of crowds, it will come to an end. Why don't we start that end right now? Put Public Health England back in their box and stop this nonsense of trying to plan the food supply of a nation.

The Soviets never managed it however hard they tried for 70 years; it's not going to work now, is it?

So, what should we do in the absence of knowledge?

If we don't have the information – not a scoobie of the knowledge required – to make a decision, then what is it that we should do? Sensible folk would probably say we should go and find out before we decide:

The rise of the UK’s nascent shale industry is "overhyped" and 55 million years too late, according to new research of the UK’s geology.

A team of scientists has warned that the UK’s most promising shale gas reservoirs have been warped by tectonic shifts millions of years ago which could thwart efforts to tap the gas reserves trapped within layers of shale.

Professor John Underhill, a chief scientist at Heriot-Watt University, said the debate over whether or not to develop domestic gas sources could prove redundant because Britain’s shale layers are “unlikely” to be an economic source of gas.

OK, excellent. There's a scientific prediction. What is it that we do when using the scientific method? We attempt to design experiments to disprove the assertions being made. If they survive such attempts at disproof then we upgrade assertions and speculations into something quite possibly true. That is, we attempt to go and find out. 

So, what should be the reaction to this assertion?

Quentin Fisher, a professor of petroleum geoengineering at the University of Leeds, said more work was needed as the disadvantages pointed out in the seismic imaging could be balanced by other factors with an advantage for shale extraction.

“Prof Underhill is quite correct to highlight the great uncertainties that exist regarding the likely productivity of shale in the UK and is correct that the geology in the UK tends to be structurally more complex than in the US. Many of us involved in this debate have regularly highlighted the large uncertainties that exist,” Fisher said.

“Although geological complexity and late tilting may be detrimental to shale gas prospects in the UK, there are other factors that may be more favourable, such as having thicker shale sequences.”

He said the only way to find out was through testing. “The bottom line is that the only way to truly assess the viability is to drill wells, and we need to get on with that.”

Well, yes, quite so. We've now got duelling theories and the only way we can decide between them is to go drill. So, go drill we should.

We all know how Professor Underhill's speculations are going to be used, of course. The anti-frackers will be shouting that there just ain't any gas there, so instead let's continue with the cucumber storage of moonlight scheme. When the correct response is as above: if there's gas there then we're copacetic (we, not the anti-frackers); if there isn't, well, then at least we'll know. So let's go find out.

There is a similarity here with a point made about climate change: the greater the uncertainty about how bad the effects will be, the more careful we've got to be about it happening. Certainly true, but the same logic applies here. The greater the uncertainty about the shale gas contents of Britain, the more the answer is drill, baby, drill.

How much do refugees cost the taxpayer?

The supposed fiscal burden of refugees (how much they cost the state) is often touted as a reason to rein in refugee resettlement programs. This doesn’t seem to be the case for adult refugees in the United States, according to a new paper released in June by the National Bureau of Economic Research. It shows that adult refugees aged 18-45—the majority of the researchers’ sample—make a net fiscal contribution over their first 20 years in the U.S.

The authors argue that current literature examining social and economic outcomes for refugees “tends to concern very specific populations, uses very small samples, relies on data from a small number of countries with high refugee totals, or focuses on very short-term outcomes.” But this study was different. It tracked a group representative of refugees in general and was based on an extremely large, diverse sample. The NBER Digest explains:

They separated refugees from other immigrants using Department of State data, and created a sample of 20,000 refugees who entered the country in 1990-2014. Their sample represents a third of refugees who arrived during the period.

The initial fiscal impact of refugees was (unsurprisingly) negative due to resettlement costs, low human capital, and high welfare use. However, this remained the case for only the first eight years after arrival in the country.

Using the NBER’s TAXSIM model, the study estimates that “refugees pay $21,000 more in taxes than they receive in benefits over their first 20 years in the U.S.” This may well be a low estimate of refugees’ positive net fiscal impact:

...we assumed that refugees paid the same amount in sales taxes as they did in state income tax. Data from the Quarterly Summary of State and Local Tax Revenues, between quarter 1 of 2010 and quarter 4 of 2014, indicates that revenues from state income tax and sales tax have been essentially the same over this period, with only a 2% aggregate difference. This most likely understates the amount of sales tax paid by refugees, as it is a regressive tax.

The authors also found that many child refugees enjoy positive educational outcomes, although older teenage refugees tended to fare worse:

Among young adults, we show that refugees that enter the U.S. before age 14 graduate high school and enter college at the same rate as natives. Refugees that enter as older teenagers have lower attainment with much of the difference attributable to language barriers and because many in this group are not accompanied by a parent to the U.S.

What does this new evidence mean for the UK’s approach to refugee resettlement? Firstly, it shows the importance of conducting more research into the net fiscal impact of refugees arriving in the UK; data on this topic is remarkably hard to find. The closest thing we have is estimates of the net fiscal impact of general immigration flows, and these estimates tend to be static rather than employing the NBER study’s dynamic approach.

Some evidence from Australia does suggest a negative fiscal impact of refugee immigration; although refugees became net contributors after 10-15 years, they were net drains on public finances over the course of a full 20-year period. However, it’s vital to view refugees’ fiscal impact in comparison to that of natives; if a country is running a budget deficit, the average native will also have a negative net fiscal impact that may be similar in magnitude to the average refugee.

Any discussion of fiscal impacts must also include potential for positive effects on natives not captured by narrow measures of fiscal impact. My colleague Sam Bowman has previously referenced an innovative paper on Denmark’s experience with refugees:

Mette Foged and Giovanni Peri looked at refugee influxes from Yugoslavia, Somalia, Iraq and Afghanistan to Denmark between 1985 and 1998.

These refugees were distributed evenly across the country’s municipalities without any regard to labour market conditions. This counts as an ‘exogenous shock’, just as a new influx of refugees to the UK would.

Forty to fifty percent of these immigrants had only secondary school education or lower and “were in large part concentrated in manual-intensive occupations”. By allowing for a deeper division of labour, the “refugee-country immigrants spurred significant occupational mobility and increased specialisation into complex jobs, using more intensively analytical and communication skills and less intensively manual skills.” That meant that native workers who might otherwise have done low-skilled jobs were able to move into more specialised, productive, highly-paid work.

These considerations aside, there are various external factors that could hamper the ability of refugees to make a positive contribution to public finances. Compared to the United States, many European countries have notoriously inflexible labour markets and generous welfare states, posing a dilemma for progressive supporters of immigration. As IMF analysts have put it, a negative fiscal impact could also partly reflect “the existence of legal obstacles preventing refugees from starting to work quickly upon arrival.” There are sensible ways to maximize the benefits of refugee influxes, such as ‘keyhole solutions’ and private refugee sponsorship schemes.

Another reason not to believe the Bank of England’s stress tests

This posting is the third in a series on the 2016 Bank of England stress tests. A fuller report, “No Stress III: the Flaws in the Bank of England’s 2016 Stress Tests”, will be published later in the year by the Adam Smith Institute. 

The previous posting is here.

The Bank of England repeatedly reassures us that its stress tests demonstrate the resilience of the UK banking system.

Well, let’s put the stress tests to a stress test.

We have the performance measure, the leverage ratio at the peak of the stress scenario,[1] and we have the pass standard. A bank passes the stress test if its leverage ratio at the peak of the stress is at least as high as the pass standard, and it fails the test if the leverage ratio at the peak of the stress falls short of the pass standard.
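The pass/fail rule is purely mechanical, and can be sketched as follows (the bank figures in the examples are hypothetical illustrations, not the Bank of England's actual results):

```python
# A minimal sketch of the stress-test pass/fail rule described above.
# All figures are in percentage points.

def passes_stress_test(peak_leverage_ratio: float, pass_standard: float) -> bool:
    """A bank passes if its leverage ratio at the peak of the stress
    is at least as high as the pass standard; otherwise it fails."""
    return peak_leverage_ratio >= pass_standard

# Hypothetical banks judged against the 3 percent pass standard:
print(passes_stress_test(3.4, 3.0))  # True: 3.4% clears the 3% standard
print(passes_stress_test(2.8, 3.0))  # False: falls short of the standard
```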

Let’s consider the five biggest banks: Barclays, HSBC, Lloyds, RBS and Standard Chartered.

In its 2016 stress tests, the BoE used the ratio of Tier 1 capital to leverage exposure as its leverage ratio. The Bank refers to this leverage ratio as the ‘Tier 1 leverage ratio’. The leverage exposure is a measure of the amount at risk and will be of a similar order of magnitude to, and for UK banks will typically be a little smaller than, total assets.

Across the big five, the average Tier 1 leverage ratio at the peak of the stress was 3.95 percent.

The pass standard used in the test was based on Basel III rules and was 3 percent.

By this test, the UK banking system looks to be in reasonable shape: only RBS failed to meet the 3 percent pass standard.

It would, however, be premature to get the champagne out just yet.

On July 8th this year I wrote to Governor Carney about the stress tests and one question I put to him was “How does the Bank justify the 3% Tier 1 minimum required leverage ratio?”

On August 3rd the Bank’s Executive Director for Financial Stability Strategy and Risk, Alex Brazier, wrote back to me with the following answer:

Our minimum leverage requirement for the major UK banks is now 3.25% of assets excluding central bank reserves. … But this is a minimum. On top of that the systemic and countercyclical leverage ratio buffers will, once phased in, add around 0.75% to the average leverage requirement of the largest UK banks.[2] Furthermore, to pass stress tests, firms typically need to hold a buffer of around 1 percentage point on top of this. (My italics)

I am grateful to Mr. Brazier for the clarification, which I interpret as an authoritative statement that the largest UK banks will typically face a minimum required leverage ratio of around 5 percent once the new buffers are phased in.
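The arithmetic behind that "around 5 percent" figure is simply the sum of the components in Mr. Brazier's letter:

```python
# Components of the effective minimum leverage requirement,
# per Mr. Brazier's letter (all in percentage points).
minimum_requirement = 3.25  # minimum, excluding central bank reserves
systemic_buffers    = 0.75  # systemic + countercyclical buffers, once phased in
stress_test_buffer  = 1.00  # typical buffer needed on top to pass stress tests

effective_minimum = minimum_requirement + systemic_buffers + stress_test_buffer
print(effective_minimum)  # 5.0: the "around 5 percent" effective requirement
```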

I am however puzzled why the Bank did not use this higher minimum required leverage ratio as the pass standard in its stress tests. After all, what is the point of the Bank using a 3 percent pass standard in the stress tests whilst simultaneously arguing that the actual minimum required leverage ratio is, or will be, considerably higher than 3 percent? The reason this is a problem is that it opens up the incongruous possibility that a bank might be deemed to pass the stress test whilst simultaneously failing to meet the minimum required leverage ratio.

I am even more puzzled when Mr. Brazier writes that the banks need to meet these higher standards in order to pass the stress tests. Whatever is one to say when the Bank of England official in charge of the stress tests maintains that to pass the stress tests the banks must meet a higher pass standard than the pass standard used in the stress tests?

So the question then arises: how would UK banks have performed in the stress test had the BoE used a minimum required leverage ratio of around 5 percent as its pass standard, instead of the 3 percent pass standard that it did use?

Recall that across the five big banks, the average Tier 1 leverage ratio at the peak of the stress was 3.95 percent. Since 3.95 percent is nowhere close to 5 percent, it would appear that, taken as a group, the big five UK banks would have failed the stress test.[3]
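Using only the group average reported above, the contrast between the two pass standards is stark:

```python
# The big-five average peak leverage ratio against the two standards
# discussed above (percentage points).
average_peak = 3.95  # Tier 1 leverage ratio at the peak of the stress

print(average_peak >= 3.0)  # True: the group clears the 3% standard used in the test
print(average_peak >= 5.0)  # False: the group fails a ~5% effective-minimum standard
```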

The “incongruous possibility” mentioned earlier would appear to be a reality: taken as a group, the big five banks passed the stress tests even though they did not meet minimum regulatory requirements during the projected stress.

In fact, it would appear that they passed the stress tests even though they did not meet the pass standard required to, er, pass the stress tests. 

End Notes

[1] The Bank’s headline capital ratio, the ratio of CET1 capital to Risk-Weighted Assets, is not considered here because the denominator is deeply flawed to the point of being discredited. See, e.g., K. Dowd, Math Gone Mad: Regulatory Risk Modeling by the Federal Reserve, Cato Policy Analysis No. 754, Cato Institute, Washington D.C., September 2015 or No Stress II: The Flaws in the Bank of England’s Stress Testing Programme, Adam Smith Institute, London, August 3rd 2016. 

[2] At this point, Mr. Brazier inserted a flag to the following footnote: “See the Governor’s letter to Andrew Tyrie of 5 April 2016 for a fuller explanation of the impact of buffers on leverage requirement available here:”.

[3] When I replace the leverage exposure measure in the denominator of the leverage ratio with total assets, I estimate that the average leverage ratio across the big 5 banks at the peak of the stress would have been in the region of 3.7 percent, a comfortable fail.