bestofasi

The innocence principle


Like freedom of speech, the presumption of innocence before proof of guilt is something that almost everyone agrees is important in principle, but is occasionally reluctant to apply in practice. In recent weeks we have witnessed some examples of this reluctance that, to me, seem chilling. Eric Garner was an obese African-American man who was killed by police officers holding him in a chokehold while they arrested him for illegally selling individual cigarettes in New York City. His last words are here.

Virtually everyone who has seen the video agrees that the officers acted with an extreme amount of force against a man who was not fighting back, although he was resisting arrest (passively – that is, in a way that would not harm the officers).

A Grand Jury found that the police officers who killed Eric Garner did not act unlawfully. I defer to the Grand Jury on this, but assuming they are correct this suggests that the scope for lawful killing by police officers is extremely broad. As law professor Glenn Reynolds (and others) has noted, killings by police are treated much more sympathetically by juries than killings by civilians.

Michael Brown was an African-American teenager who was shot and killed by a police officer during an arrest after he (seemingly) robbed a convenience store in Ferguson, Missouri. There is still some disagreement about what happened here. The initial reports suggested that the officer executed Brown as he fled or begged for his life, but the subsequent Grand Jury investigation seems quite conclusive that Brown assaulted the police officer. The Grand Jury’s conclusions prompted looting by people in Ferguson.

If Brown’s shooting was unjust, the Garner lesson applies. But if the narrative found by the Grand Jury is correct then the protests, lootings and slandering of the police officer involved are wrong. In that case, it is the media’s presumption of guilt on the part of the police officer involved (even after the Grand Jury verdict) that has led to significant destruction and violence. People suspended the innocence principle to advance a political point, and the results have been bleak.

Jackie is a student at the University of Virginia who was made nationally known by a Rolling Stone article which alleged that she had been gang-raped by a group of fraternity men. Last week Rolling Stone retracted the story after a number of facts given by Jackie in her story proved to be false.

The aftermath of the Rolling Stone story has been extremely disturbing, with very prominent people proudly dispensing with the innocence principle. The Washington Post ran a piece titled “No matter what Jackie said, we should automatically believe rape claims” (this was later changed to “generally” believe them). The Guardian’s Jessica Valenti wrote that “I choose to believe Jackie. I lose nothing by doing so, even if I’m later proven wrong”, and that “the current frenzy to prove Jackie’s story false – whether because the horror of a violent gang rape is too much to face or because disbelief is the misogynist status quo – will do incredible damage to all rape victims.” [my emphasis]

Has Valenti considered that someone else may lose something if we choose to believe an accusation that is untrue? Or that we may have reasons other than misogyny or incredulity to want to know if a criminal accusation is false?

Sexual assault is very common, but this does not mean that false accusations do not occur. An estimated 1.5% to 7.5% of accusations may be false. Staggeringly, a 2012 study that used DNA testing of old physical evidence exonerated between 8% and 15% of convicted rapists.

I know why Valenti is eager to believe Jackie: because not believing a genuine story is horrendous for the victim and makes other rape victims less likely to come forward, and hence makes rape an easier crime to commit. But the inverse is also true: believing a false story is horrendous for the wrongly-accused and makes other false accusations more likely. (The Rolling Stone story did not name individuals, but guilt-by-implication can still be enormously harmful.)

In all of these cases, people who would normally say that the presumption of innocence before proof of guilt is a good thing have assumed the opposite. The rule might work in general, they may say, but this case is an exception. Police need to be able to subdue people resisting arrest. The death of an 18-year-old must be unjust. Rape is too serious an allegation to question.

Like the principle of free speech, the innocence principle only produces good results if we apply it rigidly and in cases where doing so may feel deeply unsettling.

The innocence principle matters because people who seem guilty may in fact be innocent. This is why mechanisms like jury trials exist – like the ‘thick’ version of free speech that I argued for recently, they are a mechanism for sorting the truth from lies.

Hayek speculated that liberal institutions like these evolved over time, because the societies that lacked them eventually fell behind the ones that upheld them. Politically and culturally, we may be witnessing an erosion of these institutions now. That would be a catastrophe. But it is not too late to change course.

Why do people oppose immigration?

My Buzzfeed post on immigration generated a bit of traffic yesterday and a bit of disagreement, too. The most common objection to our approach to immigration is that it's one-dimensional—OK, we might be right about the economics, but c'mon, who really cares? It's culture that matters. This point was made to me a few times yesterday and there's definitely something to it. My first response is that I think people underestimate the public's ignorance of the economics, and hence the public's fears about immigration. This poll by Ipsos MORI (I love those guys) asked opponents of immigration what they were worried about—as you can see, their concerns are overwhelmingly about job losses and the like:

The top five concerns are all basically to do with economics, with the highest-ranking cultural/social concern getting a measly 4%.

Obviously this isn't the whole story. People might be lying to avoid seeming "racist", for example. But in other polls people seem less reserved—last year 27% of young people surveyed said that they don't trust Muslims. Less than 73% of the population say they'd be quite or totally comfortable with someone of another race becoming Prime Minister, and less than 71% say they'd be quite or totally comfortable with their child marrying someone of a different race. So the 'embarrassment effect' of seeming a bit racist can't be that strong, and clearly the ceiling is higher than 4%.

I reckon it's more likely that people have a bunch of concerns, of which the economic ones seem more salient. Once they've mentioned them, they don't need to add the cultural concerns to the pile. Either that, or we just believe people in the absence of evidence to the contrary.

That's why I think it's legitimate to focus on the economics of immigration, even if we concede that the cultural questions are important (and tougher for open borders advocates to answer). Persuading some people that their economic fears are misguided should move the average opinion in the direction of looser controls on the borders.

If we could put the economic arguments to bed we might be able to have a more productive discussion about immigration. If culture's your problem, then let's talk about that, but remember that the controls we put on immigrants to protect British culture come with a price tag. Maybe we'd decide that more immigration was culturally manageable if we ditched ideas like multiculturalism and fostered stronger social norms that pressurised immigrants into assimilating into their new country's culture. I don't know. (Let's leave aside my libertarian dislike of using the state to try to shape national culture.)

The point, for me, is this: the economics of immigration does matter a lot to people. Immigration is not either/or—we can take steps towards more open borders without having totally open borders. At the margin, then, persuading people about the economics of immigration should move us in the direction of more open borders. And that, in my view, makes the world a better place.

Nominal GDP targeting for dummies


Nominal Gross Domestic Product (GDP) targeting is a type of monetary policy that people like me think would give us a more stable economy than we currently have. It would replace the Bank of England’s current monetary policy, inflation targeting. Nominal GDP can be understood as the sum of all spending in the economy. Total spending can increase either because of price rises (inflation) or because there’s more stuff to go around (economic growth). If this year inflation is 2% and we have 2% economic growth, nominal spending (nominal GDP) will have risen by 4%.
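That arithmetic can be checked directly. A quick sketch in Python, with illustrative numbers only:

```python
# Nominal GDP growth combines inflation and real growth. Strictly the
# two compound, but for small rates simply adding them is a very good
# approximation (the numbers here are illustrative, not real data).

inflation = 0.02    # prices rise 2%
real_growth = 0.02  # 2% more stuff to go around

exact = (1 + inflation) * (1 + real_growth) - 1
approx = inflation + real_growth

print(f"exact nominal growth: {exact:.2%}")   # 4.04%
print(f"approximation:        {approx:.2%}")  # 4.00%
```

The tiny gap between the two comes from compounding, which is why economists are happy to say "2% plus 2% makes 4%".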

The current policy of inflation targeting means that the Bank of England tries to control the money supply so that prices rise, on average, by 2% every year. If prices rise by more or less than this, the Bank is judged to have failed in its job.

Nominal GDP targeting would mean that the Bank of England would stop trying to target price rises, and instead try to target the total amount of nominal spending that takes place in the economy. That means that if economic growth was lower than usual, the Bank would have to try to make inflation higher than usual. If economic growth was higher than usual, inflation would be lower than usual.
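Put another way, the Bank's implied inflation aim is simply the spending target minus expected real growth. A minimal sketch, assuming a hypothetical 4% nominal spending target:

```python
# Under NGDP targeting, the Bank picks whatever inflation rate keeps
# total spending growing on target (simple additive approximation;
# the 4% target and the growth figures below are hypothetical).

NGDP_TARGET = 0.04  # nominal spending growth target per year

def implied_inflation(expected_real_growth: float) -> float:
    """Inflation the Bank must aim for, given expected real growth."""
    return NGDP_TARGET - expected_real_growth

print(f"{implied_inflation(0.01):.0%}")  # slow growth -> aim for 3% inflation
print(f"{implied_inflation(0.03):.0%}")  # fast growth -> aim for 1% inflation
```

The point of the rule is that inflation becomes the residual: it moves around so that total spending does not.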

This system is appealing because it is often the total amount of spending in the economy that matters, rather than inflation per se. Wages are usually set in nominal terms, which means that they do not automatically adjust upwards and downwards according to inflation.

Because of this, a drop in the amount of spending going on can lead to a mismatch between all the wage demands in the economy and the amount of money available to pay them. In other words, there is not enough money in the economy to pay everyone. This has two possible outcomes: either wages can be cut to meet the new level of spending, or people will have to be fired.

Empirically, it seems that firms prefer to fire some workers rather than to cut wages across the board. In fact, firms really hate cutting wages, for some reason, and unemployed people are often reluctant to take the same job that they once had for a lower wage. Economists refer to this phenomenon as “sticky wages”.

So the outcome of a fall in total spending is usually unemployment. This is an example of a nominal change having a real effect, and destroys wealth that need not be destroyed, because the previously-profitable relationship between the worker and the firm has now been undone.

When this happens across the economy it can affect economic growth. In fact, this seems to be a very important factor in recessions – when there is a steady level of spending taking place, the market is pretty good at finding new ways of using unemployed workers fairly quickly. When there just isn’t enough spending going on, we have to wait for workers and firms to cut wages enough to hire them again, which can take a long time.

Under nominal GDP targeting, the Bank of England would commit to keep the spending level growing even if economic growth dipped. As I've said, that would mean more inflation in times of slow growth and less inflation in times of quick growth.

Because inflation is being used to offset the changes in economic growth, negative economic ‘shocks’ like oil crises will translate into higher prices, prompting the market to adjust to take account of new realities, but never creating the domino effect of mass unemployment that we sometimes currently experience. The real economy would still adjust to real shifts in supply and demand, but we’d avoid the chaos that unstable monetary environments can create.

The key is that almost all contracts in the modern economy are set in nominal terms. That means that money that is managed in the wrong way can create a lot of unnecessary destruction of wealth. Nominal GDP targeting would probably give us the most neutral monetary system possible while the government still runs money, with the monetary environment kept stable so the real economy can do its work in allocating resources.

Money matters. The 2008 crisis happened because expectations of inflation, and hence nominal spending levels, dropped sharply, causing the ‘musical chairs’ problem of too little money to fulfil all the existing contracts and wage demands, which led to widespread bankruptcies and job losses. Today, the UK and the US have begun to get their spending levels growing at a healthy rate again, and their real economies have begun to grow healthily again too.

The Eurozone is the saddest story. The European Central Bank has been obsessed with fighting inflation (possibly because Germany has not suffered much, and Germans have bad memories of hyperinflation during the 1920s), and as a result nominal spending has grown very slowly indeed. The consequences are easy to see: in the weaker European economies, like Greece, Spain and Italy, unemployment is at historically high levels. It seems likely to stay there for many years.

Many people, myself included, believe that a system where private banks could issue their own notes without a central bank at all would be the best system. This is known as ‘free banking’. One of the best arguments for free banking is that it would keep nominal spending levels steady, because banks would issue more notes during periods of slow growth and fewer notes during periods of high growth. This should sound familiar – nominal GDP targeting is probably the closest we can get to ‘stateless’ money while having a central bank.

Nominal GDP targeting would not prevent all recessions or guarantee growth. The real economy is what determines things like that. But badly-managed money can destroy growth, create recessions by itself, and turn small ‘real’ recessions into extremely bad depressions, as happened in the 1930s and 2000s. Nominal GDP targeting would give us stable, neutral money that avoids these things. We would have been better off with it in 2008, and we would be better off with it today.

Don't kill off the only industry that provides loans for low-earners

Wonga’s decision to write off £220m worth of debt for 330,000 customers and “voluntarily” embrace new regulations will be seen by many as a form of social justice and an obvious defeat for the big, bad, payday-lending wolf. Unfortunately, the Financial Conduct Authority’s attempt to further regulate the payday lending sector may end up harming low-income earners in need of a loan.

But first, we must distinguish between the payday lending industry and Wonga as a specific organization within that industry. Payday lenders offer customers quick and easy access to short-term cash flow. Though anyone on any income could apply to Wonga for a loan, it is mostly used by people on low incomes, as such earners struggle to get bank loans and credit cards, and payday loans are often cheaper than using an unauthorized overdraft.

Of course, there are risks associated with payday lending, as “companies are loaning to high-risk demographics, with usually low-income averages and bad credit scores."* In order to stay profitable and protect themselves from bankruptcy, payday lending companies must factor defaults into their interest rates.

These interest rates – especially Wonga’s – tend to be the target of myths constructed by opponents of payday lending, who are either accidentally or intentionally analyzing the data badly. Most notably, critics attack Wonga for charging its customers close to an astronomical 6,000% interest rate.

That figure, however, comes from a legal quirk in British financial regulations that requires every business to express its interest rates as an annual rate. Wonga’s payday loan interest payments are capped at sixty days, so there is no scenario in which anyone could come close to paying the nearly 6,000% APR that the company is forced to advertise as its annual rate.
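To see how the quirk works, here is a rough sketch in Python. The 1%-per-day rate is an assumption for illustration, not Wonga's actual pricing:

```python
# UK rules require lenders to quote a compounded annual rate (APR),
# even for loans that legally cannot run for more than 60 days.
# Compounding a small daily rate over a full year produces a headline
# figure no borrower could ever actually pay.

DAILY_RATE = 0.01  # hypothetical 1% per day

def compounded(rate_per_day: float, days: int) -> float:
    """Total interest after compounding daily for `days` days."""
    return (1 + rate_per_day) ** days - 1

print(f"headline APR (365 days): {compounded(DAILY_RATE, 365):.0%}")
print(f"actual 60-day maximum:   {compounded(DAILY_RATE, 60):.0%}")
```

At 1% a day, the mandatory annualised figure comes out at several thousand percent, while the most a borrower could actually pay over the 60-day cap is well under 100% of the sum borrowed – the gap between the headline number and the real one.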

Some of the criticisms leveled specifically at Wonga do have merit – indeed, its fake legal letter scandal from this past summer, in which customers were threatened with legal action if loans weren’t repaid, left everyone feeling uncomfortable with the industry.

Such behavior from any company is unethical, to say the least, and should be met with repercussions. But the FCA’s decision to crack down on all payday lenders as a result of Wonga's actions will drive almost all payday lenders out of business and leave Wonga to dominate the industry.

From today Wonga has introduced new lending criteria to improve its decisions. That means it will be lending to fewer people, and it is unlikely to be the only firm forced to do that, as the FCA said today: "This should put the rest of the industry on notice."

These new lending criteria, coupled with previous regulatory tightening – bans on payday advertising in public spaces – and future proposed regulations – like a mandatory cap on costs for all short-term loans – reduce the entire industry’s profitability and force smaller companies that would otherwise compete with Wonga out of the market.

Furthermore, other indirect financial regulations continue to ensure Wonga’s dominance in the loan market. Credit unions could become competitive payday lenders and compete with companies like Wonga, but their interest cap of 3% a month prevents them from properly competing in the market.

Yes, Wonga is facing a 53% fall in annual profits, partly as a result of new controls set by the FCA, but other payday lenders that don’t have Wonga’s ethically questionable history look set to be cut out of the market altogether.

Critics of payday loans will be overjoyed to hear that the payday lending industry is on the rocks, but those who actually use its services and benefit from the loans should be worried. Banks and credit card companies have priced these customers out of accessing loans, and with fewer payday lenders offering their services to people with low incomes, a lot of people will find themselves with no options, no loan, and no way to pay rent.

While payday lenders are by no means the perfect system to deliver loans to low-income customers, they are currently the only realistic way for such people to get their hands on necessary loans.

*This gal.

Two cheers for technocracy


Who needs experts? The minimum wage was once an example of the triumph of technocracy, where decisions are delegated to experts to depoliticise them. The Low Pay Commission was set up to balance competing priorities – increasing wages without creating too much unemployment. If you were a moderate who thought the minimum wage was a good way of boosting low wages, but recognised that it might also create unemployment, the LPC gave you a middle ground position. (For what it’s worth, I’m an extremist.)

That technocratic settlement also allowed politicians to, basically, safeguard against an ignorant public. By delegating decisions like this to experts, bad but politically popular policies could be avoided. Relatively well-informed politicians could avoid having to propose bad policies by depoliticising them.

Other examples of this include NICE’s responsibility for deciding which drugs the NHS should and shouldn’t provide, and the Browne Review that recommended student fees, which had cross-bench support. The old idea that “you can’t talk about immigration” comes from an informal version of this – everyone in power knew that people’s fears about the economics of immigration were bogus, so they were basically ignored.

But that technocratic settlement now looks dead. Labour has now made a specified increase to the minimum wage part of its electoral platform, following George Osborne’s lead earlier this year. That means that voters will have to choose not just between two rival theories about the minimum wage, but two competing sets of evidence about whether £7/hour or £8/hour is better, given a wage/unemployment trade-off.

Whether voters are self-interested or altruistic doesn’t really matter. A self-interested low wage worker would still need to know if a minimum wage increase would threaten her job; an altruistic voter would similarly need to know a lot about the economics of the minimum wage and the UK’s labour market to make a judgement about what level it should be.

And of course the minimum wage is just one of dozens, if not hundreds, of questions that political parties offer different answers to that voters have to make a judgement about.

In practice this does not happen. Voters are very uninformed about basic facts of politics, and are almost entirely ignorant about economics, which almost everyone would agree would be necessary to make the correct judgement about something like what the minimum wage level should be (even if they didn’t agree on which theories and evidence were relevant). Even the use of rules of thumb, such as listening to a particular newspaper or think tank (ha), will suffer from the same problems.

Voters, then, face a nearly impossible task. Even if they were bright, well-intentioned, and believed that it was important to cast their vote for the party with the best policies, they would have to amass an enormous amount of information to make the right decision on all the questions they, in voting, have to answer.

So voters are trapped. They cannot know what minimum wage rate is best any more than they can know what drugs the NHS should pay for. They are, empirically, very unaware of basic facts, but they would find it hard to overcome that even if they wanted to.

Does democracy make us free? Maybe, but it’s the freedom of a deaf-blind man – we can choose whatever policy we want, without any idea about what those policies will actually do. So, if the alternative is more direct democracy like this, maybe technocracy isn’t so bad.

Switching mobile networks is easier than switching governments


Unlike lots of people on the right, I like Owen Jones. He’s good natured and often challenges orthodoxy on his own side, and he’s a thought-provoking writer. 

Having said that, I usually disagree with what he writes on economics. His Guardian piece this week, which called for the nationalisation of the UK's mobile network operators, was a good example. It’s tempting to dismiss it as clickbait, but it represents a train of thought that is increasing in popularity. And if nothing else it may shift the Overton Window.

Jones starts by pointing out that nationalisation of big industries is very popular among the public at large. “While our political overlords are besotted with Milton Friedman, the public seem to be lodged somewhere between John Maynard Keynes and Karl Marx.” 

A fair point. He might also have noted that the public disagrees with him about lots of other things: the obvious example is hanging, where the public is somewhere between Roger Helmer and Oswald Mosley, but there’s also immigration, which 55% of people want reduced ‘a lot’ (and another 21% want reduced ‘a little’). The Great British public thinks the benefits system is too generous by a 2-to-1 margin, and think that ‘politicians need to do more to reduce the amount of money paid out in benefits’ by a 3-to-1 margin. And so on. On these issues, and presumably many others, I assume Jones thinks the public needs further persuasion.

It isn’t necessarily that the public really is bloodthirsty or xenophobic or anti-poor or quasi-Marxist; it’s that the public is extremely uninformed about most things. How could you judge whether we needed more or less immigration if you thought we had more than twice as much immigration as we actually do? How could you judge whether the railways should be nationalised or not if you did not know that passenger numbers had doubled since privatisation, after decades of decline under the state?

Jones claims that mobile phone networks are an inefficient natural monopoly, without any real reasons given. This claim is untrue. The UK has four competing mobile networks (Vodafone, O2, Three and EE, which was formed by a merger of T-Mobile and Orange) and dozens of aftermarket “mobile virtual network operators” that lease wireless spectrum from those four networks (GiffGaff and Tesco Mobile are two popular examples). None of these networks is unusually profitable and all spend enormous amounts on marketing. Try spending a day in a city without seeing at least one advert for each company. This is not the behaviour of a monopolistic industry!

(There are a couple of other frustrating errors in the piece. For instance, a typical £32-a-month 24-month contract can get you an iPhone worth £550, not a device worth £200 as Jones claims.)

Yes, signal blackspots are annoying. (Take it from someone who spent his teenage life having to walk into the garden to send a text message.) And mobile networks’ customer service really does suck sometimes! But Jones is comparing reality with an ideal where resources are infinite. Since resources are not infinite, we have to have some way of deciding what imperfections are tolerable. 

For example, as annoying as blackspots are, the optimal amount of coverage is obviously less than 100%. The phone networks reckon they cover around 99% of the population, and as frustrating as it is when you’re in that last 1%, the marginal cost of reaching it rises dramatically. We could cover everyone at great cost, but that would leave us less money to spend on other important things elsewhere. The question is one of priorities.

Ultimately, the important question that Jones does not answer (or ask) is, compared to what? Private sector firms might be irritating sometimes. Unless you can show that nationalised firms would be less irritating and better overall, that doesn’t tell us anything about what we should do. 

There are lots of examples of nationalised firms that were absolutely terrible. Tim remembers waiting three months for a landline when the GPO ran the phones; and then there is the huge drop-off in rail passenger numbers under British Rail, followed by an equally huge recovery after privatisation.

The fact that the state funded some of the scientific research that led to the iPhone doesn’t mean that we’d have better phones if we nationalised Apple. (It might be a case for state funding for scientific research that is released into the public domain, though.) As Tim says, “The State can be just as good as the market at invention, the creation of really cool new technologies. But it’s terrible compared to the market at innovation, the getting of that new technology into peoples’ hands so that they can do cool and interesting new things with it.” 

Economies of scale exist, as Jones suggests, but so do diseconomies of scale. Firms can be too big. And when you have a single network (whether it’s privately or publicly owned), customers lose all ability to ‘exit’ a firm that is giving them a bad service, so the only recourse they have is at the ballot box. 

Which brings us back to the first problem with Jones’s piece: politics is a complicated business about which we know little. If we don’t like what we’ve got, we have to hope that a majority of other voters agrees with us – and even if we’re right, they may not be informed enough to agree with us. 

It’s a lot easier to switch mobile phone providers than it is to switch governments. Ultimately, it’s that pluralism and freedom of exit that drives improvements in markets, and tends to make governments relatively bad at doing things. For all the mobile network industry’s problems, the question is: compared to what?

Idiocracy: a review

Last night I watched Mike Judge's 2006 film Idiocracy. I enjoyed it a lot. It is basically a dystopian science fiction film with an element of (slightly dark) comedy, but it hides its fairly extreme pessimism and conservatism behind a half-hearted satire on modern society. It imagines what would happen if the less sharp people in society had substantially more kids than the smarter people—and if this trend carried on for centuries.

By 2505 the population has an average IQ of something like 60, and society is crude, degenerate and decadent. It limps on only because the last of technological advances went into automating most of the functions needed for basic survival. The personal narrative is that two people, a man and a woman both selected for their extreme averageness in all attributes, get cryonically frozen, wake up in 2505, and are the smartest people in the world. Hijinks ensue.

It's a fairly enjoyable film on its own merits, but the really interesting question is whether it says anything about the world we live in. While this sort of thing is extremely uncertain and speculative, arguably it does.

Behavioural genetics tells us, as the film suggests, that (variation in) intelligence is 50-90% driven by genes. And the extent to which intelligence is linked to genes increases through life (suggesting the impact of school & upbringing wears off quickly). This is true for any given socioeconomic class (unless they are exposed to lots of lead pollution, malnutrition or similar big negative environmental shocks). This is true across the world.

Indeed, a large fraction of this seems to come identifiably through specific alleles or single-nucleotide polymorphisms (chunks of genetic code). We also know that intelligence correlates with brain volume, which is also genetically driven. We even know that the way brain volume changes is substantially genetic. By contrast, shared environment (which includes family upbringing and, under certain conditions, the effect of school) has around zero effect on IQ/intelligence.

And so the question becomes: are sharper people having fewer kids than the less sharp? Evidence seems to say "yes", for the USA, the UK, Taiwan, and generally across the world. Though it certainly has to be said that, on the models here (which estimate a loss of about 0.8 IQ points per generation, or perhaps 3 per century), it would take substantially longer than 500 years to get to the idiocratic society.
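Taking those model estimates at face value, the timescale is easy to sketch. The 30-year generation length is my own assumption:

```python
# Back-of-envelope projection: how long until mean IQ falls from 100
# to the film's roughly 60, losing 0.8 points per generation (the
# estimate from the models linked above)? Generation length assumed.

POINTS_PER_GENERATION = 0.8
YEARS_PER_GENERATION = 30  # assumption

start_iq, film_iq = 100, 60
generations = (start_iq - film_iq) / POINTS_PER_GENERATION
years = generations * YEARS_PER_GENERATION

print(f"{generations:.0f} generations, roughly {years:.0f} years")
# -> 50 generations, roughly 1500 years
```

So on these figures the film's 500-year timeline is, if anything, about three times too fast.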

But wait a second—what about the Flynn Effect? Haven't measured IQs increased massively over the past hundred years? Aren't we getting smarter? Sadly, the Flynn Effect may not reflect an improvement in intelligence—once you account for the many ways that people have got better at tests, e.g. through learning to guess when they don't know the answer. In fact, the tests with a lower "g loading", i.e. the ones less good at accurately reflecting intelligence, are the ones reporting the biggest Flynn Effects.

This shouldn't be surprising, because random selection into a better school typically has no positive impact on achievement (though it does have strong negative impacts on crime). But even if the Flynn Effect were, say, reflecting greater education making up for lower genetic intelligence, it seems like we've pretty much exhausted its benefits—and it is now going into reverse in the Netherlands, the UK, Finland, and elsewhere. And thus even if phenotypic IQ (i.e. as measured by tests), and not genetic IQ, is "what matters", we can't hope that extra education will make up for duller genetics in the future.

And we have independent reasons to think that genetic IQ might be "what matters". While it is phenotypic IQ that correlates with, e.g., homicides, this relationship is seriously confounded by the fact that medical technology has advanced substantially over the period. Without these advances, homicides would be five times higher (on a static analysis—it's possible that the extra people who would have been killed, but survived, would themselves have committed further homicides). And it is genetic IQ that appears to be associated with social advances, innovation, science, technology and so on. Were the Victorians smarter than us? Though the linked paper has been criticised, the authors' response to their critics is persuasive enough that we should take the idea seriously.

If this is all true (and certainly that is itself a contested step) should we be worried? This is the most interesting question for me. Thankfully, there are two families of technologies that I see as potentially solving this problem.

The first is artificial wombs. One of the main causes of reduced fertility among the smart, as documented in Idiocracy's excellent opening third, is the cost of having children. This cost is not just in terms of feeding, clothing and looking after them, nor even buying more expensive houses to get into better schools, but also in terms of labour market potential—much of the gender wage gap appears when women take time out to have and raise kids, and a large fraction of the rest may come about because women expect to take this time out (and thus invest less in human capital). We know that people really do respond to things that make having kids cheaper. Technologies that drastically reduce the cost to smart women of bearing kids could be one way of arresting this alarming trend.

The second is genetic engineering in all its forms. If we can pick embryos, or even engineer people's DNA, then we could feasibly (after lots more research!) make sure that kids are smart even if their parents are not. While Idiocracy does not mention "my" first solution, it dismisses this one, explaining that the remaining smart people in society spend their precious time perfecting cures for hair loss and impotence.

But I'm much more optimistic than Mike Judge (hopefully rationally!). I have almost unlimited faith in the ability of human ingenuity to overcome big threats to society. I feel confident we'll either develop the aforementioned technologies, or in their stead something completely different and entirely unpredictable that fixes the issue. And if our best minds do spend their time solving erectile dysfunction, as in Idiocracy, who knows, that might help solve the problem!

UPDATE 30/7/14: Jaymans says this trend may have reversed for the most recent cohort!

UPDATE 28/10/14: A paper newly published in the Economic Journal suggests that the education-fertility curve may have recently become U-shaped for US women.

Inside the Adam Smith Institute

Now that the new Adam Smith website is up, with an exciting plethora of activities and reports scheduled, new readers might like to take stock of what the ASI does, and what motivates us. If labels are used, they might be "free market" and "libertarian," but these are big tents under which disparate people are grouped. The crucial thing is that our free market libertarianism is both consequentialist and empiricist, combining an essentially Hayekian economic outlook with a deep optimism about the world.

In our view actions that enable individuals to advance their happiness by pursuing their own goals are worthy of support, and those that restrict their ability to do that should be opposed. We are more concerned with what results from actions than with the intentions or attitudes of those who initiate those actions. And we are more concerned with changing the world for the better than with promoting theories about it.

As empiricists we make conjectures about the world and its future, and we test their value against experience of real world outcomes. Where the two conflict, it is the conjecture that has to be rejected or modified. We take the view that "an ounce of practice is worth a pound of theory."

While economics and public policy are complex fields that make experiment and testing difficult to perform, we do attempt to test proposals by their results. Several times we have proposed small-scale trials of larger ideas in order to validate the ideas and ascertain any unforeseen drawbacks before they are rolled out more widely.

We recognize, of course, that poor people do not have access to the choices and chances accessible to the rich, and this is why many of our policy initiatives are directed to improving the lot of poorer people in society. We have advocated for many years that the income tax and national insurance thresholds should be set at the level of the minimum wage and indexed to it, so we would not be taxing people on the bottom income level.

Some of our research studies and policy suggestions derive from our recognition that poor people are hurt most by things such as restrictions on international trade and migration, planning controls that prevent cheap housing from being built, education policies that condemn poor children to bad schools and regulatory policies that protect established market players from new entrants.

We propose and back policies that give all parents choice over where their children go to school and that introduce competition into the school system, whether by education vouchers or by letting parents' choices determine the allocation of state funds to schools. We tend to back the view that welfare is not just about providing the services the state thinks poor people should have, but about equipping people with the means to make their own choices about the mix of services they prefer. Ideas such as a negative income tax could remove the perverse incentives present in the current welfare system.
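The negative income tax idea can be sketched in a few lines. The threshold and withdrawal rate below are hypothetical illustrations, not an ASI proposal:

```python
def negative_income_tax(income, threshold=15_000, rate=0.5):
    """Toy negative income tax: below the threshold you receive a
    payment equal to `rate` times the shortfall; above it you would
    pay ordinary tax (omitted here). All figures are illustrative."""
    shortfall = max(0.0, threshold - income)
    return rate * shortfall

# Each extra pound earned reduces the payment by only 50p, so working
# more always raises total income -- unlike a cliff-edge benefit that
# is withdrawn pound-for-pound as earnings rise.
print(negative_income_tax(0))       # 7500.0
print(negative_income_tax(10_000))  # 2500.0
print(negative_income_tax(20_000))  # 0.0
```

The design choice doing the work is the withdrawal rate below 100%: it is what removes the perverse incentive to stay out of work.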

We recognize that states can cause a great deal of harm when they attempt to direct and micromanage the economy. Many regulations have damaging effects that were not anticipated, and this includes financial regulations that can make financial systems more unstable than they would be without them.

More broadly, we think that the ‘unknown unknowns’ of regulation should lead society to prefer decentralized trial and error to the risk of one big mistake that affects everyone in the same way.

We have argued that the central bank should follow the ‘Hayek rule’ – the stabilization of the level of nominal spending along a predictable path, through booms and busts alike. Scott Sumner recently delivered our annual Adam Smith Lecture and explained how the failure of the world’s central banks to do this led to the Great Recession.

In the Adam Smith Institute we have always been very optimistic about technology and society. We see the world becoming increasingly open and tolerant in most (though not all) areas, with technology and entrepreneurship helping to drive that. To us, companies like Uber, Google and Airbnb deserve to be celebrated when they break down barriers to competition and disrupt the existing way of doing things in ways that give consumers a better product for a lower cost. It is this kind of innovative entrepreneurship that moves the world forward and allows today’s luxuries of the very rich to be tomorrow’s household commonplaces.

There is a dark side when new technologies are used by governments to spy on their citizens and control them. If technologies like Bitcoin and other blockchain-based innovations represent a long-term way of evading the worst excesses of government intrusion, they should be defended from government now while they are still in their infancy.

Of course the Institute is not a monolith. It consists of people who sometimes differ, but all of whom are brought together by a desire to give more power and liberty to individuals, so that their regard to their own interest can make them and us richer, freer and happier.

When ignorance trumps incentives

ignorance.png
When something bad happens it is often helpful to think about why it has happened in two ways: did someone have a reason to make it happen, or did it happen by accident? This can also be expressed in a slightly different way: were incentives to blame, or ignorance? Jeffrey Friedman and Wladimir Kraus have made a compelling argument that ignorance explains more about the world than we often realize, using the 2008 financial crisis as an example. This post is an attempt to summarise their argument.

Economists often remind us that incentives matter. Indeed this is sometimes said to be the cornerstone of ‘the economic way of thinking’. Russ Roberts gives the example of death rates on British ships bringing convicts to Australia in the 18th century – rather than attempting to raise ship captains’ awareness of the badness of letting their passengers die, the government gave captains a bonus for every convict who walked off their ship. This was very effective.

Clearly this way of thinking can be very powerful. It is the foundation of the price system, which is the mechanism that markets use to allocate resources effectively in a world where information is dispersed: if demand for pizza rises, the price of pizza rises, giving cooks and restaurant owners an incentive to sell more pizza. It helps to explain why some people stay on welfare payments for long periods of time: the welfare money they lose when they go into work represents a significant disincentive to work. Or, if you offer something like a bailout to businesses that go bust, you reduce the incentive for them to act prudently to avoid going bust.

This last example is what is known as moral hazard. And it is a popular and compelling explanation for the 2008 financial crisis. Banks expected to be bailed out if they went bust, so they acted more recklessly than they would if they thought they would be on the line for their mistakes.

However, Friedman and Kraus argue that this popular and compelling explanation may in fact be wrong. A good way of testing it would be to compare how the bankers involved in making bad decisions acted where perverse incentives applied, and how they acted where perverse incentives did not apply.

One strong piece of evidence against the incentives narrative is that bankers seem to have acted the same way with their personal investments as they did with their business investments.

Many bankers lost a lot of money personally in the crisis because their personal portfolios were not ‘bailed out’ in the same way that their banks were. If we are to treat the ‘incentives story’ as a falsifiable proposition (as all claims about the world should be treated), this might be a fairly strong reason to disregard it.

This may be where ignorance comes in. If bankers acted the way they did because they were unaware of the risks they were taking, then we would expect their private and business investments to be pretty similar.

However, it is strange that so many bankers seemed to make the same mistake. We know that they were not acting in a neutral environment: as Friedman and Kraus have shown, regulations like the Basel accords and the US’s recourse rule directed banks to prefer mortgage debt to business debt. Other regulations directed banks to rely on the risk judgments of three specific ratings agencies, giving those agencies protection from competition.

(On the ratings agencies point, astonishingly, it seems that nobody realized that these agencies were basically protected from competition. Both bankers and regulators assumed they were being subjected to market forces, leading everyone to trust them far more than they would have if they had known they were dealing with protected monopolies.)

These regulations were designed to make banks act prudently: the regulators had no incentive to make banks act badly. It seems possible that they did not realize the error of their ways until it was too late. Perhaps regulatory ignorance was to blame.

It is important to stress that the regulators should not be blamed personally. They probably made the best choice they could have made given the information available to them. Rather it is the position they found themselves in that seems to have been to blame. If a single bank (or even a handful) makes a mistake, that bank will suffer but the whole sector probably won’t. It is only when a whole sector of a market (or almost all that market) makes an error that we should worry. (Incidentally, as shaky as the housing and financial sectors were, the real trouble did not begin until monetary policy tightened unexpectedly, as Scott Sumner outlined at our recent Adam Smith Lecture.)

Given ignorance, we should expect errors to take place. Because regulation necessarily applies to everyone in a market, a regulatory error affects everyone. That may be the fundamental problem with regulation, and a reason to have a strong ‘prima facie’ objection to regulation. It is better to have one hundred firms making one hundred different mistakes that happen at different times and in different ways than to have one hundred firms making one single mistake that happens at the same time for everyone.
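The diversification argument above can be made concrete with some toy numbers (the 5% error rate and the firm count are hypothetical assumptions, purely for illustration):

```python
# Suppose each of 100 firms makes a fatal mistake with probability 5%
# in a given year. Under decentralised trial and error the mistakes
# are independent; under a single binding regulation, one error hits
# all 100 firms at once.
P_MISTAKE = 0.05
N_FIRMS = 100

# Probability that every firm fails in the same year, if failures are
# independent -- astronomically small:
independent_systemic = P_MISTAKE ** N_FIRMS

# With one shared rule there is a single point of failure, so the
# whole sector fails whenever the rule is wrong:
correlated_systemic = P_MISTAKE

print(independent_systemic)  # on the order of 1e-130
print(correlated_systemic)   # 0.05
```

The expected number of failures is the same in both worlds; what regulation changes is their correlation, turning many small, staggered mistakes into one sector-wide mistake.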

None of this implies any special knowledge on the part of firms. Indeed regulators may be much more expert than the firms they are regulating, but the danger of a collective error would still give us a reason to generally object to regulation in principle, no matter how sensible it may seem.