We do rather love this debate over AI and regulation

We also rather love it when non-economists, people expert in other fields, try to tell us about matters outside their own area of expertise and inside economics. And here we have Mark Buchanan, a physicist, and a good one to boot, who would tell us about the economic and regulatory impact of Artificial Intelligence. Stepping outside one's area of expertise is a dangerous thing:

Humanity has a method for trying to prevent new technologies from getting out of hand: explore the possible negative consequences, involving all parties affected, and come to some agreement on ways to mitigate them.

Well, no, humanity doesn't do that and never has done. In that universe where things are planned, possibly, but that isn't the one we inhabit, nor have we ever done things that way. No one did say that the Spinning Jenny was going to free up women from that household labour so they should be paying the inventor. Many were aware that being in charge of half a tonne of metal while intoxicated could be a problem, but it was 1925 before the previous laws about steam engines were extended to cars. It was 1934 before even the most basic competency tests were applied to those who would drive even sober.

We don't sit down and argue out the costs and benefits of a new technology, and we never have. What we do instead is take those technologies which have spread and seem useful, and ponder whether they need some regulation after that popularity and general usage is established.

And of course there can be no other way in a market economy. We do not wish ethicists, philosophers, bootleggers or bandits, politicians or bureaucrats to tell us what we may try. Rather, we want to be able to try everything, and only if actual harm to others is proven do we then, perhaps, ameliorate it.

People use laws, social norms and international agreements to reap the benefits of technology while minimizing undesirable things like environmental damage. In aiming to find such rules of behavior, we often take inspiration from what game theorists call a Nash equilibrium, named after the mathematician and economist John Nash. In game theory, a Nash equilibrium is a set of strategies that, once discovered by a set of players, provides a stable fixed point at which no one has an incentive to depart from their current strategy.
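To make that definition concrete: in a small game a Nash equilibrium can be found mechanically, by checking every combination of strategies for a profitable unilateral deviation. Below is a minimal sketch in Python, our own illustration using the textbook prisoner's dilemma rather than anything from Buchanan's piece; the payoff numbers are the standard ones.

    # Brute-force search for pure-strategy Nash equilibria in a
    # two-player game, using the classic prisoner's dilemma payoffs.
    # The game and its payoffs are our illustration, not Buchanan's.

    # PAYOFFS[row][col] = (row player's payoff, column player's payoff)
    # Strategy 0 = cooperate, strategy 1 = defect.
    PAYOFFS = [
        [(3, 3), (0, 5)],  # row player cooperates
        [(5, 0), (1, 1)],  # row player defects
    ]

    def is_nash(row, col):
        """True if neither player gains by unilaterally switching strategy."""
        row_payoff, col_payoff = PAYOFFS[row][col]
        # Can the row player do better by switching rows, column held fixed?
        if any(PAYOFFS[r][col][0] > row_payoff for r in range(2) if r != row):
            return False
        # Can the column player do better by switching columns, row held fixed?
        if any(PAYOFFS[row][c][1] > col_payoff for c in range(2) if c != col):
            return False
        return True

    equilibria = [(r, c) for r in range(2) for c in range(2) if is_nash(r, c)]
    print(equilibria)  # [(1, 1)] -- mutual defection is the one stable point

Note that the equilibrium is discovered by inspecting the players' incentives after the fact, not handed down in advance.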

Sure, Nash is great, and far brighter than you or us, probably brighter than all of us collectively. But that's still not what we do:

But what if technology becomes so complex and starts evolving so rapidly that humans can’t imagine the consequences of some new action? This is the question that a pair of scientists -- Dimitri Kusnezov of the National Nuclear Security Administration and Wendell Jones, recently retired from Sandia National Labs -- explore in a recent paper. Their unsettling conclusion: The concept of strategic equilibrium as an organizing principle may be nearly obsolete.

But we never have done that and hopefully never will. The market is the process of exploration. So we never say "What do we do if?"; rather, we say "We've found that people like this!" and then consider whether anyone has been hurt, whether there are public goods from it, whether there are externalities.

Or as we should put it: sure, many things need regulation, many things don't. Nash equilibria should be found, most certainly. But this is something we do after the deployment of a technology, not before. For if we had to have this discussion first, what new technology would ever be deployed?

This error is, of course, what people mean by the precautionary principle, and it is why that principle is wrong.