
Second-guess AI 'experts'


By Tyler Cowen Bloomberg View

Published March 14, 2023

One of the lasting consequences of the COVID-19 pandemic has been a decline of trust in public-health experts and institutions. It is not hard to see why: America botched COVID testing, kept the schools closed for far too long, failed to vaccinate enough people quickly enough, and inflicted far more economic damage than was necessary — and through all this, public-health experts often had the dominant voice.

In their defense, public-health officials are trained to prioritize public safety above all else. And to their credit, many now recognize that any response to a public-health crisis needs to consider the tradeoffs inherent in any intervention. As Dr. Anthony Fauci recently told the New York Times, "I'm not an economist."

As it happens, I am. And my fear is that we are about to make the same mistake again — that is, trusting the wrong experts — with artificial intelligence.

Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Last week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI that many other researchers are worried too.

What I do not hear, however, is a more systematic cost-benefit analysis of AI progress. Such an analysis would have to consider how AI might fend off other existential risks — deflecting that incoming asteroid, for example, or developing better remedies against climate change — or how AI could cure cancer or otherwise improve our health. And these analyses often fail to take into account the risks to America and the world if we pause AI development.

I also do not hear much engagement with the economic arguments that, while labor market transitions are costly, freeing up labor has been one of the major modes of material progress throughout history. The US economy has a remarkable degree of automation already, not just from AI, and currently stands at full employment. If need be, the government could extend social protections to workers in transition rather than halt labor-saving innovations.

Each of these topics is so complicated that there are no simple answers (even if we ask an AI!). Still, within that complexity lies a lesson: True expertise on the broader implications of AI does not lie with the AI experts themselves. If anything, Hinton's remarks about AI's impact on unemployment — "it takes away the drudge work," he said, and "might take away more than that" — make me downgrade his judgment.


Yet Hinton is acknowledged to be the most important figure behind recent developments in AI neural nets, and he has won the equivalent of a Nobel Prize in his field. And he is now doubting whether he should have done his research at all. Who am I to question his conclusions?

To be clear, I am not casting doubt on either his intentions or his expertise. But I would ask a different question: Who, today, is an expert in modeling how different AI systems will interact with each other to create checks and balances, much as decentralized human institutions do? These analyses are not very far along, much less tested against data. They would require an advanced understanding of the social sciences and political science, not just AI and computer science, and it is not obvious who exactly is capable of pulling off such a synthesis — especially in an era of hyper-specialists.

It almost goes without saying that there are different kinds of expertise. National security specialists, for example, confront dangerous risks to America all the time, and they have to develop a synthetic understanding of how to respond. How many of them have resigned from the establishment to become AI Cassandras? I haven't seen a flood of protests, and these are people who have studied how destructive actions can amplify through a broader social and economic order. Perhaps they are used to the idea that serious risks are always with us.

Albert Einstein helped to create the framework for mobilizing nuclear energy, and in 1939 he wrote President Franklin Roosevelt urging him to build nuclear weapons. He later famously recanted, saying in 1954 that the world would be better off without them. He may yet be proved right, but so far most Americans see the tradeoffs as acceptable, in part because they have created an era of US hegemony and ensured that US leaders cannot easily escape the costs of major wars. Nuclear disarmament still exists as a movement, but it has the support of no major political party in any nuclear nation. (If anything, Ukraine regrets having given up its nuclear weapons.)

The lesson is clear: Experts from other fields often turn out to be more correct than experts in the "relevant" (quotes intentional) field — with the qualification, as the Einsteins of 1939 and 1954 show, that all such judgments are provisional.

When it comes to AI, as with many issues, people's views are usually based on their priors, if only because they have nowhere else to turn. So I will declare mine: decentralized social systems are fairly robust; the world has survived some major technological upheavals in the past; national rivalries will always be with us (thus the need to outrace China); and intellectuals can too easily talk themselves into impending doom.

All of this leads me to the belief that the best way to create safety is by building and addressing problems along the way, sometimes even in a hurried fashion, rather than by having abstract discussions on the internet.

So I am relatively sympathetic to AI progress. I am skeptical of arguments that, if applied consistently, also would have hobbled the development of the printing press or electricity.

I also believe that intelligence is by no means the dominant factor in social affairs, and that it is multidimensional to an extreme. So even very impressive AIs probably will not possess all the requisite skills for destroying or enslaving us. We also tend to anthropomorphize non-sentient entities and to attribute hostile intent where none is present.

Many AI critics, unsurprisingly, don't share my priors. They see coordination across future AIs as relatively simple; risk-aversion and fragility as paramount; and potentially competing intelligences as dangerous to humans. They deemphasize competition among nations, such as with China, and they have a more positive view of what AI regulation might accomplish. Some are extreme rationalists, valuing the idea of pure intelligence, and thus they see the future of AI as more threatening than I do.

So who exactly are the experts in debating which set of priors is more realistic or useful? The question isn't quite answerable, I admit, but neither is it irrelevant. Because the AI debate, when it comes down to it, is still largely about priors. At least when economists debate the effects of the minimum wage, we sling around broadly commensurable models and empirical studies. The AI debates are nowhere close to this level of rigor.

No matter how the debates proceed, however, there is no way around the genuine moral dilemma that Hinton has identified. Let's say you contributed to a technological or social advance that had major implications, and a benefit-to-cost ratio of 3 to 1. The net gain would be very high, but so would the (gross) costs. And those costs would be imposed because of your labor.

How easily would you sleep knowing that your work, of which you had long been justifiably proud, was leading to so many cyberattacks and job losses and suffering? Would seeing the offsetting gains make you feel better? What if the ratio of benefit to cost were 10 to 1? How about 1.2 to 1?

There are no objective answers to these deeply normative questions. How you respond probably depends on your personality type. But the question of how you feel about your work is not the same as how it affects society and the economy. Progress shouldn't feel like working in the triage ward, but sometimes it does.


Cowen is a Bloomberg View columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. His books include "The Complacent Class: The Self-Defeating Quest for the American Dream."

