When will machine-like thinking prompt moral panic?

By Jonah Goldberg

Published April 27, 2018

In his "Dune" series (my favorite science-fiction books), Frank Herbert made a bold writerly decision. In a genre famous for robots and computers (particularly in the 1960s), he imagined a futuristic universe with neither. In his telling, some 10,000 years prior to the story of the books, there was a galactic revolt called the Butlerian Jihad. This is where I first learned the word "jihad" -- the Arabic term for Islamic holy war.

It can all get fairly nerdy, but the gist is that artificially intelligent computers and androids were banned. In one explanation, the Butlerian Jihad was named after a woman, Jehanne Butler, whose baby had been aborted without her permission because an artificially intelligent computer deemed the child unworthy of life. The resulting outrage led to a mass revolt, the banning of thinking machines and a new religious commandment: "Thou shalt not make a machine in the likeness of a human mind."

This idea has always stuck with me, both because of the fresh avenues it opened in the genre and for political and sociological reasons.

The phrase "moral panic" is almost always used derisively, to suggest an irrational overreaction by people giving themselves over to the mentality of the mob. When the media agree with a moral panic -- say, on guns -- the last thing they do is call it one. Moral panics are always something those other people do. It's a bit like "censorship," a word people use only for the censorship they don't like.

But whether you call it a moral panic, a righteous people-powered movement or some other term of art, such visceral mass reactions are inevitable and perhaps necessary.

I got to thinking about this as two stories from Britain and one from China made waves here in the U.S.

A driver in North Yorkshire, England, fitted his car with a laser jammer that blocked speed cameras from giving him a ticket. He also showed the traffic camera his middle finger in a gesture that means the same thing on both sides of the Atlantic. The North Yorkshire police tracked him down, and he was charged with "perverting the course of justice." The jammer was illegal, of course, and he probably deserved a fine. But because he flipped Big Brother the bird, he got eight months in jail.

As outrageous as that story is, it pales in comparison to the story of Alfie Evans, a 23-month-old British boy with a rare neurodegenerative disorder. His doctors and the National Health Service concluded they couldn't do anything more for him and, against his parents' wishes, took him off life support. A Vatican hospital was eager to take him, and his parents were even more eager to transfer him there. The state refused, essentially kidnapping the child. The British courts supported the NHS, offering not legal or moral rationales but sickening pabulum about the desirability of euthanasia or, in this case, infanticide. There's also much talk about how the NHS works with finite resources and is compelled by economic math to make hard decisions. The story is actually much more cruel in the specifics, but you get the point.

And that leads me to the third story. China made it official: By 2020, the government will fully implement a "social credit score" system that will use artificial intelligence and facial recognition technology to monitor, reward and punish virtually every kind of activity based upon ideological criteria -- chiefly, loyalty to the state.

It doesn't take a science-fiction writer to imagine where these trends can go. Right now, the decisions made about the rebellious driver and little Alfie are being made by humans. But will that always be the case? AI systems can send people to jail and make decisions about withholding care quite easily. Just ask the Chinese. Indeed, the humans making these decisions are just following the legal and bureaucratic equivalent of algorithms anyway.

In other words, they're thinking like machines already. Why object to letting better machines take over?

In the fourth installment of the "Dune" series, one of the characters explains why the Butlerian Jihad was necessary. "The target of the Jihad was a machine-attitude as much as the machines," says Leto Atreides. "Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments."

That process seems well underway already, and I wonder what it will take before we get the moral panic we need.

Jonah Goldberg is a fellow at the American Enterprise Institute and editor-at-large of National Review Online.
