Competing Views of the ChatGPT Revolution: "Artificial Intelligence Will Destroy Truth"

Oren Etzioni is an AI researcher, while his father Amitai Etzioni was one of the most important intellectual voices in the U.S. prior to his recent passing. In Amitai Etzioni's final interview, he and his son discuss the dangers of artificial intelligence.
Interview Conducted by Alexander Demling
A sculpture made of electronic waste in Tel Aviv.

Photo: Amir Cohen / REUTERS

Amitai Etzioni was one of the most influential U.S. intellectuals of the 20th century; his son Oren, 59, is a pioneer in the study of artificial intelligence. Both have given numerous interviews in their lives, but this is the first they have given together. In mid-May, the two of them received a DER SPIEGEL correspondent in the father's apartment near the Watergate Hotel in Washington, D.C.

Amitai (left) and Oren Etzioni: "You can no longer believe your own eyes and ears."

Photo: Lexey Swall / DER SPIEGEL

It became a conversation about the issues they’ve spent their lives working on, focusing on the ChatGPT revolution and its consequences for our society. What nobody suspected at the time: It would be Amitai Etzioni's last big interview. He died two weeks later at the age of 94.

The article you are reading originally appeared in German in issue 28/2023 (July 8, 2023) of DER SPIEGEL.

DER SPIEGEL: The first question goes to both of you, father and son Etzioni. Are you afraid of artificial intelligence?

Oren Etzioni: Artificial intelligence is not a creature like the Golem or Frankenstein's monster, it's just a tool, albeit a very powerful one. I'm not afraid of the tool itself, I'm afraid of how people use it – malicious actors or rogue states, for example.

Amitai Etzioni: At some point during the Industrial Revolution, man lost control of technological and economic progress. He's been struggling to regain that control ever since. Artificial intelligence makes this race even faster and harder to win. At the same time, new social movements are emerging that do not want a society in which these forces dominate us.

DER SPIEGEL: You see people protesting against artificial intelligence?

Amitai Etzioni: At least harbingers: For me, this also includes the people who refuse to return to the office after the pandemic. They say: I want to spend more time with my family, I want to play video games instead of spending time in traffic. In the end, AI is just the tip of the iceberg for a much larger conflict in our society: Is the human at the center or the machine?

Oren Etzioni: My father and I don't quite agree on this point. I think the idea of AI conspiring against humanity in a data center and running amok is science fiction. Technology is certainly not neutral, but in the end, people make the decisions. And the powerful people operating these tools aren't politicians, they're tech entrepreneurs. Mark Zuckerberg and Elon Musk have that power, not AlphaGo. (Eds. note: AlphaGo is the AI program that defeated the world’s best in the board game Go in 2017.)

DER SPIEGEL: Your colleague Geoffrey Hinton, who laid the scientific basis for the current AI boom, has grown fearful of his creation.

Oren Etzioni: With large language models like GPT-4, you have to distinguish between intelligence and autonomy. Intuitively, we often throw the two things together, because humans are intelligent and autonomous. GPT-4 is very sophisticated and somewhat intelligent, but it has no will of its own. How much autonomy it gets is a political question, not a technological one. People decide that.

Amitai Etzioni: I have to disagree. Every new drug in the U.S. needs approval from the Food and Drug Administration. But if some people in San Francisco invent a new technology for better deepfakes tonight, we'll have to live with it – even if it goes against our values and political will. We couldn't even ban them if we wanted to.

DER SPIEGEL: Why not?

Amitai Etzioni: That ship has sailed, the technology is in the world. Even if we had a treaty with China, would Russia stick to it? Venezuela? Yemen?

DER SPIEGEL: Even OpenAI boss Sam Altman is calling for his industry to be regulated.

Amitai Etzioni: This is just PR.

Oren Etzioni: I don't think that's the best way either. We need targeted regulation. Should a single AI law cover everything from chatbots to controlling our nuclear weapons? We should rather adapt our existing laws on copyright, on drug research and all these areas to the new reality.

Amitai Etzioni: When I give lectures in Germany, I always ask who has ever been asked by a company under data protection laws whether they can pass on their data. Not a single hand ever goes up. Because the laws have so many loopholes. If there are overly broad AI laws in everyday life, I fear that the authorities who are supposed to implement them will be hijacked by the industry – just like the big U.S. banks have done with the financial regulators here.

DER SPIEGEL: Let's talk about the dangers posed by people using these powerful tools. What are you most afraid of?

Oren Etzioni: The next presidential election. And every election after that. Gutenberg's printing press made copying information virtually free, the internet made global distribution free, and now information is becoming free to produce. Voters can be bombarded with messages that sound like they were spoken by Elon Musk or Angela Merkel. Such applications need to be regulated, not GPT-4.

Amitai Etzioni: AI will destroy the truth. You can no longer believe your own eyes and ears. This is a challenge for democracy and for our community.

DER SPIEGEL: Which liar is more dangerous for democracy: Donald Trump or ChatGPT?

Amitai Etzioni: In the short term, that award still goes to Trump. He has a real shot at being president again. ChatGPT likely never will. But in the long term, AI poses the greater threat.

Oren Etzioni: A charismatic demagogue who can manipulate people is also more dangerous in the long term. ChatGPT doesn't write rousing speeches, its output is pretty average, so far at least. But it can run countless Trump-friendly chatbots on Facebook and Twitter at the same time. This is where the real danger comes from: Donald Trump combined with the money to pay for a million mini-GPTs.

Amitai Etzioni: America's Founding Fathers feared the mob being seduced by a charismatic leader. That's why they established an indirect democracy: Every two years people are allowed to vote, and in between times, they keep their mouths shut. Technology challenges this system because a leader can now reach the masses directly. Trump was already able to do this with Twitter, and he will be even more effective with chatbots.

DER SPIEGEL: AI-generated images of the Pope in a white puffy jacket and programs that clone voices from a few speech samples are already circulating. How do we control this tidal wave of fake content?

Oren Etzioni: There is no panacea, regulation can only be part of the solution. One could, for example, require digital authentication, a kind of watermark that all AI-produced content must include. The technology for this is ready. But the regulators' problems remain: They are national, while the problem is international. We need other means.

An AI deepfake of Pope Francis that went viral.

AI generated: Reddit

DER SPIEGEL: What might they look like?

Oren Etzioni: Education. We must learn how to use this technology. Grandparent scammers today can sound like the real grandson. The other day, a purported colleague texted me to ask for a favor. It wasn't until he asked me about Apple gift cards that I figured it out. With every phone call we must ask ourselves: Is this authentic?

Amitai Etzioni: When you watched television as a little boy, I would say: Think about how this advertisement is trying to manipulate you.

DER SPIEGEL: Did you still have to buy him candy?

Oren Etzioni (laughs): It's a challenge to this day.

DER SPIEGEL: ChatGPT doesn't just lie when people tell it to. For example, the chatbot recently accused an Australian mayor of a corruption scandal that the man had actually uncovered himself. Why does the program so confidently produce such absolute nonsense?

Oren Etzioni: It is difficult to calibrate the model's self-awareness because it has no concept of reality. When ChatGPT decides whether to designate the sky as blue or gray, it calculates with probabilities that differ little from each other. The model must learn over time when to say: "I don't know." And don't forget: This technology is extremely young. There will be missteps. Think of nuclear technology. I think everyone now agrees that Hiroshima and Nagasaki were missteps.

Amitai Etzioni: We need to find ways to teach the models ethics. Not what a philosopher thinks of, but the values by which different religious or social groups actually live. Different models may then reflect their different consensus on values.

DER SPIEGEL: Elon Musk allegedly wants to develop a TruthGPT because he thinks ChatGPT is too politically correct. Will we have right and left-leaning AI?

Oren Etzioni: Yes, in the end these models are primarily mirrors. When we hold this mirror up to our society, we see Republicans, Democrats and all the unsavory characters too.

DER SPIEGEL: That's also a product of these large language models likely being trained with countless posts from Twitter and Reddit – not necessarily places where people show their best side.

Oren Etzioni: As programmers we say: "Garbage in, garbage out." But the problem is solvable: People are constantly evaluating the output of models and thus, over time, teach it their values. But it is impossible to give ChatGPT an ultimate value framework because we, as a society, are at odds about it ourselves.

DER SPIEGEL: The big data analysis company Palantir offers armies an AI platform that develops battle plans and analyzes enemy targets. Palantir boss Alex Karp calls generative AI a revolution that "will raise and sink ships."

Amitai Etzioni: Imagine that: AI technology coupled with nuclear weapons and space technology that can identify every single point on Earth. Mankind must find an international agreement that says: With offensive weapons of such destructive power, artificial intelligence alone must not make an irreversible decision. A human being has to have the last word here. But we are already in an indirect war with China. I therefore do not rate the chances of such an agreement as very high. That worries me.

Oren Etzioni: You have to make a clear distinction between defensive and offensive weapons. Autonomous offensive weapons threaten our security. Intelligent defense weapons, even autonomous ones, save lives. They protect kindergartens and hospitals in the conflict between Israel and the Palestinians or in Ukraine.

DER SPIEGEL: A few years ago, employees at Google, including AI experts, went on strike because they were asked to work on a contract from the U.S. military.

Oren Etzioni: I have little sympathy for their position. I'm not happy that more and more powerful weapon systems are being developed. But I'd rather they be in our hands than in those of totalitarian regimes. Society must define where we draw the line. We technologists have a responsibility to engage in this discussion with a nuanced perspective, rather than burying our heads in the sand.

DER SPIEGEL: The global fascination with ChatGPT has triggered a race between Microsoft, Google and Meta for the best AI models. Is the technology too dangerous to be left to the private economy?

Amitai Etzioni: We will see an even greater concentration of wealth in the hands of the few. Whether Microsoft or Google prevails in the end isn’t important at all. What matters is whether the industry as a whole can prevent the U.S. Congress from making effective rules for it.

Oren Etzioni: The fundamental truth of technology remains: It has lifted hundreds of millions of people out of poverty.

Amitai Etzioni: The question is, will it continue to do so? In the past few centuries, technology has taken us from the Middle Ages to an affluent society. What will the next centuries bring? Just more gadgets?

DER SPIEGEL: Do you see artificial intelligence primarily as a threat to our society?

Amitai Etzioni: Not at all. Fake emotions can be helpful in building empathic computers. So far, we have parked elderly people in nursing homes and left them alone there because it's easier to manage. Is it better for a computer to say "I'm happy to listen to you" than no one at all? Yes, I would prefer that.

DER SPIEGEL: The virtual companion has so far been little more than a vision of science fiction.

Amitai Etzioni: Nothing takes a toll on our mental health like loneliness. Why should we only help people when they are desperate and developing depression? For this reason, artificial intelligence cannot simply be banned. We need to think about the usefulness of rules as well as their harm.

DER SPIEGEL: You have published articles on artificial intelligence together. How do you discuss these issues with each other?

Oren Etzioni: As academics, we are used to operating within the narrow silos of our discipline and our jargon. I'm lucky that my father is extremely curious and asks me questions. Sometimes we disagree, but that gets us talking about it.

Amitai Etzioni: We have a large family, and we get together every year for a two-week vacation. We need this time to talk across the boundaries of our disciplines. That's why we called a series of essays on the design of AI systems the Marbella Papers – because they were created on vacation in Spain.

DER SPIEGEL: We thank you both for this interview.