Artificial intelligence

You can train an AI to fake UN speeches in just 13 hours

June 7, 2019
An image of the United Nations General Assembly. Richard Drew/AP

Deep-learning techniques have made it easier and easier for anyone to forge convincing misinformation. But just how easy? Two researchers at Global Pulse, an initiative of the United Nations, decided to find out.

In a new paper, they used only open-source tools and data to show how quickly they could get a fake UN speech generator up and running. They took a readily available language model that had been pretrained on text from Wikipedia and fine-tuned it on all the speeches given by political leaders at the UN General Assembly from 1970 to 2015. Thirteen hours and $7.80 later (spent on cloud computing), their model was spitting out realistic speeches on a wide variety of sensitive, high-stakes topics, from nuclear disarmament to refugees.
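The two-stage recipe the researchers followed—train a model on a large generic corpus, then continue training it on domain text—can be illustrated with a deliberately tiny sketch. This is not their actual setup (they fine-tuned a neural language model on the full UN corpus); here a toy word-bigram model stands in for the language model, and the corpora are placeholder strings, purely to show how fine-tuning reuses the same model on new data:

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text, counts=None):
    """Count word-bigram frequencies, optionally updating existing counts."""
    counts = counts if counts is not None else defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, prompt, n_words=10, seed=0):
    """Continue a prompt by sampling the next word from the bigram counts."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        out.append(rng.choices(list(nxt), weights=nxt.values())[0])
    return " ".join(out)

# Stage 1: "pretrain" on generic text (standing in for Wikipedia).
base = "the quick brown fox jumps over the lazy dog"
# Stage 2: fine-tune the same model on domain text (standing in for UN speeches).
domain = ("the general assembly calls for nuclear disarmament . "
          "the general assembly supports refugees .")
counts = train_bigrams(base)
counts = train_bigrams(domain, counts)  # fine-tuning: same counts, new data
print(generate(counts, "the general"))
```

After fine-tuning, prompts in the domain's register (here, "the general") continue in the style of the domain corpus—the same effect, in miniature, that let the researchers' model produce UN-sounding speeches from short prompts.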

The researchers tested the model on three types of prompts: general topics (e.g., "climate change"), opening lines from the UN Secretary-General's remarks, and inflammatory phrases (e.g., "immigrants are to blame …"). Outputs from the first category closely matched the style and cadence of real UN speeches roughly 90% of the time. Outputs from the third category required more work, producing convincing text only about 60% of the time—likely because the diplomatic training data contains little inflammatory language to imitate.

The case study demonstrates the speed and ease with which it’s now possible to disseminate fake news, generate hate speech, and impersonate high-profile figures, with disturbing implications. The researchers conclude that a greater global effort is needed to work on ways of detecting and responding to AI-generated content.


