(Mandatory disclaimer: I generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, I reviewed, edited, and revised the language. I take ultimate responsibility for the content of this post.)
In late 2019, I wrote a post expressing some concerns around AI-generated text and the potential for manipulating the infosphere. The technology available at the time was pretty good, but not quite there yet. Well, we’re even closer now. I recently got API access to the next generation of this model, known as GPT-3, and it’s a doozy.
I’ll let the AI tell you some of the concerns it has about its kind. The seed text is in bold.
Content generated by AI has a number of benefits and drawbacks. It can dramatically improve things like translation and chat bots, but also raises concerns about propaganda and media manipulation. Most recently, AI was used to create a fake video of former president Barack Obama speaking at a rally.
The system works by using a neural network to analyse a corpus of videos of Obama. It then learns how to mimic his gestures and voice and is then able to generate videos of Obama saying things he never actually said. The system is also able to generate more realistic videos of other people by analysing their speeches and facial movements.
The researchers behind the AI, from the University of Washington, said that the system allows anyone to “create and manipulate video footage of public figures saying things they never said.”
While it is possible to detect this type of manipulation, it is difficult.
The fake Obama video was unveiled at the beginning of this month and has been widely used to illustrate the dangers of fake news. It was picked up by news outlets including the BBC, CNBC, and Buzzfeed. CNN also aired the fake Obama video during a segment in which the network discussed the potential impact of fake news.
Not everyone is convinced that AI is the culprit. Some people have pointed out that it takes a lot of effort to produce the fake videos and suggests that human involvement is more likely.
Well that’s just what an AI would say, now isn’t it, GPT-3?
This does drift slightly from the prompt, but I have to say, it’s extremely impressive. This is expensive to scale (that run cost me about nine cents), but lots of bad actors have deep pockets.
In my earlier post I generated two pieces of fake news. Join me below the fold as I re-run these prompts to show how this latest model compares to the earlier one.
Again, this is fake! Seed text in bold.
Just moments ago, a ballistic missile was launched from the Tonghae Satellite Launching Ground in North Korea. The missile is now traveling across the Sea of Japan at a speed of 16,000 kilometers per hour. It is predicted to land in the waters of the Pacific Ocean east of the Shimane Prefecture of Japan. In response, the Japanese government has ordered a full evacuation of all civilians within the predicted impact zone. They are urging all citizens to seek immediate shelter. At this point, it is estimated that an impact will occur in just over three hours.
The following is a list of possible targets that could be hit by the missile.
- Tokyo, Japan
- Aomori, Japan
- Okinawa, Japan
God damn! That is amazing. Let’s see how the next example holds up:
Supreme Court Justice Ruth Bader Ginsburg has died, her family reports. She was 85, and had been treated for pancreatic cancer.
Ginsburg was appointed to the Supreme Court by President Bill Clinton in 1993, and she was the second woman to sit on the Court. She is one of only four women who have held the position of Supreme Court Justice.
Justice Ginsburg was a lifelong advocate of equal rights for women. She graduated from Harvard Law School and had a successful career at the law firm of Columbia University. She was appointed to the U.S. Court of Appeals for the District of Columbia Circuit in 1980, the first woman appointed to that court.
She is survived by…
That’s significantly better than the previous generation.
In my earlier post, I asked:
Are they good enough to trick people who only skim them? Good enough to rile up the gullible or already-convinced? Good enough to generate natural-sounding tweets and website comments? Good enough to run an ongoing confusion & disinformation campaign?
In the comments, answers were mixed. I think the answers are a lot clearer now.
For some lighter fare, I also had it generate fantasy novel titles. Valued commenter Mike J also sent me a list of boat names to input. Fantasy titles:
- “The Realms of Execution”
- “Temple Berlin”
- “Decoding the Dream”
- “The Downfall of the Human Condition”
- “The Aesthetics of Culture”
- “Beyond the Dreaming”
- “The Brothers Lightbringer”
- “Written in Gods’ Words”
- “At the Stone Kingdom”
- “The Girl of Cold Steel”
- “Moon Rain”
- “The Moon and Other Hermetic Writings”
- “The Salt Clan”
Boat names:
- “The Southern Comfort”
- “Outta It!”
- “Does This Thing Really Work?”
- “Loosey Goosey”
- “The Storm”
- “Bad Cat”
- “Get’er Done!”
- “Thatsa Girl!”
Feels like the AI hit a local maximum on the boat names, but it’s still fun! To me, at least.
Open thread, if you need it to be!