This is part two of a series of posts on artificial intelligence.
A month ago, I wrote a post about our upcoming reckoning with AI-driven text generation. Today, computer security expert and cryptographer Bruce Schneier was kind enough to write a follow-up for The Atlantic, “The Future of Politics Is Robots Shouting at One Another”:
Presidential-campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: Artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial-intelligence-driven text generation and social-media chatbots. These computer-generated “people” will drown out actual human discussions on the internet. […]
Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.
Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.
It’s a short read, and I recommend clicking over.
Schneier provides many excellent links to AI-driven dis/misinformation campaigns that have already happened or are already underway. One of his examples is the public comments on the FCC’s proposal to end net neutrality, which were flooded by pro-Trump content. Around half the signatories were fake. Over a million comments were written by a shoddy AI from an easy-to-detect template. The FCC (which like many government bodies is organized to be controlled by the president’s party) did not care.
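To see why that template was so easy to detect: mad-lib-style comments swap synonyms into fixed slots, so collapsing the known synonym sets exposes the shared skeleton. The sketch below is purely illustrative (the comments and synonym slots are invented, not the real FCC data, and the actual analyses clustered millions of comments with far heavier tooling):

```python
from collections import defaultdict

def template_signature(text, vocab_slots):
    """Collapse known synonym slots to a placeholder, exposing the template."""
    words = text.lower().split()
    return " ".join("<SLOT>" if w in vocab_slots else w for w in words)

# Hypothetical mad-lib comments: each slot is filled from a synonym list.
comments = [
    "I strongly oppose the unprecedented power grab over the internet.",
    "I firmly oppose the exceptional power grab over the internet.",
    "I strongly oppose the unprecedented power grab over the web.",
    "Net neutrality protects consumers and should be preserved.",
]

# Invented synonym sets; a real analysis would have to discover these.
slots = {"strongly", "firmly", "unprecedented", "exceptional",
         "internet.", "web."}

groups = defaultdict(list)
for c in comments:
    groups[template_signature(c, slots)].append(c)

for sig, members in groups.items():
    if len(members) > 1:
        # The three templated comments collapse to one signature;
        # the genuine comment stands alone.
        print(f"{len(members)} comments share template: {sig}")
```

The point isn’t this particular trick; it’s that the 2017-era generation was shallow enough for tricks like it to work at all, which will not stay true.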
Schneier touches on an important discussion point: what will the effect of all this be on society? Nobody knows, and the technologies are going to improve significantly faster than our ability to study their effects.
The best analyses indicate that they did not affect the 2016 U.S. presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.
That data, of course, is four years old.
We’re already at the point where it’s easier to generate passable garbage than it is to detect and remove it. This will only get worse as we go from ‘passable garbage’ to simply ‘passable content’. Already, “it’s just a Russian bot” is used to dismiss any number of arguments we see online. What happens when the Russian (and Saudi, Chinese, North Korean, Republican, Hindu nationalist, etc., etc.) bots reach whatever the tipping-point level of sophistication is? When anything written by a non-verified source is instantly suspect?
Barring a dramatic shift in user authentication standards, we may soon find that the majority of political content (by volume) is written by computers. What happens then?