This is part two of a series of posts on artificial intelligence.
A month ago, I wrote a post about our upcoming reckoning with AI-driven text generation. Today, computer security expert and cryptographer Bruce Schneier was kind enough to write a follow-up for The Atlantic, “The Future of Politics Is Robots Shouting at One Another”:
Presidential-campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: Artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial-intelligence-driven text generation and social-media chatbots. These computer-generated “people” will drown out actual human discussions on the internet.
[…] Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.
Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.
It’s a short read, and I recommend clicking over.
Schneier provides many excellent links to AI-driven dis/misinformation campaigns that have already happened or are already underway. One of his examples is the public comments on the FCC’s proposal to end net neutrality, which were flooded by pro-Trump content. Around half the signatories were fake. Over a million comments were written by a shoddy AI from an easy-to-detect template. The FCC (which like many government bodies is organized to be controlled by the president’s party) did not care.
Schneier touches on an important discussion point: what will the effect of all this be on society? Nobody knows, and the technologies are going to improve significantly faster than our ability to study their effects.
The best analyses indicate that they did not affect the 2016 U.S. presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.
That data, of course, is four years old.
We’re already at the point where it’s easier to generate passable garbage than it is to detect and remove it. This will only get worse as we go from ‘passable garbage’ to simply ‘passable content’. Already, “it’s just a Russian bot” is used to dismiss any number of arguments we see online. What happens when the Russian (and Saudi, Chinese, North Korean, Republican, Hindu nationalist, etc., etc.) bots reach whatever the tipping-point level of sophistication is? When anything written by a non-verified source is instantly suspect?
Barring a dramatic shift in user authentication standards, we may soon find that the majority of political content (by volume) is written by computers. What happens then?
john b
It doesn’t help that reporting these users / comments is laughably ineffective.
Goku (aka Amerikan Baka)
We track where all this is originating from and physically destroy the servers and/or computers, or infect them with a virus that destroys their software?
cleek
wait until ‘deep fake’ video is easy, cheap and convincing enough to look real when viewed in the narrow space of a FB feed.
politics will be impossible if you can’t know what’s real and what’s not.
Goku (aka Amerikan Baka)
Test
J R in WV
I have loved Bruce’s stuff for years now, like 20 of them…
When I was still working I felt like I needed to track all the types of malware out there, and he was one of the many sources I tracked continuously. Less so now, only responsible for our own laptops instead of 800 users of our sophisticated custom systems across the whole state. But still, Bruce is great for this kind of stuff.
Thanks for helping us stay current on bots and malware!
The Moar You Know
And friends of mine wonder why I’ve been disengaging from the internet, especially social media, for the last few years.
I work in the field, is why. I know what’s coming
The very best case scenario, and I think this is a remote possibility at best, is that people realize that social media of any sort is no longer reliable for anything and they stop using it and start talking to their neighbors more.
Like I said, extremely unlikely.
Major Major Major Major
@Goku (aka Amerikan Baka): Basically none of that is a thing.
Goku (aka Amerikan Baka)
@cleek:
This reminds me of Jeff Goldblum’s line from Jurassic Park about never thinking about whether you should just because you could do something
dm
The media that carry this material will become recognized as the supermarket tabloids they are, probably, and become less persuasive. It’s not like we were immune to yellow journalism before. While we’re waiting for that to happen we’ll see a great deal of stupidity.
Baud
The Internet helped weaken information gatekeepers (which is good) but it hasn’t replaced the role they served in weeding out less credible content (which is bad).
The key question in my mind is how to wean people off of the addiction of treating information as credible simply because you wish the information to be true.
JaySinWA
Mandatory Statement: “I, for one, welcome our new robot overlords.”
One goal of disinformation, as I understand it, is to destroy trust in any source. An end to any objectively verifiable truth.
chris
@Major Major Major Major: Does an AI require an actual physical location?
Major Major Major Major
This will be different because of sheer volume. The whole concept of truth existing anywhere will be compromised.
Major Major Major Major
Yes and no. I can spin up a virtualized computer designed for AI use via Amazon or Google with the click of a button. This “computer” will “exist” on a globally-distributed hardware grid which is designed to be physically fault-tolerant.
Goku (aka Amerikan Baka)
@Major Major Major Major:
I had a feeling it wasn’t that simple. This stuff is pretty scary. I’m sorry if I come across sounding pretty ignorant on these things
Sebastian
There are two major scourges on human civilization right now:
*) untraceable cash. It allows for unrestrained corruption.
*) anonymity on the internet. It allows for unrestrained behavior and bots.
I say it’s time to remove both.
Sebastian
@Baud:
Wired Magazine has an article up about the limits of AI. An interesting tidbit was that Facebook is using AI to make their products more addictive.
It would be interesting to regulate those algos in the same way we regulate nicotine or other controlled substances.
chris
@Major Major Major Major: Thanks, that’s kinda what I thought.
chris
Pertinent.
MattF
Calls to mind the robocall plague. My own ‘solution’ has been to stop answering calls from anyone I don’t know, and I suspect that we’re headed in a similar direction with high-level bots on social media. And, you know, it’s not so terrible. I’ve mostly disengaged from the obvious cesspools and stick now to trusted sources.
Citizen_X
And people wonder why the Butlerian Jihad started to look like a solution.
Kelly
Is there any work on a “white hat” AI to detect the bad bots? Sorta like virus detection?
VeniceRiley
Won’t someone think of the poor human disinformation workers who will be jobless?
H.E.Wolf
@Sebastian:
Anonymity is a double-edged sword, isn’t it? It protects the vulnerable (survivors of domestic violence; targets of misogyny, racism, and other forms of hatred) and it also protects predators and disinformation promulgators.
Lack of anonymity might slow down the latter groups’ participation on the internet. It will certainly increase the risks of participation for the former groups.
I’ve noticed other tactics and strategies that are currently being deployed. It’s interesting to see posters at our local libraries (both city and University) that offer skills training in evaluating online sources and recognizing disinformation.
Meanwhile, I also noticed the in-post statement that the FCC takes on the coloration of the governing party. Let’s get out the vote!
JGabriel
Major^4 @ Top:
I think we may need a stronger Turing test. Clearly all that one needs to do to convince someone that an AI is actually a human is for it to target a conservative, and spew right-wing disinfo.
Perhaps we should stipulate that a truly convincing AI would be one that can convert a conservative to a social democrat.
H.E.Wolf
@Citizen_X:
Not to mention the eugenics. What could go wrong with either of those two solutions? :)
JGabriel
@The Moar You Know:
I’m not sure that would have much efficacy in a red state where most of the neighbors watch Fox News.
I mean, does it really matter whether they get their lies from Facebook or Fox & Friends? Either way, they’re still just as wilfully gullible and wilfully malinformed.
West of the Cascades
It seems that, given a party in government willing to address false AI “speech” through regulation (e.g. prohibiting bots from posting, and putting the responsibility to monitor this back on the on-line forum providers, e.g. by removing the liability shield in Section 230 of the 1996 Communications Decency Act), it ought to pass constitutional muster because there’s no argument I can see that anonymous, false speech that is not performed by a human is protected under the First Amendment. Of course, the current SCOTUS might find that the regulation isn’t valid under the Commerce Clause, but that’s another story …
Major Major Major Major
@Kelly: AIs leave fingerprints on the content they generate, but it’s on a per-model basis. So we can measure the likelihood of GPT-2-generated content (see earlier post) but would need a totally different detector for each other generator we want to detect.
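Roughly, a per-model detector boils down to scoring text against a statistical profile of a known generator’s output: text that generator wrote tends to score high, text from anywhere else scores low. Here’s a toy sketch of that idea, with character trigrams standing in for a real language model; everything in it is illustrative, not GPT-2’s actual detector:

```python
from collections import Counter
import math

def char_trigrams(text):
    """Overlapping character trigrams of a string."""
    return [text[i:i + 3] for i in range(len(text) - 2)]

def build_profile(samples):
    """Frequency profile of trigrams seen in known generator output."""
    counts = Counter()
    for s in samples:
        counts.update(char_trigrams(s.lower()))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def score(text, profile):
    """Average log-probability of the text's trigrams under the profile.
    Higher (less negative) means more similar to the profiled generator."""
    grams = char_trigrams(text.lower())
    if not grams:
        return float("-inf")
    floor = 1e-6  # smoothing for trigrams the profile never saw
    return sum(math.log(profile.get(g, floor)) for g in grams) / len(grams)
```

The sketch also shows the limitation: the profile is built from one generator’s output, so each new generator needs its own.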
Major Major Major Major
Pretty sure if we’re ruling on something that wacky, it’ll be calvinball constitutional analysis on all sides.
Just Chuck
Optimistic take is maybe the news orgs will get down and do their fucking job and treat unverified information with the skepticism it deserves.
Realistic depressing take is that it’ll just continue to dig us all deeper into our own private realities where truth doesn’t matter, à la Facebook.
Just Chuck
@Sebastian: How do you propose to remove internet anonymity? Mandatory registration? Is your name really Sebastian? Prove it.
Another Scott
@The Moar You Know: +1.
What with more stories about “influencers”, ginned-up wars against neighbors, and all the rest, why would anyone believe anything that’s not posted by (and not just a forward by) very close real-life friends and relatives?
I’ve never been tempted by FB. I have occasionally been tempted by Twitter, but have always successfully resisted. I like to think that people will realize that just as there is not enough time (or benefit) to watch 1000 channels and Amazon Prime Video and Apple TV+ and Netflix and all the rest, there are better things to do than being bombarded by fake stuff online.
Here’s hoping!
Thanks M^4.
Cheers,
Scott.
moops
Actually, a Turing Test might be helpful now. None of the current chatbots are up to a basic human-driven Turing test.
Aardvark Cheeselog
We are so fvcked.
Sebastian
@Just Chuck:
yeah, I was waiting for the old usenet taunt “why aren’t you de-anonymizing first?!”
Kelly
@Major Major Major Major: The big surprise to me is the number of people that get taken in by crazy BS from out of nowhere. My personal BS detection depends on a chain of trust. I have people I’ve followed for years so I have had a chance to audit their information. Many of them were prominent before the internet. Norman Ornstein is a good example. When Norman Ornstein links to our Adam Silverman, Adam becomes more credible. However if your chain of trust starts with Alex Jones…
schrodingers_cat
BJP has already weaponized social media in India. Twitter and WhatsApp. It’s a centralized operation. Artificial intelligence has not been necessary to turn WhatsApp into Radio Rwanda. Every day husband kitteh wakes up with elebenty messages on evil liberals and Muslims from his elderly relatives in India.
An example from today: mega Hindi movie star Deepika Padukone went to JNU this evening (it’s night in India right now) and silently stood behind the injured student body president, and within hours the BJP IT cell had a boycott of her newest release (Jan 10) trending on Twitter.
Roger Moore
@JGabriel:
If you read the original paper where Turing proposed his test, he made it clear the tester was supposed to be a skeptical person doing their best to determine who was a real person and who was a computer. That rules out conservatives responding to right-wing disinformation.
Kelly
@Another Scott:
I haven’t either. My wife is on it daily. Out here in the boondocks it has some value for sharing local goings on and that’s where pics of the grandkids get posted. However she picks up a bit of nonsense or old news that upsets her every couple of weeks.
Major Major Major Major
BJP is already using AI on Twitter.
ETA: just based on personal observation, I haven’t looked for any papers on it
Brachiator
I have suspected this of Balloon Juice for some time now. ;)
Good question. Political discourse becomes nothing but spam. Does this drive people away, or would people continue to read, and engage with, known AI spam? What if computer-driven discourse becomes engaging and coherent? Could people actually learn anything from it?
Of course, we have already seen what happens when political AI goes wrong. You get Mitt Romney, Mittbot 2000.
schrodingers_cat
@Major Major Major Major: Possible. There are many accounts that say identical stuff. That they’re not real people is kinda obvious. How did you tell?
Plus if you switch to a local language the replies are kinda nonsensical and # of troll accounts responding drops precipitously.
mapaghimagsik
What happens then? Skynet becomes self-aware!
Sorry. It’s my go-to answer for almost any AI/ML/ES system doing something.
Major Major Major Major
@schrodingers_cat: Perhaps “AI” is overselling it, but there are definitely loads of bots operating from templates. Here’s a brief Economist article (free with registration): https://www.economist.com/asia/2019/04/11/indias-election-campaign-is-being-fought-in-voters-pockets
Goku (aka Amerikan Baka)
@Another Scott:
Trust me, you’re not missing much with FB. I prefer to be anonymous online and I keep in contact with friends and family via other means such as phone, text, and email, so I’ve never understood the appeal myself.
Now, streaming services like Amazon Prime are a different story and true enough you wouldn’t have enough time to watch all of the content, but you don’t have to. Most tv shows/movies suck outright
Goku (aka Amerikan Baka)
@Major Major Major Major:
Isn’t what most people call “AI” actually just “machine learning”? At least that’s what I’ve read
Brachiator
@schrodingers_cat:
BJP has already weaponized social media in India. Twitter and WhatsApp. It’s a centralized operation.
This reminds me of a Forbes story from 2018 that stuck in my mind.
Add to this the attempts to manipulate and control social media and you have a large ongoing war against informed citizens.
A recent BBC news story provides a vivid idea of what is happening:
ETA: apologies for the bad formatting.
Just Chuck
@Sebastian: I don’t care if you do it first or ever, I still pose the original question of how you propose to de-anonymize on any scale at all.
RSA
Machine learning is the currently most successful area of AI (though it overlaps with a number of other fields), so it’s caught people’s attention. More generally, ML is only part of AI, a larger scientific and engineering discipline.
Just Chuck
@Brachiator: I dunno, the Mittbot wasn’t calibrated for empathy and had some weird issues about the height of trees, but overall the quality of his engineering seems superior to the current generation of NutJobBots. Those may emulate human behavior better, but they don’t seem to have any filters on outright psychosis.
Major Major Major Major
There’s no “just” about machine learning. It refers to computers taking data and inferring their own rules for operating on it. Basically any computer intelligence will utilize this.
What many people mean is deep learning, maybe, which is a particular approach built on many-layered neural networks.
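The “inferring its own rules from data” part doesn’t need anything fancy; even least-squares line fitting qualifies, since the program derives its parameters from examples instead of having them written in by hand. A minimal illustrative sketch:

```python
def fit_line(xs, ys):
    """Infer slope and intercept from example data via ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# The "learned rule" y = 2x + 1 comes entirely from the data:
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Deep learning does the same thing in spirit, just with millions of parameters instead of two.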
Just Chuck
One of my favorite Edsger Dijkstra aphorisms is: “The question of whether machines can think is about as relevant as the question of whether submarines can swim”.
In other words, it doesn’t matter whether a computer does it like us, it matters that they do it better. Does a computer truly “understand” chess? Who knows, but we do know it can kick our asses at it. It segues into the notion of consciousness: the whole idea of computers becoming “self aware” is a nebulous human thing that a computer simply has no _need_ for. If any AI developed a “consciousness”, it would be so foreign and different to us that we likely would never recognize it. It has no human body, so none of the human wants or needs, so why would anything it “thinks” for itself be at all familiar to us?
My biggest worry with AI is not “what will it do with us when it gets its own will?”, it’s the IMHO far more current and pertinent concern: Who does it work for? Right now they’re working for some pretty bad actors.
Still, my favorite evil-AI story has to be Harlan Ellison’s “I Have No Mouth, and I Must Scream”. Imagine an AI that wakes up and feels emotions millions of times more powerful than any human can imagine. Actually just one emotion: Hate.
Goku (aka Amerikan Baka)
@RSA:
@Major Major Major Major:
I see. Thanks. So, machine learning is a branch of AI, then?
I know AI is typically divided into two categories: strong and weak AI, with strong AI being the one that is self-aware and most familiar to the public in fiction
Just Chuck
@Goku (aka Amerikan Baka): “Strong AI” is generally defined as either “stuff we don’t know how to do”, or “stuff we can’t even define how we do ourselves”. I think one of the results of AI research we’ll find is not that we’ll make machines into something special, but that we’ll find out that we ourselves are not as special as we think. We just, uh, think we are.
Major Major Major Major
@Just Chuck:
Heh, I don’t think I’ve seen that one before.
I find the question of whether computers can think to be functionally equivalent to the question of whether humans can think. Which is actually an open question in the philosophy of mind.
Bill Arnold
@Sebastian:
This makes political opposition non-viable. Do I need to enumerate the ways? Even in a free society, many people have vindictive employers or neighbors, and internet-driven harassment is also a thing.
(And yeah, I’m using my real name, here.)
This is a few years old but still good. I’d go with https://protonmail.com/ (Switzerland) rather than gmail; gmail will sometimes do random 2fa to an expired burner.
Twitter Activist Security – Guidelines for safer resistance (thaddeus t. grugq, Jan 30, 2017)
BellyCat
@Bill Arnold: What is “…random 2fa to an expired burner”?
BellyCat
Not a Tweeter, but does Twitter’s “verified account” lean in the right direction? (assuming it works properly).
If so, can one currently only allow comments from, or filter tweets by, verified users?
Major Major Major Major
@BellyCat: Twitter never really explained how that works, and largely has stopped doing it; it’s considered to be a big failure as far as verification policies go.
Facebook tries to verify humans too, believe it or not. Twitter’s program, for its many faults, seems to have only verified actual humans though.
RSA
@Just Chuck:
Dijkstra was very quotable! Here’s Turing, from 1950:
RSA
@Just Chuck: Just to follow up to @Goku (aka Amerikan Baka):
When I started grad school almost three decades ago, one of the conventional divisions in AI was between symbolic approaches (such as search, logic, and classical planning) and what we might call numerical or optimization-based approaches (here I’ll lump together connectionism, probabilistic reasoning, most of machine learning). This isn’t a great division, in part because it’s so far from being strict. To be honest, I’m not sure it’s possible to divide up AI into a small number of pieces. There’s a huge amount of flow and interplay between so many of its branches.
Strong versus weak AI is more of a philosophical distinction than one made by people working in the field.
Bill Arnold
@BellyCat:
To make an anonymous gmail account you need an anonymous phone number that can be texted (if that’s the current procedure). This is often a burner phone, used once for creating an anonymous account (or an email/twitter account pair) from a reasonably anonymous location (not one’s home) (with any web stuff done over tor), then discarded. Unfortunately, google has started to do apparently random checks on email accounts, e.g. with a two-factor authentication code to the phone number on file. If the phone no longer exists, one is locked out of the gmail account, unless there is a recovery email, which in turn would need to be anonymous.
This is more paranoia than most people are willing to deal with. protonmail might be a reasonable compromise (if not compromised by intelligence agencies) because it has reasonably strong (Swiss) privacy policies even if not set up with an anonymous phone. Free 500MB email account, a bit slow and awkward to access.
BellyCat
@Major Major Major Major: Interesting. As a non-Twitter person, I’m unsure if my second question is possible. Can one limit (or filter) threads only by verified users?
Formerly disgruntled in Oregon
Require CAPTCHA for every social media login and post.
“I’m not a robot”
Bill Arnold
@BellyCat:
Depends on what you mean by right. It blocks a lot of people from being politically active. And there are some quite old, quite anonymous accounts on twitter that are quite good. Bluecheck is not an indicator of quality. Many bluecheck accounts are quite vile fountains of misinformation.
BellyCat
@Bill Arnold: Thanks for the explanation!
ETA: Double-edged sword about verification. Naïvely, I wonder about the possibility and/or benefits of allowing anonymous users while verifying people’s true identities in some secure database. (Balloon Juice almost has some kind of informal equivalent!)
Another Scott
Relatedly, coming to a store near you, … CNet:
Or posters in FYWP arguments!!1
Greeaaat…
Cheers,
Scott.
Matt
Some of it is a question of filtering, and what people are selecting for when they read political commentary.
I assume people don’t want to be spammed by 10k bots spewing the same talking point over and over again to the point that they can’t read what their human neighbors are writing, but does that mean we should cut out all computer generated feedback? What if an acceptably strong AI is made that provides insightful political commentary that I would care to read? Plenty of humans create garbage content that I want to filter out, as letters to the editor in most any newspaper demonstrate.
I would hope that people would want to filter commentary by some mixture of insightful+intelligent, representative of how people are likely to vote, and representative of a diverse mix of viewpoints. We can’t easily automate measuring that, but that is where I would want to start.
Of course, most people if left to choose for themselves will want what is entertaining and agrees with their existing viewpoints. We then have the cultural issue of setting expected or default filters to push people to not fall into their own solipsistic filter bubbles.