As evidence continues to mount that we are living in a cyberpunk dystopia, I’ve decided to do a series of posts on artificial intelligence. This one is about text generation.
I’m something of an AI skeptic. While AIs have proven to be very effective in some areas, these are mostly tasks with a narrow scope and a well-defined set of rules, such as playing chess and go, or determining whether something is a picture of a cat. More complex tasks like driving have proven to be a lot harder. Driving in particular involves very ambiguous inputs that are highly context-dependent, two things that AIs have trouble with.
(Let’s please continue reading and not get hung up on the political and ethical roadblocks to self-driving cars.)
AIs have also historically had trouble generating text and images. Not because these are magical tasks that only humans can perform; it’s simply been difficult to make computers good at them. Well, much to my surprise, this may be coming to an end in the next few years. And I don’t mean that in the “self-driving cars are always five years away” sense.
In the last couple of years, computerized content generation has made some remarkable advances. Deepfakes, videos that believably replace one person with somebody else, are proliferating, and they’re fairly easy to make. AIs are frighteningly good at generating faces now, too.
But what about text? Google Translate has gotten much more sophisticated lately as Alphabet has refined the neural networks that power it. But is translation more like writing, or more like playing go? I’d say it’s in between.
Which brings us to writing. Earlier this year, the nonprofit group OpenAI built a text-generation neural network that was, in their opinion, too frightening to release. I called malarkey, but now that they’ve been gradually releasing more sophisticated versions of their model, I call… less malarkey. What this means for you, and some examples, below the fold.