Part three of a series on artificial intelligence.
My earlier posts on this topic dealt with some fairly sophisticated text-generation AIs from the present and (likely) near future. But most of the AI you encounter is very mundane, and it often slips under your radar. AI is more ubiquitous than you may hope… and significantly stupider than you may fear.
So goes the thesis of Janelle Shane’s newish book, You Look Like A Thing And I Love You: How Artificial Intelligence Works And Why It’s Making The World A Weirder Place. It’s an easy-to-read, fun introduction to what AI is, what it is not, how it works, and how it doesn’t. And it has very cute illustrations. Here is a brief introduction to some of the concepts in the book, told much less accessibly than Shane tells it, alas. Please bear with me for a moment.
What we call AI is many different things doing different tasks in different ways. Much of the time you hear about AI, it’s about deep learning. What is deep learning? It’s largely a highly lucrative rebranding of multilayer perceptron neural networks, which have been around in one form or another since the 1960s. What is a perceptron? Inspired by neurons, a perceptron converts a series of inputs into a single output: multiply each input by a weight, add everything up, and see whether the total clears a threshold. Let’s imagine a shape-classification perceptron with inputs like ‘number of corners’, ‘circumference:area ratio’, and for some reason ‘color’. It would begin with random weights–let’s say it thinks ‘number of corners’ is very, very important. But each time it calls a diamond a square, the weights are adjusted, just a little, so that it will be less wrong in the future (or so we hope). Where can this go wrong? Let’s say the data this perceptron is trained on contains a lot of shapes from playing cards. It would end up learning that color is highly predictive. In the future, when you showed it a red circle, it would probably tell you it was a heart!
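To make that a little more concrete, here is a toy sketch of a single perceptron in Python. Everything in it (the feature encoding, the made-up training shapes, the learning rate) is invented purely for illustration, but the weight-update step is the classic perceptron learning rule described above: guess, compare to the right answer, nudge the weights.

```python
import random

def predict(weights, bias, features):
    # Weighted sum of the inputs, squashed to a yes/no answer ("is this a square?").
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

def train(examples, epochs=20, learning_rate=0.1):
    # examples: list of (features, label) pairs; label 1 = square, 0 = not a square.
    n_features = len(examples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n_features)]  # start out random
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, bias, features)
            # Nudge each weight a little so the next guess is less wrong (we hope).
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

# Made-up features: [corners, circumference:area ratio, color (0 = black, 1 = red)].
shapes = [
    ([4, 1.2, 0], 1),  # black square
    ([4, 1.2, 1], 1),  # red square
    ([0, 0.9, 1], 0),  # red circle
    ([3, 1.5, 0], 0),  # black triangle
]
weights, bias = train(shapes)
print(predict(weights, bias, [4, 1.2, 1]))  # hopefully prints 1 ("square")
```

The playing-card failure mode lives entirely in the data: if every square this thing ever saw happened to be red, the color weight would end up doing most of the work, and a red circle would start to look suspiciously square.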
A multilayer perceptron is simply a stack of perceptrons feeding into each other. The next one in the chain considers the output of our ‘which shape?’ perceptron, alongside other data, when performing its own task. You can probably see how this compounds errors in odd ways. And because an individual perceptron’s learned rules are usually ineffable, these things can be rather hard to debug.
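Here is the ‘multilayer’ part in the same toy style. The second-layer perceptron never looks at the raw shape at all, only at the verdicts of the first layer, so any mistake upstream gets passed along. The hand-picked weights are made up just to show the wiring; in a real network they would all be learned.

```python
def perceptron(weights, bias, inputs):
    # The same weighted-sum-plus-threshold unit as before.
    return 1 if bias + sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

# Layer 1: two perceptrons looking at the same raw features
# [corners, circumference:area ratio, color (0 = black, 1 = red)].
def is_square(features):
    return perceptron([1.0, -0.5, 0.0], -2.0, features)

def is_red(features):
    return perceptron([0.0, 0.0, 1.0], -0.5, features)

# Layer 2: a perceptron that never sees the raw shape, only layer 1's verdicts.
# If layer 1 gets the shape wrong, this one has no way to notice; it just
# inherits the mistake.
def looks_like_a_playing_card(features):
    hidden = [is_square(features), is_red(features)]
    return perceptron([1.0, 1.0], -1.5, hidden)

print(looks_like_a_playing_card([0, 0.9, 1]))  # a red circle: any upstream error compounds here
```

Real deep-learning systems stack many such layers with thousands or millions of weights, which is exactly why the rules they learn are so hard to inspect.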
Take the example of Microsoft’s Azure image-recognition AI. It was very good at identifying pictures of sheep while it was being trained. But when it was put to the test, it identified any green pasture as a picture of sheep! It also saw giraffes everywhere: it had learned that ‘giraffe!’ was often a better answer than ‘I don’t know’, probably because there were a few too many pictures of giraffes in the training data. And if you asked it how many giraffes there were, it would give you a weirdly high number–because the training data didn’t have any pictures of individual giraffes. This stuff can get very weird, very fast, as Shane illustrates throughout her book.