I’m a regular watcher of Last Week Tonight with John Oliver, so in February I was looking forward to his take on “AI” and the large language models and image generators that many people have been getting excited about lately. I was not disappointed: Oliver heaped a lot of much-deserved criticism on these technologies, particularly for the ways they replicate prejudice and are overhyped by their developers.
What struck me was the way that Oliver contrasted large language models with more established applications of machine learning, portraying those as uncontroversial and even unproblematic. He’s not unusual in this: I know a lot of people who accept these technologies as a fact of life, and many who use them and like them.
Yet I realized how many of these technologies I myself find problematic and avoid, or even refuse to use. And I’m not some know-nothing: I’ve worked on projects in information retrieval and information extraction. I developed one of the first sign language synthesis systems, and one of the first prototype English-to-American Sign Language machine translation systems.
When I buy a new smartphone or desktop computer, one of the first things I do is to turn off all the spellcheck, autocorrect and autocomplete functions. I don’t enable the face or handprint locks. When I open an entertainment app like YouTube, Spotify or Netflix I immediately navigate away from the recommended content, going to my own playlist or the channels I follow. I do the same for shopping sites like Amazon or Zappos, and for social media like Twitter. I avoid sites like TikTok where the barrage of recommended content begins before you can stop it.
It’s not that I don’t appreciate automated pattern recognition. Quite the contrary. I’ve been using it for years – one of my first jobs in college was cleaning up a copy of the Massachusetts Criminal Code that had been scanned in and run through optical character recognition. For my dissertation I compiled a corpus from scanned documents, and over the past ten years I’ve developed another corpus using similar methods.
I feel similarly about synonym expansion – modifying a search engine to return results including “bicycle” when someone searches for “bike,” for example. I worked for a year for a company whose main product was synonym expansion, and I was really glad a few years later when Google rolled it out to the public.
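To make concrete what I mean by synonym expansion, here is a toy sketch in Python of query-side expansion. The SYNONYMS table and the search function are invented for illustration; real systems mine their synonym lists from query logs and user behavior, and apply them at a very different scale.

```python
# Toy illustration of query-side synonym expansion.
# The synonym table here is hand-built; production systems learn these
# mappings from data rather than hard-coding them.
SYNONYMS = {
    "bike": {"bicycle"},
    "bicycle": {"bike"},
    "couch": {"sofa"},
}

def expand_query(terms):
    """Return the original query terms plus any known synonyms."""
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

def search(documents, query):
    """Return documents containing any of the expanded query terms."""
    terms = expand_query(query.lower().split())
    return [doc for doc in documents if terms & set(doc.lower().split())]

docs = ["Red bicycle for sale", "Mountain bike trails", "Used couch, good condition"]
print(search(docs, "bike"))  # matches both the "bike" and the "bicycle" listings
```

The point of the example is simply that a search for “bike” can return a document that only ever says “bicycle” – useful when it works, and baffling when the expansion is wrong and can’t be turned off.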
There are a couple of other things that I find useful, like suggested search terms, image matching for attribution and Shazam for saving songs I hear in cafés. Snapchat filters can be fun. Machine translation is often cheaper than a dictionary lookup.
Using these technologies as fun toys or creative inspiration is fine. Using them as unreliable tools that need to be thoroughly checked and corrected is perfectly appropriate. The problem begins when people don’t check the output of their tools, releasing it as completed work. That is how we get the failures documented by sites like Damn You Auto Correct: often humorous, but occasionally harmful.
My appreciation for automated pattern recognition is one of the reasons I’m so disturbed when I see people taking it for granted. Years of immersion in everything that automated recognizers got wrong, garbled or left out entirely have made me concerned when people ignore the possibility of such errors. I feel like an experienced carpenter watching someone nailing together a skyscraper out of random pieces of wood, with no building inspectors in sight.
When managers make the use of pattern recognition or generation tools mandatory, it goes from being potentially harmful to actively destructive. Search boxes that won’t let users turn off synonym expansion, returning wildly inaccurate results to avoid saying “nothing found,” make a mockery of the feature. I am writing this post on Google Docs, which is fine on a desktop computer, but the Android app does not let me turn off spellcheck. Correcting a word without accepting one of the suggested corrections requires an extra tap every time.
Now let’s take the example of speech recognition. I have never found an application of speech recognition technology that personally satisfied me. I suppose if something happened to my hands that made it impossible for me to type I would appreciate it, but even then it would require constant attention to correct its output.
A few years ago I was trying to report a defective street condition to the New York City 311 hotline. The system would not let me talk to a live person until I’d exhausted its speech recognition system, but I was in a noisy subway car. Not only could the recognizer not understand anything I said, but the system was forcing me to disturb my fellow commuters by shouting selections into my phone.
I’ve attended conferences on Zoom with “live captioning” enabled, and at every talk someone commented on major inaccuracies in the captions. For people who can hear the speech it can be kind of amusing, but if I had to depend on those captions to understand the talks I’d be missing so much of the content.
I know some deaf people who regularly insist on automated captions as an equity issue. They are aware that the captions are inaccurate, and see them as better than nothing. I support that position, but in cases where the availability of accurate information is itself an equity issue, like political debates for example, I do not feel that fully automated captions are adequate. Human-written captions or human sign language interpreters are the only acceptable forms.
Humans are, of course, far from perfect, but for anything other than play, anywhere accuracy is required, we cannot depend on fully automated pattern recognition. There should always be a human checking the final output, and there should always be the option to do without the automation entirely. It should never be mandatory. The pattern recognition apps already all around us show that clearly.