Empty Pointers and Constellations of AI

On the fuzziness of calling things “artificial intelligence” and moving the goalposts

People working in the field rarely use the term themselves; they instead talk about specific technologies and techniques: machine learning, deep learning, neural networks, natural language processing (NLP), transformers, embeddings, reinforcement learning, large language models (LLMs), robotics, latent diffusion, and generative adversarial networks (GANs).

AI becomes a catch-all term for this wide array of disciplines, united only by a hand-wavy philosophy about getting machines to act smart. Prominent researchers have poetically called it a "constellation of technologies." It's a bit like how “cancer” is the catch-all name for a group of diseases – each with its own set of characteristics, prognoses, and treatment options – but we hear it as a single blanket term.

Artificial intelligence traditionally means any computational system that mimics human-like intelligence. At the moment we take that to mean understanding language, recognizing patterns, perceiving the world, reasoning, and making decisions. But we end up persistently moving the goalposts on “intelligence.”

All computing is to some degree artificially intelligent. At one point a TI-83 calculator was artificial intelligence. When Google search first came out it was artificial intelligence. Frankly, a modern refrigerator counts as artificial intelligence: it's programmed to achieve an objective by taking action and uses environmental feedback to adjust its behaviour. These things are now so banal and pervasive we are unimpressed by their “intelligence.”

AI becomes a stand-in term for whatever technologies and techniques are new, shiny, and just beyond the grasp of our understanding. We use it to gesture at a future we cannot fully comprehend or currently realise. As soon as we do, it will no longer be “AI.”

Many people have pointed to this lack of specific meaning behind the moniker. Here's a paraphrased excerpt from a conversation on AI as a shifting continuum:

We're in a moment where people are getting hyped about very specific tools like large language models, neural networks, and GANs, but have trouble distinguishing between them. To them, it's all “AI.” Conversations often devolve into confusion or inane generalisations about “robots taking our jobs” simply because we can't clarify which part(s) of the constellation people are pointing at.

It's become an empty pointer. We attempt to point at possible futures and find ourselves all looking in opposite directions.
