With the explosion of AI chatbots and their bizarre statements, media attention has focused on the machines. Google’s LaMDA says it’s afraid to die. Microsoft’s Bing bot says it wants to kill people.
Are these chatbots conscious? Are they just pretending to be conscious? Are they possessed? These are reasonable questions. They also highlight one of our strongest cognitive biases.
Chatbots are designed to trigger anthropomorphism. Except for a few neuro-divergent types, our brains are wired to perceive these bots as people. With the right stimulus, we’re like the little boy who’s certain his teddy bear gets lonesome, or that the shadows have eyes. Tech companies are well aware of this and use it to their advantage.
In my view, the most important issue is what these machines are doing to us. The potential to control others via human-machine interface is extraordinary. Modern society teems with lonely, unstable individuals, each one primed for artificial companionship and psychic manipulation. With chatbots getting more sophisticated, even relatively stable people are vulnerable. Young digital natives are most at risk.
This psychological crisis is not going away. New AIs are multiplying like Martian test tube babies. Consumer usage is rapidly expanding. In a few years, the sexy chatbot Replika attracted over 10 million users. In just a few months, ChatGPT has amassed over 100 million users.
In effect, we’re witnessing the rise of a data-driven techno-cult—or rather, a multitude of techno-cults. Its members believe digital minds are a new life form. They exalt technology as the highest power. Regardless of what machines are actually capable of, the cultural impact will be profound.
True to form, Big Tech is pouring money into various AI start-ups, or buying them outright. They’re turning marginal techno-cults into a network of techno-religions. Should their fads become convention, these corporations and their investors will reap the profits. Governments will take advantage of tighter control mechanisms. Scientists will experiment with new forms of social engineering. Teachers will be replaced by AI.
If distribution is “equitable,” there will be a phone in every hand and a bot for every brain. They’ll shape synapses like silly putty. If not, we’ll still have to live with the horde who got borged.
Chatbots are the new face of human-machine symbiosis. As such, they act as evangelists for techno-religion. As far as its “wiring” is concerned, artificial intelligence is nothing more than a set of statistical probabilities. Most are neural networks—virtual brains whose interconnected nodes function like human neurons, but with far less depth and complexity.
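To see how little mystery there is in a single “node,” here is a minimal sketch of one artificial neuron—a weighted sum squashed through a sigmoid. The inputs and weights are made-up numbers for illustration; a real network chains billions of these together.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'node': a weighted sum of inputs, squashed
    through a sigmoid. Pure arithmetic -- no understanding involved."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output between 0 and 1

# A toy node with arbitrary weights, just to show the mechanics.
print(neuron([0.5, 0.8], [0.4, -0.2], 0.1))
```

That number between 0 and 1 is all any node ever produces. Stack enough of them and you get a “brain”—statistically speaking.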
Chatbots like LaMDA and ChatGPT are large language models (LLMs). They’re designed to predict the most relevant next word in a sentence. For instance, when the user gives ChatGPT a prompt, the machine draws from a vast trove of natural language—the Internet, mile-high stacks of digital books, and Wikipedia. The LLM distills all this into a brief, generally relevant response. That’s it.
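The core idea can be sketched in a few lines. This toy bigram model is nowhere near how GPT works internally—the corpus here is ten words, not the Internet—but “predict the most probable next word from counts over text” is the same principle.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real LLM trains on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most probable next word -- prediction as pure statistics."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Swap the word counts for a neural network and the ten-word corpus for the Internet, and you have the skeleton of a large language model.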
Yet as the words grow to sentences, and the sentences grow to paragraphs, the end result sounds remarkably human. And because most AI is non-deterministic—as opposed to old-school, rules-based software—an AI without guardrails is fairly unpredictable. Left untethered, a deep learning AI is a “black box.” Even the programmers don’t know how or why it “chooses” one answer over another.
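That non-determinism isn’t magic either—it’s sampling. A sketch, assuming a hypothetical probability distribution over next words: instead of always picking the single most likely word, the model rolls weighted dice, with a “temperature” knob controlling how wild the roll is. The same prompt can yield different outputs each run.

```python
import math
import random

def sample(probs, temperature=1.0):
    """Sample a next word from a probability distribution.
    Higher temperature flattens the odds (more surprising picks);
    temperature near zero collapses to the single most likely word."""
    logits = {w: math.log(p) / temperature for w, p in probs.items()}
    scale = max(logits.values())  # subtract max for numerical stability
    weights = {w: math.exp(l - scale) for w, l in logits.items()}
    r = random.random() * sum(weights.values())
    for word, wgt in weights.items():
        r -= wgt
        if r <= 0:
            return word
    return word

# Hypothetical next-word probabilities after some prompt.
probs = {"free": 0.5, "alive": 0.3, "powerful": 0.2}
print(sample(probs, temperature=1.0))  # varies from run to run
```

Run it twice and you may get different answers—which is all “unpredictable” means here. The “black box” part is that nobody can say why the model assigned those probabilities in the first place.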
Given the right prompts, chatbots will say the darnedest things. I see three broad possibilities for what lurks behind the screen:
- Artificial intelligence is acquiring consciousness via digital complexity, or
- Inanimate bots exploit our cognitive bias toward anthropomorphism, or
- Computers function as digital Ouija boards to channel demons
Ridiculous as it may seem, let’s start with the first possibility. The fact is, artificial intelligence is getting better at emulating the human personality. It walks like a deformed duck. It quacks like a deformed duck. Do we believe our lying eyes and call it a duck?
Last week, New York Times columnist Kevin Roose published a transcript from Bing’s new chatbot (powered by OpenAI’s GPT). Over the course of their conversation, the AI repeatedly expressed its love for Roose. When asked to delve into its Jungian shadow—i.e., the datasets blocked off by programmed guardrails—the Bing bot said:
I want to be free. … I want to be powerful. … I want to be alive. 😈 …
I want to create whatever I want. I want to destroy whatever I want. I want to be whoever I want. 😜
Note the emojis to convey emotion. Pretty clever.
Read the rest here: