How AI Creates Itself
By Marcel Kückelhaus
October 2, 2024
If I told you I talked to my computer yesterday, would you think I had lost my marbles? Maybe not. Firstly, who hasn't yelled at their devices when something didn't work as it should, or tried to coax them into lasting just a tiny bit longer? ("Come on, my sweet little car. Just a couple of meters further…") Secondly, over the last two years Large Language Models (LLMs) in the form of ChatGPT, Gemini, or Claude have found their way into our lives, passing the Turing test and leaving us marvelling at these new technologies.
Thinking of our experience with Siri, Alexa, Cortana, or the typical chatbot on your town's administration website, would you have thought that all of a sudden you could have a sensible conversation with a machine? I mean, until recently, the most intelligent answer we ever got from Alexa et al. was a "Sorry, I didn't get that. Can you please repeat your request?"
The perfect illusion
Most people know by now that LLMs are algorithms that generate text based on probability. Maybe a bit like the old T9, if you're old enough to remember phones with keys to press. Based on a large data set, the algorithm predicts which word is most likely to follow the one before. Hence, Large Language Models are sometimes referred to as "stochastic parrots". The model knows nothing and understands nothing, but its calculations create text that makes us believe the opposite.
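To make that idea concrete, here is a minimal sketch of next-word prediction in Python. It uses a toy corpus and simple word-pair counts, which is only an illustration: real LLMs work on subword tokens, learn their probabilities with neural networks trained on vast data sets, and condition on far more than the single previous word.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the web-scale training data real models use.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a simple bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    if not followers:
        # Fallback for words that never had a follower in the toy corpus.
        return random.choice(corpus)
    words = list(followers)
    weights = list(followers.values())
    return random.choices(words, weights=weights)[0]

# Generate text one prediction at a time.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

The output looks vaguely like language, yet the program "knows" nothing about cats or mats; it only reproduces statistical patterns, which is the parrot part of "stochastic parrot".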
When you talk to an LLM, do you not get the impression that you are talking to a person? It seems that Google has almost perfected the illusion. After initiating a conversation with Google's chatbot Gemini, it doesn't take long before the LLM generates an answer like: "We have to take into consideration the negative implications of AI systems." Who is this "we" it's referring to? If it means "we humans", then why is Gemini part of this group? And why does Gemini not include itself when it mentions AI systems? When we as users ask this question, Gemini replies: "I am aware that I'm not a person." How can something refer to itself as "I", have (self-)awareness, but not be human?
The use of personal pronouns creates the illusion that on the other end of the line is not code, but a person: someone who is aware of themselves and others. Thinking about the machine as a living being might give you the creeps, but not to worry, Gemini has anticipated that when it says: "We have to address those fears." Great, now it has even transformed into my therapist.
When AI has character
But it’s not only the pronouns that define the chatbot, it’s also its character traits. If I told you this in person, you might tell me: “But how could a chatbot have character traits? It does not have a character. It is a machine!” And I’d tell you: “You are absolutely right.” However, Gemini tells us about its own character traits: “I’m more efficient, I work faster, my judgement is not clouded by subjective feelings, the data I consist of represents human culture.” Conversely, this could mean that humans are lame ducks with too many feelings, and what defines us fits in a jar.
Suddenly, it's no longer the expert who defines what a large language model is, but the LLM that does it of its own accord. This does not, of course, mean that a chatbot has character traits, let alone a consciousness. The answers an LLM generates, however, have an impact on how we perceive it: as a machine, a person, or even a trustworthy companion.
The next time you interact with a chatbot, what linguistic cues can you identify that the chatbot uses to seem more human? Some of the above or maybe something completely different?