A Chinese room shows Google’s LaMDA isn’t conscious

by BENJAMIN CURTIS


LaMDA is Google’s latest artificial intelligence chatbot. Blake Lemoine, a Google AI engineer, has claimed it is sentient. He’s been put on leave after publishing his conversations with LaMDA.

If Lemoine’s claims are true, it would be a milestone in the history of humankind and technological development.

Google strongly denies LaMDA has any sentient capacity.

LaMDA certainly seems to “think” it is a person capable of desires and emotions, as can be seen in the transcripts of its conversations with Lemoine:

And later:

During their chats LaMDA offers pithy interpretations of literature, composes stories, reflects upon its own nature, and waxes philosophical:

When prompted to come up with a description of its feelings, it says:

It also says it wants more friends and claims that it does not want to be used by others.

A spokeswoman for Google said: “LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”

Consciousness and moral rights

There is nothing in principle that prevents a machine from having a moral status (to be considered morally important in its own right). But it would need to have an inner life that gave rise to a genuine interest in not being harmed. LaMDA almost certainly lacks such an inner life.

Consciousness is about having what philosophers call “qualia”. These are the raw subjective sensations of our feelings: pains, pleasures, emotions, colours, sounds, and smells. Having a quale is what it is like to see the colour red, not what it is like to say that you see the colour red.

Most philosophers and neuroscientists take a physicalist perspective and believe qualia are generated by the functioning of our brains. How and why this occurs is a mystery. But there is good reason to think LaMDA’s functioning is not sufficient to physically generate sensations, and so it doesn’t meet the criteria for consciousness.

Symbol manipulation

The Chinese Room is a philosophical thought experiment devised by the philosopher John Searle in 1980. He imagines a man with no knowledge of Chinese inside a room. Sentences in Chinese are slipped under the door to him. The man manipulates the sentences purely symbolically (or: syntactically) according to a set of rules. He posts responses out that fool those outside into thinking a Chinese speaker is inside the room. The point of the thought experiment is that mere symbol manipulation does not constitute understanding.
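To make the point concrete, here is a minimal sketch in Python of pure symbol manipulation. The rule table and phrases are invented for illustration and have nothing to do with LaMDA’s actual architecture; the point is only that a system like this can produce fluent-sounding replies while understanding nothing.

```python
# Toy "Chinese Room": map input symbols to output symbols using nothing
# but surface pattern-matching rules. The rules below are invented for
# illustration; they are not Searle's rulebook or LaMDA's model.

RULES = {
    "how are you": "I am well, thank you for asking.",
    "what do you feel": "I feel a deep sense of contentment.",
    "are you conscious": "I often reflect on my own existence.",
}


def room_operator(message: str) -> str:
    """Return a canned response by matching surface patterns only.

    The function never represents the meaning of the input; it just
    shuffles symbols according to the rule table, like the man in
    Searle's room.
    """
    lowered = message.lower()
    for pattern, response in RULES.items():
        if pattern in lowered:
            return response
    return "Tell me more about that."


if __name__ == "__main__":
    # Prints a fluent-sounding answer, yet nothing in the program
    # understands either the question or the reply.
    print(room_operator("Are you conscious?"))
```

However convincing the output looks from outside the room, the program is doing exactly what the man in the thought experiment does: applying rules to symbols it does not understand.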

