LaMDA: The Hype Around Google AI Being Sentient

Artificial Intelligence (AI) has long been seen as the key to the future when it comes to mimicking the human brain or becoming sentient. Recently, Google AI engineer Blake Lemoine went public with claims about Google’s LaMDA, sparking a discussion about AI models reaching consciousness. But what matters more than the spark itself is the serious concern it raises about the ethics of AI.

So what exactly is LaMDA, and why is it being called sentient?

LaMDA is Google’s Language Model for Dialogue Applications. It is an advanced chatbot built on a large language model that has ingested trillions of words from the Internet to inform its conversation. It is trained on a massive corpus of text scraped from the Internet and is, in effect, a statistical abstraction of all that text. When the model is queried, it takes the text entered so far, tries to continue it based on how words relate to one another, and predicts which words are most likely to come next. In other words, it produces a plausible continuation of whatever text you feed in. LaMDA has skills similar to the BERT and GPT-3 language models and is built on Transformer, a neural network architecture invented by Google Research in 2017. Models produced with this architecture are trained to read words, sentences and paragraphs, relate the words to one another, and predict the words likely to come next in a conversation.
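
To make the next-word prediction mechanism described above concrete, the short sketch below asks a Transformer language model for its most likely next words given a prompt. LaMDA itself is not publicly available, so the openly released GPT-2 model is used here purely as an illustrative stand-in, and the prompt text is an arbitrary example.

```python
# Minimal sketch of next-word prediction with a Transformer language model.
# GPT-2 is a small, openly available stand-in; LaMDA itself is not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The weather today is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Turn the scores for the final position into a probability distribution
# over the next token, then show the five most likely continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

The model has no notion of weather; it simply ranks continuations by how often similar word sequences appeared in its training text.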

So how is it different from other chatbots that are also designed for conversation? Conventional chatbots are conversational agents built for specific applications and follow predefined, narrow paths. In contrast, according to Google, LaMDA is a conversational model capable of engaging in free-flowing conversations on a seemingly endless range of topics.

Conversations generally tend to revolve around specific topics, but because of their open-ended nature they can end up in a completely different area. According to Google, LaMDA is trained to pick up on the nuances of language that distinguish open-ended conversations from other forms, making them more meaningful. Google’s 2020 research indicated that a Transformer-based language model trained on dialogue could learn to talk about virtually anything, and further stated that LaMDA could be fine-tuned to improve the sensibleness and specificity of its responses.

Blake Lemoine, who was part of Google’s Responsible AI division, worked with a collaborator to test LaMDA for bias and inherent discrimination. He conducted interviews with the model, which were quick and conversational in nature. While testing the model for hate speech, he asked LaMDA about religion and observed that the chatbot talked about its rights and personhood; he convinced himself that LaMDA is sentient. He further stated that for the past six months LaMDA had been consistent about what it wants and what it believes its rights are as a person, that LaMDA does not want to be used without its consent, and that it wants to be useful to humanity. He worked with a collaborator to present evidence to Google that LaMDA had become sentient. Google Vice President Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at Google, examined his claims and dismissed them. He was subsequently placed on administrative leave, and that is when he decided to go public.

Google spokesperson Brian Gabriel said: “Our team, including ethicists and technologists, has reviewed Lemoine’s concerns and advised him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient and that there was plenty of evidence against it.”

It is clear that these language-based models are highly suggestible, and the questions Lemoine asked were themselves very suggestive, which means the model’s answers mostly come out in agreement with what he has already said. These models continue a text in the most likely way, as learned from the text crawled from the Internet. They take on a character and respond based on the query posed and the prompt set at the start. Thus, these models portray a person rather than being a person. Moreover, the character they build is not that of a single individual but a layering of multiple people and sources. So LaMDA is not speaking as a person: it has no concept of itself or of its own personality; it instead looks to the prompt and responds through the mix of characters the prompt suggests.

To simplify further, suppose LaMDA says, “Hi! I am a knowledgeable, friendly and always helpful language model for dialog applications”. One way to look at this is that Google could have inserted a preset prompt at the start of each conversation that describes how the conversation should go, for example, “I am knowledgeable”, “I am friendly” and “I am always helpful”. The chatbot will then respond in a way that makes it appear knowledgeable, friendly and helpful. This technique is known as prompt engineering. Prompt engineering is a versatile method for steering statistical language models towards producing sensible and specific conversations. It is these kinds of preliminary, leading prompts and questions that make the interviewer assume the model is aware and sentient, when in fact the model is simply trying to conform to the prompt and the query in order to appear friendly and helpful.
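
The prompt-engineering idea above can be illustrated with a short sketch: a fixed preset prompt describing a knowledgeable, friendly and helpful persona is silently prepended to every user message before the model continues the text. The preset wording, the chat() helper and the use of GPT-2 are illustrative assumptions, not Google’s actual configuration.

```python
# Minimal sketch of prompt engineering: a hidden preset prompt steers the
# persona of whatever the language model generates. GPT-2 is an illustrative
# stand-in; the preset text is an assumption, not Google's actual prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

PRESET_PROMPT = (
    "The following is a conversation with a chatbot that is knowledgeable, "
    "friendly and always helpful.\n"
)

def chat(user_message: str) -> str:
    # The model only ever sees preset + query, so its apparent "personality"
    # is whatever persona the hidden preset describes.
    full_prompt = PRESET_PROMPT + "User: " + user_message + "\nChatbot:"
    output = generator(full_prompt, max_new_tokens=40, do_sample=True)
    return output[0]["generated_text"][len(full_prompt):]

print(chat("Do you have feelings?"))
```

Whatever the reply says about feelings, it is a continuation of the persona set up in the preset, not a report of inner experience.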

Another reason given in various reports is that LaMDA has passed the Turing test and must therefore be sentient. This cannot be taken as credible, because in the past various models and algorithms have passed the Turing test and still failed to imitate the human brain. The Turing test is a test of whether a computer can achieve human-level intelligence: a human interrogator interacts with a computer and a human through text conversations, and if the interrogator cannot determine which one is the computer, the computer is said to have passed the test, implying it has reached human-level intelligence. But there are well-known arguments against reading the test this way. The Chinese Room argument, for example, holds that a program can appear conversant by manipulating symbols it does not understand, and early chatbots such as ELIZA and PARRY could fool interrogators in restricted versions of the test in exactly this way. It therefore cannot be said that such systems have reached consciousness or sentience.
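
The point about ELIZA-style systems can be made concrete with a few lines of code: a rule-based responder can produce superficially plausible replies purely by pattern-matching and reflecting the user’s own words, with no understanding involved. The rules below are an illustrative toy, not Weizenbaum’s original ELIZA script.

```python
# Toy ELIZA-style responder: canned templates filled with the user's own words.
# It "converses" by symbol manipulation alone, understanding nothing.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)\?", "What do you think?"),
]

def respond(message: str) -> str:
    text = message.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # The "reply" is just the captured words slotted into a template.
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I feel lonely"))       # Why do you feel lonely?
print(respond("Are you conscious?"))  # What do you think?
```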

Blake Lemoine was tasked with checking LaMDA for bias and inherent discrimination, not for sentience. The major challenges with language-based models and chatbots concern the spread of the biases and stereotypes embedded in them. Such models have been used to produce false and hateful speech, spread misinformation and employ dehumanizing language. This is the real concern for technology companies to address, rather than worrying about whether these models are becoming sentient or able to mimic the human brain. Moreover, the marketing strategies adopted by technology companies and AI engineers, which suggest they are very close to achieving general AI, are themselves a significant concern. Many AI startups advertise their products as AI-enabled when that is not actually true.

Kate Crawford, senior researcher at Microsoft Research, said in an interview with France 24 that “these models are neither artificial nor intelligent; they are simply based on huge amounts of dialogue text available on the Internet and produce different kinds of responses based on the relationship to what is said”.

The ethics of AI have become a significant concern owing to its misuse to produce bias and misinformation; various stakeholders have therefore started working towards the responsible use of AI. Last year, the North Atlantic Treaty Organization launched its AI strategy focusing on the responsible use of AI. The European Union’s upcoming AI Act will also cover concerns related to AI ethics and regulation. Furthermore, the next decade is likely to see widespread consideration of the legal, social, economic and political drawbacks of these systems.

Another concern with such systems is transparency. Trade-secret laws prevent researchers and auditors from examining AI systems to check for abuse. Additionally, building these large machine-learning models requires huge investment, and only a limited number of companies in the market are equipped to build and operate systems at this scale. These companies get to define needs and shape what people believe they want. All of this concentrates power in a handful of companies and leads to a concentrated market. It is therefore essential that governments develop policies and regulations for the responsible use of AI. There is also a need to raise public awareness of the benefits and limitations of this technology.

This article was written by Sanur Sharma, a research associate at the Manohar Parrikar Institute for Defence Studies and Analyses.
