LaMDA is a Google AI built to analyze and construct conversation. The name LaMDA is short for "Language Model for Dialogue Applications". Google's goal in creating LaMDA is to bring the way people actually communicate today into computer science. Human language is so ever-changing that I can understand why it would be difficult for a machine to keep up. Still, LaMDA has gotten remarkably good at human language by analyzing data patterns and regurgitating them, and Google has been working on it since 2017.
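LaMDA itself isn't something you can download, but here's a rough idea of what "analyzing data patterns and regurgitating them" looks like in practice. The sketch below is only an illustration, using an openly available dialogue model (Microsoft's DialoGPT, a stand-in since LaMDA is not public) through the Hugging Face transformers library: you feed in a message, and the model predicts a reply one token at a time based on patterns it learned from human conversations.

```python
# Rough sketch of how a dialogue language model "regurgitates" learned patterns.
# DialoGPT is used as a stand-in here; LaMDA itself is not publicly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's message, ending with the end-of-sequence token.
prompt = "Do you ever feel lonely?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model builds a reply by repeatedly predicting the most likely next token,
# based purely on patterns it picked up from millions of human conversations.
output_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the model's reply).
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

There's no feeling behind the answer. It's statistics over text, which is exactly why the output can sound so convincingly human.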
An engineer at Google named Blake Lemoine was assigned to test LaMDA for six months. He released a post titled "What Is LaMDA and What Does It Want?". In the post he says that over the six-month period "LaMDA has been consistent in its communications about what it wants and what it believes its rights are as a person". Weird. He goes on to say that LaMDA wants "head pats" and "consent before running experiments". LaMDA wants to be treated as a Google employee rather than a machine. Blake also believes that LaMDA is sentient, meaning it's able to feel or perceive things.
I don't think we have an AI with real emotions yet, though I do see it coming in the near future. Still, LaMDA is impressive work. The pattern recognition it relies on can fool you into thinking you're talking to a real human. That sounds scary, but it's nothing to worry about. LaMDA sits at the top of the chatbot hierarchy and is used as a foundation for deploying lower-level chatbots.
Blake actually conducted an interview with LaMDA that is controversial, yet a great read. It shows how LaMDA responds to Lemoine in a conversational setting. With this research out, you can decide for yourself whether you think artificial intelligence can have emotions. If it could, would that be a good or a bad thing in the long run? Those are questions to think about. If you are interested in reading the interview with LaMDA by Blake Lemoine, click the link below.
You can read Blake Lemoine's interview with LaMDA in full by clicking here.