Robot “consciousness” was once taboo. Now it is having the last word.

But is it really conscious?
The risk of committing to any one theory of consciousness is that doing so opens the door to criticism. Sure, self-awareness seems important, but aren’t there other key features of consciousness? Can we call something conscious if it doesn’t feel?
Dr. Chella believes that consciousness cannot exist without language, and he has been developing robots that can form internal monologues, reason about themselves and reflect on their surroundings. One of his robots was recently able to recognize itself in a mirror, passing perhaps the most famous test of self-awareness in animals.
Joshua Bongard, a roboticist at the University of Vermont and a former member of the Creative Machines Lab, argues that consciousness does not consist only of cognitive and mental activity but also has an intrinsically physical aspect. He developed creatures called xenobots, made entirely of frog cells assembled so that programmers could control them like machines. According to Dr. Bongard, it’s not just humans and animals that have evolved to adapt to their surroundings and communicate with one another; our tissues have evolved to support these functions, and our cells have evolved to support our tissues. “We are smart machines, made of smart machines, made of smart machines, all the way down,” he said.
This summer, around the same time Dr. Lipson and Dr. Chen unveiled their latest robot, a Google engineer claimed that the company’s newly improved chatbot, LaMDA, was sentient and should be treated like a small child. The claim was met with skepticism, mainly because, as Dr. Lipson pointed out, chatbots are processing “code written to accomplish tasks.” There is no underlying structure of consciousness, other researchers say, only the illusion of consciousness. “The robot has no self-awareness,” Dr. Lipson added. “It’s a bit like cheating.”
But with so much disagreement, who’s to say what counts as cheating?
🦾🦾🦾
Eric Schwitzgebel, a philosophy professor at the University of California, Riverside, who has written about artificial consciousness, says the problem with this general uncertainty is that, at the rate things are going, humans may develop a robot that many people believe is conscious before we can agree on a standard for consciousness. When that happens, should the robot be granted rights? Freedom? Should it be programmed to feel happiness when it serves us? Should it be allowed to speak for itself? To vote?
(Questions like these have inspired an entire subgenre of science fiction, in the work of writers like Isaac Asimov and Kazuo Ishiguro and in TV shows like “Westworld” and “Black Mirror.”)