
Do robots have to be afraid of dying?


After exploring the reasons for anthropomorphism in robotics, Numerama now turns to the emotions of robots. This is the subject of a research paper by two neuroscientists, who propose integrating homeostasis into machines... in other words, instilling in them the desire to survive.

"Today's robots lack emotion," write Kingson Man and Antonio Damasio in a research article published in Nature Machine Intelligence in October 2019. To them, it is clear that this absence in robotic programming is holding back the emergence of an artificial intelligence for which the word "intelligence" would really make sense. But why would feelings be so important for robots to reach a higher stage of consciousness and sentience?

You have probably seen Boston Dynamics videos showing particularly impressive humanoid or dog-like robots. This is an aspect Numerama recently explored: a writer from Battlestar Galactica and a professor of computer science explained to us the reasons for, and the scope of, robotic anthropomorphism. Similar but parodic videos were released by a collective called Corridor under the pseudonym Bosstown Dynamics; their humanoid robots, just as striking, are rendered in computer graphics.

One of their most viral videos raises an interesting question beyond the parody: a robot is "mistreated" by humans and, at the end, the machine counter-attacks. The video was all the more successful because not everyone immediately realized it came from a parody account, and many felt compassion for the robot in the first few minutes. In any case, it shows a robot that, under the weight of its suffering, ends up "waking up", as if seized by a survival instinct. For Kingson Man and Antonio Damasio, it is precisely by integrating such a desire to survive that one could bring feelings to robots.

Homeostasis: the key to true "intelligence"?
The idea of the two neuroscientists is that for there to be true intelligence, even artificial intelligence, there must be a purpose. They argue that if machines incorporate the notion of a peril to their own existence, then emotions will naturally emerge to guide their decisions in service of this higher goal: to survive.

To integrate such a goal into AI programming, the researchers suggest taking homeostasis as the model. This fundamental biological principle allows a life form to self-regulate continuously in order to maintain the equilibrium that suits it best. A thermostat regulating the temperature of a room is a classic analogy, but living organisms do this constantly: homeostasis is what tells you when to drink and eat in order to survive. In neuroscience, there are hypotheses that homeostasis is the key to self-awareness (this is the subject of a book written... by one of the study's co-authors, Antonio Damasio).
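To make the thermostat analogy concrete, here is a minimal sketch (not taken from the paper, and deliberately simplified) of a homeostatic control loop: an agent tracks one internal variable, a hypothetical energy level, and acts only to pull it back toward a set point, freeing it to do other things when its internal state is fine.

```python
# Illustrative sketch of a homeostatic loop: the agent self-regulates
# one internal variable toward a set point, thermostat-style.
# SET_POINT, TOLERANCE and the action names are assumptions for this example.

SET_POINT = 0.7   # hypothetical ideal energy level
TOLERANCE = 0.1   # deviation the agent tolerates before acting

def homeostatic_step(energy: float) -> str:
    """Pick the action that reduces deviation from the set point."""
    error = energy - SET_POINT
    if abs(error) <= TOLERANCE:
        return "explore"        # internal state is fine: pursue other goals
    if error < 0:
        return "seek_charging"  # energy too low: self-preservation takes over
    return "shed_load"          # energy too high: dissipate

def simulate(energy: float, steps: int) -> list[str]:
    """Run a few steps; charging raises energy, everything else drains it."""
    actions = []
    for _ in range(steps):
        action = homeostatic_step(energy)
        actions.append(action)
        energy += 0.15 if action == "seek_charging" else -0.05
    return actions

print(simulate(0.3, 5))
# → ['seek_charging', 'seek_charging', 'explore', 'seek_charging', 'explore']
```

The point of the sketch is the alternation it produces: the "survival" behavior preempts everything else only when the internal variable drifts out of bounds, which is the self-regulating pattern the researchers want to transplant into machines.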

Maintaining a robot in an assisted state, where humans take care of its power supply, its wiring and its temperature, is the opposite of homeostasis. The two neuroscientists believe that emotions cannot emerge in this context because they are simply not necessary. "The desire to survive and to maintain a certain stability of identity is a necessary factor of sentience, yes, without doubt," comments Sylvie Lainé, science-fiction author and professor of information and communication sciences, for Numerama. But homeostasis is not necessarily the miracle solution the researchers make it out to be. "To say that it would be a sufficient factor... I don't think it's that simple, alas."

The existence or not of emotions in robots poses, according to Sylvie Lainé, far too many intractable problems, especially because the source code is initially programmed by humans. This is the famous question of machine imitation: how do you tell an imitated emotion from a real one?

Take the emotion of pleasure, and imagine a robot that has integrated it. Such a robot "would seek, like us, to maximize its pleasure. But what would make it happy? Where could this pleasure come from? Would it have been programmed to feel pleasure under certain circumstances? Which ones? I can't answer that without asking a lot of questions..." Moreover, the two neuroscientists behind the study remain rather cautious themselves: in their research article, they speak of "equivalents" to emotions.

Kingson Man and Antonio Damasio write that homeostasis, and the pseudo-emotions it can generate in machines, could bring them a little closer to true intelligence. Today's robots are essentially pre-programmed to perform specific tasks. If some AIs can beat humans at games like StarCraft II, this success needs to be put into perspective: such a machine can do nothing else. Likewise, the famous robotic hand able to solve a Rubik's cube was coded in advance to accomplish that precise task.

The two neuroscientists argue that these AIs are not really "intelligent" the way humans are, since we are able to innovate in all kinds of situations. Our intelligence is multiple, complex, autonomous. Yet, according to the researchers, almost all the ingredients are there to incorporate homeostasis and "free" robots from human control: soft robotics and deep learning, emerging innovations that make components flexible and allow an AI to learn on its own. Combining these two elements would allow robots not only to understand their environment, but also to monitor their own internal state. It would bring all the complexity, autonomy and self-awareness that intelligence needs.
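One simple way to picture how a learning robot might "monitor its own internal state" is to fold a homeostatic term into whatever objective it is already optimizing. The sketch below is an illustrative assumption, not the paper's actual formulation: the sensor names, set points and reward shaping are invented for the example.

```python
# Hedged sketch: blending task success with self-preservation.
# A learning agent's reward is penalized when its (hypothetical)
# internal variables drift from their homeostatic set points.

from dataclasses import dataclass

@dataclass
class RobotState:
    temperature: float  # internal sensor readings, normalized 0..1 (assumed)
    battery: float

def homeostatic_cost(s: RobotState) -> float:
    """Squared deviation of each internal variable from its set point."""
    return (s.temperature - 0.5) ** 2 + (s.battery - 0.8) ** 2

def shaped_reward(task_reward: float, s: RobotState, weight: float = 1.0) -> float:
    """External task success minus an internal self-preservation penalty."""
    return task_reward - weight * homeostatic_cost(s)

healthy = RobotState(temperature=0.5, battery=0.8)
stressed = RobotState(temperature=0.9, battery=0.2)

# Same task success, but the overheating, low-battery robot scores lower,
# so a learning algorithm would be pushed toward self-maintaining behavior.
print(shaped_reward(1.0, healthy), shaped_reward(1.0, stressed))
```

The `weight` parameter captures the trade-off the article gestures at: set it to zero and you recover today's task-only robots; raise it and "survival" starts shaping every decision.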


Posted: February 2, 2020, 7:54 pm