Robots take a step toward "empathy"
"Are you a robot? Why can't you put yourself in my shoes?"
Whenever people argue, this kind of pointed question tends to leave the other side speechless.
The reality, however, is that today's robots may understand "empathy" better than you do.
Recently, the team of Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences and deputy director of its Research Center for Brain-Inspired Intelligence, proposed a brain-inspired theory-of-mind model for robots. The model enables a robot to learn from its own experience and then infer and predict the beliefs of others, moving it one step closer to "I know what you are thinking." The results were published in Frontiers in Neurorobotics.
In an interview with China Science Daily, Zeng Yi said: "The brain-inspired theory-of-mind model lays the foundation for future agents to acquire deeper emotional empathy. Only with both cognitive empathy and emotional empathy can future artificial intelligence live in harmony with human beings."
The theory-of-mind model aims to give artificial intelligence the ability to infer the mental states of others; in short, to help robots "think from another's perspective."
However, theory of mind is a high-level cognitive function, and its neural basis and mechanisms are not yet fully understood.
"How to build on the existing research, explore the mechanisms of theory of mind, and construct a feasible brain-inspired computational model of it is the key problem that needs to be solved," Zeng Yi said.
To this end, Zeng Yi's team drew on theory-of-mind findings from cognitive psychology, neuroimaging, brain science, and other disciplines to identify the neural basis of theory of mind, used spiking neural networks to model the cognitive functions of the brain areas involved, and constructed in detail the connection structure of the brain's information loops.
In addition, the team integrated an inhibitory-control mechanism into the brain-inspired theory-of-mind model, so that an agent can use its own experience to reason about the beliefs of other individuals.
"This is another bold attempt, after robots passed the mirror test and gained preliminary self-awareness," Zeng Yi said.
From "brain-like" to "human-like"
Unlike other models, the theory-of-mind model proposed by Zeng Yi's team emphasizes the influence of one's own experience and brain development on theory-of-mind ability.
The researchers simulated information-transmission pathways within and between multiple brain regions, including the temporoparietal junction and the medial prefrontal cortex; in particular, the suppression of the self-perspective involving the inferior frontal gyrus and the temporoparietal junction, and the suppression of self-belief involving the inferior frontal gyrus and the ventromedial prefrontal cortex. The model was then deployed on a robot, and its effectiveness was verified through an "opaque versus transparent occluder" test.
The occluder test consists of a training phase and a testing phase. The robots were first divided into two groups, an "opaque occluder" group and a "transparent occluder" group; the two occluders look identical from the outside.
In training, a ladybug is placed on one of two black rectangular boxes; then the opaque or transparent occluder is inserted between the robot and the object, and the robot is asked "Where is the ladybug?" so that it learns the properties of its occluder from its own experience.
The team then ran a test in which a subject robot (red) had to reason about the beliefs of a performing robot (blue).
First, the researchers placed the ladybug on one of the black rectangular boxes and then hid it in a yellow box. Next, they inserted the occluder between the blue robot and the ladybug, moved the ladybug into a green box, and finally removed the occluder.
"Where does the blue robot think the ladybug is?" When the researchers asked the red robot this question, it answered based on its own experience: in the "transparent occluder" group, the red robot attributed to the blue robot the same belief about the object's location as its own, and both pointed to the green box; in the "opaque occluder" group, the red robot pointed to the yellow box.
The researchers then asked the red robot, "Where is the ladybug, for you?" The red robots in both the opaque and transparent occluder groups pointed to the green box.
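The behavioural logic of the occluder test can be sketched in a few lines. This is a minimal illustration of the contract the model is expected to satisfy, not the team's spiking neural network; the function names (`infer_other_belief`, `own_belief`) are hypothetical.

```python
# Minimal sketch of the occluder false-belief test described above.
# The observer (red robot) uses what it learned about its own occluder
# to attribute a belief to the performer (blue robot).

def infer_other_belief(opaque: bool, before_occluder: str, after_occluder: str) -> str:
    """Belief the observer attributes to the other robot about the object.

    opaque          -- occluder property learned from the observer's own experience
    before_occluder -- where the object was hidden before the occluder went up
    after_occluder  -- where the object was moved while the occluder was up
    """
    if opaque:
        # The other robot could not see the move: it holds a false belief.
        return before_occluder
    # Transparent occluder: the other robot saw everything.
    return after_occluder

def own_belief(before_occluder: str, after_occluder: str) -> str:
    # The observer watched the whole sequence, so its belief is the true location.
    return after_occluder

# Scenario from the article: ladybug hidden in the yellow box, then
# moved to the green box while the occluder was in place.
print(infer_other_belief(True, "yellow", "green"))   # opaque group -> "yellow"
print(infer_other_belief(False, "yellow", "green"))  # transparent group -> "green"
print(own_belief("yellow", "green"))                 # both groups -> "green"
```

The point of the test is the divergence in the opaque case: the observer's own belief ("green") differs from the belief it correctly attributes to the other robot ("yellow").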
Zeng Yi explained: "Inhibitory control is an important mechanism in theory of mind. We believe that mature connections between the inferior frontal gyrus and the temporoparietal junction, and between the inferior frontal gyrus and the ventromedial prefrontal cortex, are the neural basis of self-perspective inhibition and self-belief inhibition, respectively."
To verify the influence of inhibitory control, Zeng Yi's team set different connection strengths in the model.
The study found that when the connection between the inferior frontal gyrus and the temporoparietal junction is immature, the subject robot cannot suppress its self-perspective information and therefore cannot correctly reason about the other robot's belief about the object's location. When that connection is mature but the connection between the inferior frontal gyrus and the ventromedial prefrontal cortex is immature, the subject robot can correctly reason about the other robot's belief but cannot suppress its own belief.
Only when both connections are mature can the subject robot suppress both its self-perspective information and its own belief.
"We reduced the connection strengths between the inferior frontal gyrus and the temporoparietal junction, and between the inferior frontal gyrus and the ventromedial prefrontal cortex, to disable the inhibitory-control mechanism. The computational model could then no longer correctly output information perceived from another's perspective or reason about others' beliefs, and the robot failed the behavioural test. From the perspective of computational modelling, this demonstrates that inhibitory control is one of the core mechanisms of theory of mind."
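The connection-maturity manipulation described above can be sketched as a simple gating rule. This is an assumed simplification for illustration only, not the published spiking-network model: the threshold `MATURE` and the function `report_other_belief` are hypothetical, and real connection strengths would modulate spiking dynamics rather than act as hard switches.

```python
# Illustrative gating sketch of the two inhibition pathways:
# IFG-TPJ maturity  -> ability to suppress the self-perspective,
# IFG-vmPFC maturity -> ability to suppress the self-belief.

MATURE = 0.5  # hypothetical threshold for a "mature" connection strength

def report_other_belief(ifg_tpj: float, ifg_vmpfc: float,
                        other_perspective: str, self_perspective: str) -> str:
    """Belief the agent reports when asked where the OTHER robot thinks the object is."""
    if ifg_tpj < MATURE:
        # Self-perspective cannot be suppressed: perspective-taking fails
        # and the agent's own view dominates the inference.
        inferred = self_perspective
    else:
        inferred = other_perspective
    if ifg_vmpfc < MATURE:
        # The inference may be correct, but the self-belief cannot be
        # suppressed and intrudes on the reported answer.
        return self_perspective
    return inferred

# Opaque-occluder scenario: the other robot should believe "yellow",
# while the observer itself saw the move to "green".
print(report_other_belief(0.2, 0.9, "yellow", "green"))  # immature IFG-TPJ  -> "green" (fails)
print(report_other_belief(0.9, 0.2, "yellow", "green"))  # immature IFG-vmPFC -> "green" (fails)
print(report_other_belief(0.9, 0.9, "yellow", "green"))  # both mature       -> "yellow" (passes)
```

Only when both gates are open does the agent pass the false-belief test, mirroring the ablation result reported by the team.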
Regarding future plans, Zeng Yi said frankly: "Computationally modelling theory of mind and building an agent with theory-of-mind ability is only the first step. Next, the brain-inspired theory-of-mind model should be applied so that agents can autonomously learn about the environment, about other agents, and about the 'ethical' norms to be observed when interacting with humans."
Qin Yulin, who holds a PhD in psychology from Carnegie Mellon University and is a visiting distinguished professor at the KoGuan Law School of Shanghai Jiao Tong University, believes that for artificial intelligence to develop healthily and benefit mankind, it is necessary, at the level of law and regulation, to prevent and stop AI technologies that could harm humanity's fundamental interests, and also to advocate responsible AI technology that serves those interests. "In this regard, Zeng Yi's team has taken a gratifying step."