Human-Like Robots Are Often Overestimated as Capable of Thought: Research

The study found that even a brief experience with a humanoid robot can induce a "like me" impression in humans

Humans are not done foraying into new realms of artificial intelligence, with ever more powerful technology at their disposal. With voice assistants like Siri and Alexa that have an answer to every weird and random question, and self-driving cars that need no human at the wheel, it may seem we have reached the limits of technology. Or have we? The recent debate over whether a chatbot like LaMDA could gain sentience is further proof that this question will be eternal. Adding to the argument, a study published by the American Psychological Association finds that exposure to robots displaying human-like traits leads people to believe the robots are capable of thinking and acting on their own beliefs and desires rather than on their programming. What does this research tell us about the extent to which a robot can influence human lives?

Agnieszka Wykowska, Ph.D., a principal investigator at the Italian Institute of Technology and lead author of the study, said the research was designed to test, across a series of three experiments, whether humans would adopt an intentional stance toward a robot exhibiting human-like behavior compared with one exhibiting machine-like behavior. The results showed that even a short experience with a humanoid robot can induce a "like me" impression. The experiments involved close human-robot interaction through socializing activities: 119 participants took part, and the researchers measured how their perception of the humanoid robot iCub changed before and after the interaction.

In the first two experiments, the researchers remotely controlled iCub so that it acted affable and friendly and asked participants random questions. Equipped with cameras in its eyes, the robot could make eye contact with participants and recognize their faces. Together they watched videos, during which the robot reacted with natural-sounding vocalizations and facial expressions of different emotions.

In the third experiment, participants went through similar activities with a decidedly machine-like iCub. With its cameras deactivated, the robot could not make eye contact or recognize faces; it responded only with beeps and aimless body movements, and instead of greeting people it played prerecorded sentences about its calibration process. Participants who interacted with the human-like robot rated its actions as more intentional than those who were exposed to the machine-like robot.

Wykowska concluded from these experiments that simply interacting with a robot is not enough for humans to attribute intentions to it; it is human-like behavior that leads people to regard a robot as an intentional agent. She was hopeful that her findings would inform the design of future robots, particularly in determining the contexts in which social bonding and attributed intentionality are necessary. She says, "Social bonding with robots might be beneficial in some contexts, like with socially assistive robots. For example, in elderly care, social bonding with robots might induce a higher degree of compliance with respect to following recommendations regarding taking medication." In earlier research published in 2021, titled "Robots as Mirrors of the Human Mind" and underlining the role robots can play as tools for understanding human cognition, she writes, "Robots can inform us about our cognitive mechanisms or, in the role of embodied computational models, can generate new theoretical predictions regarding the workings of the human brain." This suggests that robots, designed within the right contextual framework, can open the way to finding the optimal degree of thinking ability a robot should appear to have.
