Ethics in Robotics: Will Robots Act in Line with Human Values?

Students demonstrate the Mini Cheetah, a quadruped robot, during presentations to celebrate the new MIT Stephen A. Schwarzman College of Computing at the Massachusetts Institute of Technology in Cambridge, Feb. 26, 2019. Speakers at a Feb. 25-27, 2019, Vatican meeting said the field of robotics needs ethical guidelines. (CNS photo/Brian Snyder, Reuters)

The robotics revolution is advancing at a rapid pace, as technology companies and businesses move toward greater automation, particularly autonomous systems such as robots. One estimate suggests that robots and machines powered by Artificial Intelligence will perform half of all workplace tasks by 2025. Automation at this scale is likely to raise new moral and legal questions. Will robots act in line with human values? Who will be accountable when an autonomous system malfunctions or harms a human?

The American writer Isaac Asimov anticipated these questions with his Three Laws of Robotics in the 1940s, a fictional framework for the moral behavior of intelligent robots: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
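Asimov's laws are, in effect, a strictly ordered set of constraints, and it is instructive to sketch how such a priority ordering might look in software. The short Python example below is a purely illustrative toy, not a real robotics API: the `Action` fields, the idea that harm can be predicted as a simple boolean, and the `choose_action` helper are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    """A hypothetical candidate action with precomputed risk predictions."""
    name: str
    injures_human: bool     # predicted to injure a human (First Law)
    ordered_by_human: bool  # commanded by a human operator (Second Law)
    self_destructive: bool  # endangers the robot itself (Third Law)

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Select an action consistent with the Three Laws, applied in strict priority.

    Note: this toy ignores the First Law's harder "through inaction" clause,
    which would require comparing each action against the outcome of doing nothing.
    """
    # First Law as a hard veto: discard anything predicted to injure a human.
    safe = [a for a in candidates if not a.injures_human]
    if not safe:
        return None  # refuse to act rather than violate the First Law
    # Second Law: among safe actions, prefer those ordered by a human.
    pool = [a for a in safe if a.ordered_by_human] or safe
    # Third Law: among what remains, prefer actions that preserve the robot.
    return ([a for a in pool if not a.self_destructive] or pool)[0]

options = [
    Action("shove bystander aside", injures_human=True,
           ordered_by_human=True, self_destructive=False),
    Action("fetch the toolbox", injures_human=False,
           ordered_by_human=True, self_destructive=False),
]
print(choose_action(options).name)  # -> "fetch the toolbox"
```

The key design choice is that the laws act as vetoes and tie-breakers in fixed order, so no lower-priority preference can ever override a higher one.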

More recently, the British Standards Institution published BS 8611, "Robots and robotic devices", a guidance document intended to help robot developers ensure their machines behave ethically. Among its principles: "Robots should not be designed solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behavior."
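The last of those principles, traceability of responsibility, has a natural engineering counterpart: an audit trail that ties every autonomous action back to an accountable human or organization. The Python sketch below is hypothetical; the class and field names are invented for illustration, and a real system would need tamper-evident storage rather than an in-memory list.

```python
import json
import time

class ResponsibilityLog:
    """Append-only audit trail linking each robot action to an accountable human."""

    def __init__(self, robot_id: str, responsible_party: str):
        self.robot_id = robot_id
        self.responsible_party = responsible_party  # accountable human/organization
        self._entries = []

    def record(self, action: str, triggered_by: str) -> None:
        """Log one action together with who is answerable for it."""
        self._entries.append({
            "timestamp": time.time(),
            "robot_id": self.robot_id,
            "responsible_party": self.responsible_party,
            "action": action,
            "triggered_by": triggered_by,  # operator command, scheduled task, etc.
        })

    def export(self) -> str:
        """Serialize the trail so investigators can reconstruct responsibility."""
        return json.dumps(self._entries, indent=2)

log = ResponsibilityLog(robot_id="arm-07", responsible_party="Acme Robotics Ltd.")
log.record(action="move pallet to bay 3", triggered_by="operator J. Smith")
print(log.export())
```

Recording the trigger alongside the responsible party matters because the standard places responsibility with humans, never with the robot itself.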

Researchers are now working to design and deploy artificial systems with morally acceptable behavior built in.

Roboethics – A Code of Conduct for Robotics

As more advanced robots come into view, ethical issues will only multiply. Roboethics is the code of conduct under which robot designers and engineers must build ethics into Artificial Intelligence and robots. Through this code of conduct, roboticists must ensure that autonomous systems can demonstrate ethically acceptable behavior in situations where robots, or other autonomous systems such as self-driving vehicles, interact with humans.
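In practice, one common way to embed such behavior is a safety layer that sits between an autonomous system's planner and its actuators and vetoes proposals that violate a hard constraint. The sketch below is a deliberately simplified, hypothetical illustration for a self-driving context; the function name, parameters, and threshold logic are assumptions, not part of any production driving stack.

```python
def safety_gate(proposed_speed_mps: float,
                pedestrian_detected: bool,
                stopping_distance_m: float,
                gap_to_pedestrian_m: float) -> float:
    """Hypothetical filter between a vehicle's planner and its actuators.

    The planner proposes a speed; this layer overrides it whenever the
    proposal would make a safe stop impossible, hard-coding the rule
    that yielding to a human must always remain possible.
    """
    if pedestrian_detected and gap_to_pedestrian_m <= stopping_distance_m:
        return 0.0  # command a full stop; the human takes priority
    return proposed_speed_mps

# Example: the planner wants 12 m/s, but a pedestrian is within stopping range.
print(safety_gate(12.0, pedestrian_detected=True,
                  stopping_distance_m=25.0, gap_to_pedestrian_m=18.0))  # -> 0.0
```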

As robots become increasingly autonomous and artificial intelligence outperforms humans in a growing number of tasks, the need for robot ethics standards becomes more pressing. It will grow more significant still as we enter an era in which more sophisticated and advanced robots, and perhaps Artificial General Intelligence (AGI), become an integral part of daily life.

Yet even as concern over the need for robot ethics grows, opinions diverge: some argue that robots will help create a better world, while others assert that robots are incapable of being moral agents and should not be designed with embedded moral decision-making capabilities.

Some futurists and technologists, including Elon Musk, Steve Wozniak and Stephen Hawking, have expressed deep concern that, left uncontrolled, robots could even lead to human extinction. Developers and designers of robots must therefore be held morally accountable for what they build and release to the world.
