Who Should be Responsible when Robots and AI Cause Accidents

Who is responsible? A latent bug, a flaw introduced by a recent software update, or a human's choices in the moments leading up to the accident?

Who should be held legally responsible when a self-driving vehicle hits a pedestrian? Should the finger be pointed at the car's owner, its manufacturer, or the engineers who wrote the artificial intelligence (AI) software that drives the vehicle?

The question of assigning liability for decisions made by robots or artificial intelligence is an intriguing and significant one, as the use of this technology grows across industry and begins to affect our everyday lives more directly.

Indeed, as applications of artificial intelligence and machine learning mature, they are likely to reshape the nature of work, organizations, businesses and society. But although AI has the power to disrupt and drive greater efficiencies, it has its obstacles, and the question of who is liable when something goes wrong is one of them.

Road traffic incidents are among the leading causes of accidental death, and this is one of the main arguments for adopting self-driving vehicles more quickly. Yet the struggle to allocate liability for accidents involving cutting-edge technologies such as AI has dominated the conversation worldwide, even as trials progress in many places, for example in Singapore, South Korea and Europe.

During the third annual edition of the TechLaw.Fest forum held recently, experts said that Singapore's laws are currently unable to adequately assign liability for losses or injuries suffered in accidents involving AI or robotics technology.

"The extraordinary capacity of autonomous robots and AI systems to operate independently, with no human involvement, muddies the waters of liability," said Charles Lim, co-chair of the Singapore Academy of Law's Subcommittee on Robotics and Artificial Intelligence.

The 11-member Robotics and Artificial Intelligence Subcommittee published its report a month ago on how civil liability might be established in such cases. Singapore, despite ranking first in the 2019 International Development Research Center's Government Artificial Intelligence Readiness Index, still has no legal framework in place for governing liability stemming from robots or AI, including civil liability for autonomous vehicles (AVs).

"There are different factors at play, for example the AI system's underlying software code, the data it was trained on, and the external environment the system is deployed in," said Lim, a lawyer who spoke in his personal capacity at the webinar.

It is often unclear whether an accident stemmed from a latent bug, from a flaw introduced by a recent software update, or from a human's choices in the moments leading up to the crash. Tracing the exact sequence of events to establish liability can be complex and costly, Lim said.

Fellow panelist and Robotics and AI Subcommittee member Josh Lee believes that a first step for lawmakers in Singapore could be to decide what to call the person in the driver's seat.

"We suggest that the individual be known as a 'user-in-charge', since the person may not be performing the task of driving, yet retains the ability to take over when necessary," he said. "This can have applications beyond driverless vehicles in many situations today, for example medical diagnosis."

The expression "user-in-charge" was first mooted by the United Kingdom Law Commission in 2018, which proposed placing such users of fully automated vehicles under a separate regulatory regime.

Robotics and AI Subcommittee member and lawyer Beverly Lim said during the webinar that one person who should not be held liable is the human in the driver's seat, since he would have bought the vehicle on the understanding that the AI would be the more reliable driver.

Analytics Insight
www.analyticsinsight.net