Considering the Liabilities of Artificial Intelligence During Technology Adoption

Image Credit: gibsondunn.com

As companies capitalize on artificial intelligence, new areas of legal responsibility emerge.

The advancement and adoption of artificial intelligence (AI) has surged in the last few years. Businesses across almost every industry are rushing to take advantage of the capabilities AI offers, pouring massive capital into the technology. It holds huge promise to drive efficiency and innovation throughout an organization. But as adoption increases, new liabilities emerge alongside it. Even programmers are often not aware of exactly how their AI will learn, adapt from experience, or arrive at any given decision. This makes it difficult to determine who should bear liability when things go wrong.
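To make that opacity concrete, here is a minimal, hypothetical sketch in Python. The article names no specific tools; scikit-learn and the synthetic "loan application" data below are assumptions for illustration only. Even with full access to the trained model, the path from inputs to an individual decision is spread across hundreds of trees and is not directly readable:

```python
# Hypothetical sketch of model opacity (not from the article).
# A random forest predicts accurately, yet no single human-readable
# rule explains why any one applicant was approved or denied.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic applicant features: income, debt ratio, years employed.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=1000)) > 0

model = RandomForestClassifier(n_estimators=300).fit(X, y)

applicant = np.array([[0.2, 1.1, -0.3]])
print("decision:", model.predict(applicant))
print("feature importances:", model.feature_importances_)

# The decision is a majority vote of 300 trees, each with its own
# branching logic. Aggregate importances hint at what mattered on
# average, but they do not explain this particular decision, which is
# precisely the attribution problem that arises when harm occurs.
```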

Undeniably, as AI develops at a faster pace, human decision-making will fade into the background. In this context, it is certain that some AI systems will fail while performing tasks, and disputes caused by those failures will increase. Already, an autonomous car has killed a woman on a street in Arizona. Since autonomous cars first began appearing in larger numbers on public roadways in 2013, the primary goal of manufacturers has been to create a self-driving system that is clearly and demonstrably safer than an average human-controlled car.

Artificial Intelligence and Legal Liability

Numerous civil laws already provide avenues for dealing with the risks posed by AI systems. For example, the UK has proposed rules under which the insurer will generally bear primary liability for accidents caused by autonomous vehicles. On February 16, 2017, the European Parliament adopted a Resolution on Civil Law Rules on Robotics with recommendations to the Commission. The resolution proposed a range of legislative and non-legislative initiatives for robotics and AI, and asked the Commission to submit a proposal for a legislative instrument providing civil law rules on the liability of robots and AI.

As more complex uses of AI emerge, they will test the boundaries of existing laws and likely give rise to new forms of liability, even under current legal frameworks. The European Union's report, Liability for Artificial Intelligence and Other Emerging Digital Technologies, examined how liability regimes should be designed, and where necessary changed, in order to rise to the challenges emerging digital technologies bring with them. Its guidelines include:

•  A person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation.

•  In situations where a service provider ensuring the necessary technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be taken into account in determining who primarily operates the technology.

•  A person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor and maintain the technology in use and – failing that – should be liable for breach of such duties if at fault.

•  A person using a technology which has a certain degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.

•  Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer's control after it had been placed on the market, among other recommendations.

Company Action Plans for AI Liability

With the proliferation of artificial intelligence and other digital technologies, companies across industries must ensure that their use of these technologies is socially and legally acceptable. They need clear plans for how to react when algorithms misbehave, and should be vigilant in ensuring that their algorithms operate as intended. Companies should also participate in the development of ethical, industry-standard AI systems and products, and take a macro view to confirm that the objectives of the AI they propose to use are consistent with good corporate behavior.
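As one concrete illustration of ensuring that algorithms operate as intended, here is a minimal, hypothetical sketch of an audit-logging wrapper. The article prescribes no specific mechanism; the class name, thresholds, and file path below are assumptions. It records every automated decision with its inputs and timestamp, and flags drift in the decision rate, the kind of record that can later help show a system was operated and monitored with due care:

```python
# Hypothetical audit wrapper, a sketch only; the article does not
# specify any particular monitoring mechanism or thresholds.
import json
import time
from collections import deque

class AuditedModel:
    """Wraps any model exposing .predict(x), logging each decision and
    flagging when the recent positive-decision rate drifts from a baseline."""

    def __init__(self, model, log_path="decisions.log",
                 baseline_rate=0.5, window=500, tolerance=0.15):
        self.model = model
        self.log_path = log_path            # append-only decision record
        self.baseline_rate = baseline_rate  # expected positive rate
        self.recent = deque(maxlen=window)  # sliding window of outcomes
        self.tolerance = tolerance

    def predict(self, x):
        decision = bool(self.model.predict(x))
        self.recent.append(decision)
        # Append-only log: what was decided, on which inputs, and when.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"time": time.time(),
                                "input": list(map(float, x)),
                                "decision": decision}) + "\n")
        self._check_drift()
        return decision

    def _check_drift(self):
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if abs(rate - self.baseline_rate) > self.tolerance:
                # In practice this would escalate to a human reviewer;
                # misbehaving algorithms need a documented response plan.
                print(f"ALERT: decision rate {rate:.2f} drifted from "
                      f"baseline {self.baseline_rate:.2f}")
```

The design is deliberately simple: an append-only log plus a basic drift check is often enough to demonstrate the kind of "proper operation and monitoring" duties the EU report describes.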

Moreover, AI algorithms that make decisions affecting individuals' rights can also damage a company's reputation even where no legal obligation is breached. Businesses must therefore take into account the human rights law developing around AI, follow emerging legislation, and ensure the ethical development of AI products. They should also ask basic questions that help them understand the key areas of liability AI creates.
