Is it Time to Hold Artificial Intelligence Systems Responsible for Their Failures?

Unintended biases, triggered by technically correct but context-agnostic data or processes, can lead to flawed outcomes.

In March 2022, five EU Justice and Home Affairs Agencies collaborated with researchers to create the world's first 'AI accountability framework' to guide the deployment of AI tools by security practitioners. The framework was adopted to address major ethical issues surrounding the use of Artificial Intelligence, particularly by government bodies such as law enforcement. The ethics of Artificial Intelligence have been in the spotlight for a long time, not because of the threat of robot weapons, but because of the existential risks the technology poses to people in their everyday lives. When it comes to artificial intelligence, regulation and liability are two sides of the same coin: regulation is about ensuring public safety, while liability is about holding someone responsible when things go wrong.

While it seems entirely sensible to hold Artificial Intelligence responsible for accidents that have occurred or could occur, such as a self-driving car crashing into a pedestrian or an AI-enabled diagnostic service delivering wrong results, deciding who should bear the consequences within such a layered chain of responsibility is far from simple. AI systems are the product of not one but multiple players. Who can you hold responsible when a system goes wrong: the designer, the manufacturer, the programmer, or the data provider? Can you sue a user for not following the instructions when an Artificial Intelligence system fails in use, even if the provider communicated its limitations to the purchaser? These are a few of the tricky questions AI governance must seriously take into consideration.

AI accountability comes in layers, viz. functional performance, the data a system uses, and how the system is put to use. The functional AI systems that most organizations use are built with machine learning and natural language processing to analyse data and make decisions. The way Microsoft's Artificial Intelligence chatbot was corrupted by Twitter trolls is a classic example. Tay was an AI chatbot created by Microsoft to engage in "casual and playful conversation" with Twitter users. Microsoft claimed that, with time, Tay would learn to hold more engaging and natural conversations. Within 24 hours of launch, internet trolls had corrupted the bot with racist, misogynist, and antisemitic tweets, something Microsoft perhaps did not anticipate when it neglected to train the bot to handle "slang".

Secondly, as the saying goes, an AI system is only as good as the data it is fed. When the need for quality, reliable data is overlooked while designing an Artificial Intelligence system, particularly for tasks where decisions must hold up in real-world rather than hypothetical situations, the system fails to deliver the desired results. AI technologies designed for healthcare clearly have too much at stake to ignore this factor. For example, in 2018, Stat News reported that internal IBM documents showed that medical experts working with the company's Watson supercomputer had found "multiple examples of unsafe and incorrect treatment recommendations". It was later found that the problem stemmed from inadequate data: the engineers had trained the system with data from hypothetical cancer patients instead of real patients' records.

Lastly, unintended bias, triggered by technically correct but context-agnostic data or processes, produces incorrect results. Something that worked some years ago may not yield a solution suitable for current circumstances. Many developers underestimate unintended AI bias when training systems, even when the data is technically correct. Amazon's recruitment system, trained on years of applicants' data, ended up favouring male candidates. The reason: the system received data from a period when most applicants and hires were male. In a more serious case, the American justice system used a biased Artificial Intelligence system that gave black defendants higher risk scores, influencing sentencing in several states.
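To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn. The data, feature names, and numbers are invented for demonstration and have nothing to do with Amazon's actual system; the point is only that a model trained on historically skewed decisions will reproduce that skew even for equally qualified candidates.

```python
# Illustrative sketch only: a toy screening model trained on hypothetical,
# historically skewed hiring data. Not any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, size=n)   # 1 = male, 0 = female (leaked into training data)
skill = rng.normal(0, 1, size=n)      # genuinely job-relevant signal

# Historical labels: past hiring favoured men, regardless of skill
hired = ((skill > 0) & (gender == 1)).astype(int)

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill who differ only in gender
print("P(hire | male):  ", model.predict_proba([[1, 1.0]])[0, 1])
print("P(hire | female):", model.predict_proba([[0, 1.0]])[0, 1])
# The model reproduces the historical skew: the female candidate scores far lower,
# even though the job-relevant input is the same.
```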

The eagerness among government agencies to regulate Artificial Intelligence is understandable. However, considering the infancy and narrow penetration of the technology, there is a chance that regulations could scuttle the creative spirit of developers. Responding to the EU's 2017 proposals to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI was in its infancy and that it was too early to regulate the technology. Some scholars have instead suggested developing common norms, including requirements for the testing and transparency of algorithms, that would not stifle existing research and development.
