The Need for Strong Governance in the Field of Artificial Intelligence

April 17, 2018

The development, application, and capabilities of AI-based systems are evolving rapidly, leaving largely unanswered a broad range of important short- and long-term questions about the social impact, governance, and ethical implications of these technologies and practices.

Many sectors of society have rapidly adopted digital technologies and big data, resulting in the quiet and often seamless integration of AI, autonomous systems, and algorithmic decision-making into billions of human lives. AI and algorithmic systems already guide a vast array of decisions in both the private and public sectors.

For example, private global platforms, such as Google and Facebook, use AI-based filtering algorithms to control access to information.

This data can be used for manipulation, and the algorithms it feeds can encode biases, enable social discrimination, and raise questions about property rights. Humans often cannot understand, explain, or predict AI’s inner workings, which is a cause for rising concern in situations where AI is trusted to make important decisions that affect our lives. This calls for greater transparency and accountability in artificial intelligence, and for the governance of AI.

 

Threats AI Applications Are Posing

Job Threats– Automation has been eating away at manufacturing jobs for decades. AI has accelerated this process dramatically and extended it to domains previously thought to be the exclusive preserve of human intelligence.

From driving trucks to writing news stories and screening job candidates, AI algorithms are threatening middle-class jobs like never before. They may encroach on other professions as well, including those of doctors, lawyers, writers, and painters.

Responsibility– Who is to blame when software or hardware malfunctions? Before AI, it was relatively easy to determine whether an incident was the result of the actions of a user, a developer, or a manufacturer.

But in the era of AI-driven technologies, the lines are blurred. This becomes an issue when AI algorithms make critical decisions, such as when a self-driving car has to choose between the life of a passenger and that of a pedestrian. There are other conceivable scenarios where determining culpability and accountability will be difficult, such as when an AI-driven drug-infusion system or a robotic surgery machine harms a patient.

Data Privacy– In the hunt for ever more data, companies may trek into uncharted territory and cross privacy boundaries. Recently, Facebook was found to have allowed users’ personal data to be harvested over an extended period and used in ways that violated their privacy.

Such was the case of a retail store that found out about a teenage girl’s secret pregnancy. Another example is the UK National Health Service’s patient-data-sharing program with Google’s DeepMind, a move supposedly aimed at improving disease prediction.

There’s also the issue of bad actors, both governmental and non-governmental, that might put AI and machine learning to ill use. A highly effective Russian facial-recognition app proved to be a potential tool for oppressive regimes seeking to identify and crack down on dissidents and protesters.

Biased Decisions / Social Discrimination– As has been demonstrated repeatedly over the past several years, AI can be just as biased as human beings, or even more so. Black-box machine-learning models are already having a major impact on people’s lives.

The problem is that if the information trainers feed to these algorithms is unbalanced, the system will eventually adopt the covert and overt biases those data sets contain. And at present, the AI industry suffers from a diversity problem that some label the “white guy problem”: the field is largely dominated by white men.

This is the reason why an AI-judged beauty contest turned out to award mostly white candidates, a name-ranking algorithm ended up favoring white-sounding names, and advertising algorithms preferred to show high-paying job ads to male visitors.

A system called COMPAS, made by a company called Northpointe, offers to predict defendants’ likelihood of reoffending and is used by some judges to determine whether an inmate is granted parole. The inner workings of COMPAS are kept secret, but an investigation found evidence that the model may be biased against minorities.
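Even when a model’s internals are secret, the kind of disparity the investigation reported can be surfaced by an external audit: compare error rates across demographic groups. The sketch below is a minimal illustration in Python, not the actual methodology used; the records, group labels, and numbers are entirely made up.

```python
# Toy audit of a risk-scoring tool: compare false-positive rates across groups.
# Each record: (group, predicted_high_risk, actually_reoffended) -- invented data.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged as high risk."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return 0.0
    return sum(1 for r in negatives if r[1]) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
```

A large gap between the groups’ false-positive rates is exactly the symptom the investigation described: one group being wrongly labeled high-risk far more often, even without access to the model itself.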

Technological Arms Race– Innovations in weaponized artificial intelligence have already taken many forms. The technology is used in the complex guidance systems that allow cruise missiles and drones to find targets hundreds of miles away, as well as in the systems deployed to detect and counter them.

Algorithms good at searching holiday photos can be repurposed to scour spy-satellite imagery, for example, while the control software needed for an autonomous minivan is much like that required for a driverless tank. Many recent advances in developing and deploying artificial intelligence emerged from research at companies such as Google.

Google has long been associated with the corporate motto “Don’t be evil”. But Google recently confirmed that it is providing the US military with artificial intelligence technology that interprets video imagery as part of Project Maven. According to experts, the technology could be used to better pinpoint bombing targets, and it may pave the way for fully autonomous weapons systems: robotic killing machines.

To what extent can AI systems be designed and operated to reflect human values such as fairness, accountability, and transparency, and to avoid inequalities and biases? As AI-based systems are now involved in making decisions, for instance in the case of autonomous weapons, how much human control is necessary? Who bears responsibility for AI-based outputs?

To ensure transparency, accountability, and explainability for the AI ecosystem, our governments, civil society, the private sector, and academia must be at the table to discuss governance mechanisms that minimize the risks and possible downsides of AI and autonomous systems while harnessing the full potential of this technology. The process of designing a governance ecosystem for AI, autonomous systems, and algorithms is certainly complex but not impossible.

 

Holding the Reins on AI

When the boundaries of responsibility are blurred between the user, the developer, and the data trainer, every party involved can lay the blame on someone else. Therefore, new regulations must be put in place to anticipate and address the legal issues that will surround AI in the near future.

This can be achieved by promoting transparency and openness in algorithmic datasets. Shared data repositories that are not owned by any single entity and can be vetted and audited by independent bodies can help move toward this goal.
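One concrete thing an independent auditor of such a shared repository could do is check whether the outcomes recorded in a training set are balanced across groups before the data is used to train a model. A minimal sketch, assuming an invented dataset and field names chosen purely for illustration:

```python
from collections import Counter

# Illustrative training set for a hiring model; every record here is made up.
dataset = [
    {"group": "A", "label": "hired"}, {"group": "A", "label": "rejected"},
    {"group": "B", "label": "rejected"}, {"group": "B", "label": "rejected"},
    {"group": "A", "label": "hired"},
]

def label_shares(rows, group):
    """Distribution of outcome labels within one group of the dataset."""
    counts = Counter(r["label"] for r in rows if r["group"] == group)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

print(label_shares(dataset, "A"))
print(label_shares(dataset, "B"))
```

If one group’s records are overwhelmingly negative, a model trained on this data will likely reproduce that skew, which is precisely why independent vetting of shared data sets matters.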

Companies developing and using AI technology should regulate their information collection and sharing practices and take necessary steps to protect user data. The use and availability of the technology must also be revised and regulated in a way to prevent or minimize ill use.

The journey of AI has just begun; we are still on the ground floor of a very tall building. A global AI governance system will need to be flexible enough to accommodate cultural differences and bridge gaps across different national legal systems.
