CIA Collaborates with Data Scientists to Combat Bias for Ethical Deployment of AI

June 7, 2019


Following the accelerating adoption of AI and ML, the Central Intelligence Agency (CIA) has become keen to put these technologies to work on its primary mission. Alongside the much-publicized developments in the field, the agency is also examining the biases and ethical challenges that accompany their emergence.

The agency is running around 100 AI initiatives, and ethical deployment appears to be the most complicated issue it has to address.

The agency's privacy team collaborates with its data scientists on several projects involving statistics, coding, and graphical representation. The data scientists also analyze large datasets to extract information the CIA could not surface on its own, and they work on improving ML so that it can draw insights from data the way a human analyst would.

Agency officers believe the current generation sees AI as a distinctly versatile technology that can be applied in many places. They are also wary of the downsides of these innovations, specifically bias and explainability.

Benjamin Huebner, the CIA's privacy and civil liberties officer, said, "One of the interesting things about machine learning, which is an aspect of our division of intelligence, is experts found in many cases the analytics that have the most accurate results, also have the least explainability—the least able to explain how the algorithm actually got to the answer it did. The algorithm that's pushing that data out is a black box and that's a problem if you are the CIA."

Additionally, the CIA cannot settle for accuracy alone; it also has to be able to demonstrate how a result was reached. If an analytic is not explainable, it is considered not 'decision-ready'.
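To make the distinction concrete, here is a minimal, hypothetical sketch (not anything the CIA has described using) of an "explainable" analytic: a simple linear scorer whose per-feature contributions can be inspected directly, unlike a black-box model that returns only an answer. All feature names and weights below are invented for illustration.

```python
# Hypothetical interpretable scorer: every prediction comes with a
# per-feature breakdown showing how the answer was reached.
from math import exp

# Invented, fixed weights — in a real system these would be learned.
WEIGHTS = {"signal_strength": 1.2, "source_reliability": 2.0, "recency": 0.5}

def score(features):
    """Return a probability plus a per-feature breakdown of the decision."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    z = sum(contributions.values())
    prob = 1 / (1 + exp(-z))          # logistic squashing to (0, 1)
    return prob, contributions

p, why = score({"signal_strength": 0.8,
                "source_reliability": 0.9,
                "recency": 0.4})
# 'why' shows exactly how much each input pushed the score — the kind of
# audit trail Huebner says black-box analytics lack.
```

Because the breakdown in `why` accounts term by term for the final score, an analyst can verify the reasoning — which is what makes an analytic "decision-ready" in the sense described above.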

Both teams are also working to mitigate bias in the AI the CIA deploys. Bias can seep in through the training data or the way ML analytics are trained. Biased data can sometimes be useful for training, but it may include private information with no foreign-intelligence relevance. The agency and its data science team are working to balance the use of appropriate training data with solid privacy safeguards.

Presently, the CIA is trying to develop practical rules for insiders to follow when working on new projects, to reduce risks around privacy, explainability, and bias.

Huebner added, "It's great that people are using the technology in the commercial space, but we are not pushing you to a better brand of coffee here—we need more accuracy and we need to know how you got there."
