Framing the Right Testing Strategy to Avoid the Challenges of Unethical AI

The benefits of artificial intelligence are spreading across industries and finding their way into all kinds of technical domains. From education to manufacturing, the technology has served every sector well, introducing innovations across its verticals. But, as experts warn, the broader AI adoption becomes, the higher the risk of "AI gone wrong," where algorithms evolve on their own and make unintended decisions.

In a recent blog for Forrester, Vice President and Principal Analyst Diego Lo Giudice discussed the expansion of artificial intelligence and the growing need for checks and balances. Testing AI, however, is not as simple as testing traditional software: as Lo Giudice puts it, how can one test something when the desired or anticipated outcome is unknown? He recommends that companies deploying a wide array of AI-infused applications prioritize testing those applications that present the highest risks.
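One common answer to the question Lo Giudice raises, drawn from the broader software testing literature rather than from the article itself, is metamorphic testing: when the correct output is unknown, assert a relation that must hold between the outputs of related inputs. Below is a minimal sketch in Python; the classify_sentiment function is a hypothetical toy placeholder for whatever model is under test.

```python
# A metamorphic test asserts a relation between outputs of related inputs,
# so no known "correct" answer is needed. classify_sentiment is a toy
# placeholder model, not anything described in the article.

def classify_sentiment(text: str) -> str:
    """Toy stand-in: positive if 'good' outnumbers 'bad'."""
    words = text.lower().split()
    return "positive" if words.count("good") > words.count("bad") else "negative"

def test_case_invariance(text: str) -> None:
    # Relation: changing letter case must not flip the predicted label.
    assert classify_sentiment(text) == classify_sentiment(text.upper())

def test_duplication_invariance(text: str) -> None:
    # Relation: repeating the input must not flip the label either.
    assert classify_sentiment(text) == classify_sentiment(text + " " + text)

test_case_invariance("the service was good")
test_duplication_invariance("the service was good")
```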

According to Lo Giudice, when it comes to the actual work of testing, there is good news and bad news. The good news is that frameworks for testing AI are emerging, and large tech firms are building artificial intelligence delivery platforms that include testing. The bad news is that for AI, testing doesn't end when the software is deployed; in fact, it never ends. It therefore becomes crucial to continuously monitor and test the model in production to determine whether it is "drifting" from its original intent.
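As a concrete illustration of the kind of production monitoring described here, the sketch below compares a live feature sample against its training-time baseline with a two-sample Kolmogorov-Smirnov test. The data, sample sizes and the 0.05 significance threshold are illustrative assumptions, not part of the article.

```python
# Minimal production drift check: compare a live feature sample against
# the training-time baseline with a two-sample Kolmogorov-Smirnov test.
# The data and the 0.05 threshold below are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(baseline: np.ndarray, live: np.ndarray,
                p_threshold: float = 0.05) -> bool:
    """Return True if the live sample likely differs from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < p_threshold  # low p-value: distributions differ

rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)      # training-time data
live_same = rng.normal(loc=0.0, scale=1.0, size=1_000)     # unchanged behavior
live_shifted = rng.normal(loc=0.8, scale=1.0, size=1_000)  # mean has moved

print("drift on unchanged data:", has_drifted(baseline, live_same))     # expected False
print("drift on shifted data:  ", has_drifted(baseline, live_shifted))  # expected True
```

In practice a check like this would run on a schedule against each monitored feature, raising an alert or triggering retraining when a distribution shifts.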

Lo Giudice believes now is the right moment to raise these issues, because there is still time to develop methods and protocols for artificial intelligence testing before too many stories of "AI gone wrong" erode trust in the technology.

The Right Testing Strategy for AI Systems

According to an Infosys report, because an AI system has several potential failure points, its test strategy must be carefully structured to mitigate the risk of failure. Organizations must first understand the various stages in an artificial intelligence framework; with that understanding, they can define a comprehensive test strategy with specific testing techniques across the entire framework. Here are some key AI use cases that must be tested to ensure proper system functioning:

•  Testing standalone cognitive features such as natural language processing (NLP), speech recognition and optical character recognition (OCR); a minimal example test for an OCR feature appears after this list.

•  Testing artificial intelligence platforms such as IBM Watson, Infosys NIA, Azure Machine Learning Studio, Microsoft Oxford, and Google DeepMind.

•  Testing ML-based analytical models.

•  Testing AI-powered solutions such as virtual assistants and robotic process automation (RPA).
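As a minimal illustration of the first use case above, the sketch below tests an OCR feature with a similarity threshold rather than an exact-match assertion, since character-level confusions (such as l/I or O/0) make exact matching too brittle. The run_ocr function is a hypothetical stand-in for the engine under test, and the 0.8 threshold is an assumption.

```python
# Tolerance-based test for a standalone OCR feature: assert similarity to
# the expected text rather than exact equality. run_ocr is a hypothetical
# placeholder for the real engine; the 0.8 threshold is an assumption.
from difflib import SequenceMatcher

def run_ocr(image_path: str) -> str:
    """Stand-in for the OCR engine under test."""
    return "lnvoice No. 4O17"  # typical confusions: I -> l, 0 -> O

def assert_ocr_close(image_path: str, expected: str,
                     min_similarity: float = 0.8) -> None:
    actual = run_ocr(image_path)
    similarity = SequenceMatcher(None, expected, actual).ratio()
    assert similarity >= min_similarity, (
        f"similarity {similarity:.2f} < {min_similarity}: "
        f"expected {expected!r}, got {actual!r}")

assert_ocr_close("invoice.png", "Invoice No. 4017")  # passes at ~0.88 similarity
```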

As noted by Infosys, artificial intelligence frameworks typically follow five stages: learning from various data sources; input data conditioning; machine learning and analytics; visualization; and feedback. Each stage has specific failure points that can be identified using the aforementioned techniques. Thus, when testing AI systems, QA departments must clearly define the test strategy by considering the various challenges and failure points across all stages. Such a comprehensive test strategy will help organizations streamline their artificial intelligence frameworks and minimize failures, thereby improving output quality and accuracy.
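To make the stage-wise strategy concrete, here is a minimal sketch of a stage-keyed test plan covering the five stages named above. The stage names come from the article; the individual checks are illustrative placeholders for whatever validations an organization defines at each stage.

```python
# Stage-keyed test plan mirroring the five framework stages above.
# The checks are illustrative placeholders, not a real test suite.
from typing import Callable, Dict

def check_data_sources() -> bool:   # e.g., schema and completeness checks
    return True

def check_conditioning() -> bool:   # e.g., null handling, range validation
    return True

def check_ml_analytics() -> bool:   # e.g., accuracy above an agreed baseline
    return True

def check_visualization() -> bool:  # e.g., dashboards expose expected fields
    return True

def check_feedback() -> bool:       # e.g., the feedback loop updates the model
    return True

TEST_PLAN: Dict[str, Callable[[], bool]] = {
    "learning from data sources": check_data_sources,
    "input data conditioning": check_conditioning,
    "machine learning and analytics": check_ml_analytics,
    "visualization": check_visualization,
    "feedback": check_feedback,
}

for stage, check in TEST_PLAN.items():
    print(f"{stage:32} {'PASS' if check() else 'FAIL'}")
```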
