Eliminating AI Bias: Human Intelligence is Not the Ultimate Solution

There is a need for the global tech industry to eliminate AI bias in 2022

For a long time, technology has been promoted as 'neutral' and 'bias-free'. The dominant slogan, so to speak, was "neural is neutral", which in course of time metamorphosed into "virtual is neutral". But nothing stays one-sided forever. With the advent of ever more sophisticated technologies, awareness of tech bias has grown. Take the case of AI: arguably the most cutting-edge technology of all, it has been consistently criticized for its bias. Tech developers and promoters have responded in unison, arguing that AI bias can be eradicated, or at least minimized, by keeping a 'human in the loop'. Is it really so?

The core idea behind the 'human in the loop' argument is that, notwithstanding the great progress in AI and the parallel call for 'AI autonomy', there is a limit to how far AI can go, and that is precisely where human intelligence and intellect can not only intervene but also gain the upper hand. To put the point more sharply, AI is inherently 'schematic' while human beings are 'organic'. Yet with the passage of time another question has surfaced, marking a further turn in the debate: is a 'human in the loop' really capable of ridding AI of bias?
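To make the debate concrete, consider a minimal, purely hypothetical sketch of what a 'human in the loop' arrangement often looks like in practice: an AI system's prediction is acted on automatically only when its confidence is high, and is otherwise routed to a human reviewer. The function names, the confidence threshold, and the model interface below are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Decision:
    label: str              # the outcome that will be acted on
    confidence: float       # model's self-reported confidence, 0.0 to 1.0
    reviewed_by_human: bool  # True if a person made the final call

def human_in_the_loop_decide(
    model_predict: Callable[[Dict], Tuple[str, float]],
    ask_human: Callable[[Dict, str], str],
    case: Dict,
    confidence_threshold: float = 0.9,  # arbitrary illustrative cutoff
) -> Decision:
    """Route low-confidence predictions to a human reviewer.

    `model_predict` and `ask_human` are placeholders for whatever model
    and review workflow an organization actually uses.
    """
    label, confidence = model_predict(case)

    # Automate only when the model is confident; otherwise defer to a person.
    if confidence >= confidence_threshold:
        return Decision(label, confidence, reviewed_by_human=False)

    human_label = ask_human(case, label)
    return Decision(human_label, confidence, reviewed_by_human=True)

if __name__ == "__main__":
    # Toy stand-ins: a "model" that is unsure, and a "reviewer" who overrides it.
    fake_model = lambda case: ("approve", 0.62)
    fake_reviewer = lambda case, suggestion: "reject"

    print(human_in_the_loop_decide(fake_model, fake_reviewer, {"applicant_id": 42}))
```

The sketch only shows that 'human in the loop' is, at bottom, a routing policy; whether the routed decisions turn out any less biased depends entirely on the human reviewer, which is exactly the question the rest of this piece takes up.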

One cannot overlook the fact that while the human factor in managing AI is being promoted, there is a counter-trend too. A number of leading experts in AI studies confidently predict that by the middle of this century AI will witness such phenomenal growth that, by acting as a 'supplement' to the human brain and thus making itself absolutely indispensable, it will 'guide' the thinking and decision-making processes of human beings, be it in the political, economic, or commercial domain. The crux of the argument is that with AI attaining new heights of 'superintelligence', it may soon overwhelm human intelligence. That implies not just faster decisions but more 'reasoned', 'objective', and 'accurate' decision-making ecosystems. One cannot dismiss such a claim outright and call it 'false', mainly because it comes from experts who have been intensely involved in AI research for decades.

There is also the vital issue of human understanding of AI when one relies on the 'human in the loop' logic. It is common knowledge that AI is moving fast and in multiple directions, and it is not easy to come to terms with its development, including AI bias. The matter is complicated further by misperception, or even mistrust, among users of AI applications. This, in turn, raises a number of legal, economic, and ethical questions that the 'human in the loop' must address and negotiate not only carefully but also successfully. What is important to note here is that if AI superintelligence materializes in full while users lag in understanding how it functions, there may come a day when AI-led decisions are prioritized over human-mediated decisions for purely pragmatic reasons.

One need not be hyper-enthusiastic in forecasting a specific time by which AI will supersede human beings. AI confronts many adversarial factors, including its inability to identify a specific context and react accordingly. AI also frequently falls victim to hacking, which severely undermines its credibility and autonomy. Yet, as the discussion reveals, applying a 'human in the loop' strategy in a routine manner to an ultra-dynamic situation will not be a viable solution as such.

So it is not an outright win for those who advocate the 'human in the loop' strategy, nor for those who sing the praises of AI's 'unbound autonomy'. Instead, the search must continue for the till-now-elusive optimal point: a judicious blend of AI superintelligence and human intelligence, backed by appropriate governance regulations, that serves the interest of users at large.
