
Your Government Might be Taken Over by AI and You Won’t Even Know

Written by: sirisha

There is always a thin line between what you need to see and what AI chooses to show

Government essentially means making decisions on behalf of the people for their well-being; accountability, transparency, and responsiveness constitute its main tenets. Yet artificial intelligence is steadily invading governance systems, and governments are increasingly depending on algorithms to inform decisions, or rather allowing algorithms to make decisions outright. Governments that come to power promising to check the highbrow strategies of corporate honchos have somehow sunk deep into the mire of black-box algorithms. No doubt AI, and the algorithmic magic it spins, has benefits that go beyond the commercial motives of businesses, but there is a thin line between what you need to see and what AI chooses to show. Corporations, at least for the time being, can get away with the hide-and-seek game of AI algorithms, but can governments pass the baton of responsibility to AI if things go wrong? With the US election date fast approaching, the Electronic Privacy Information Center (EPIC), an independent nonprofit research center based in Washington, DC, has published a research report with rare insights into how governments are (mis)using predictive analytics for crucial public-welfare decisions, and doing so stealthily. The findings hold implications well beyond DC's boundaries, as the report documents authorities in other cities using bureaucratic algorithms across various departments to make decisions. The researchers conclude that there are likely more algorithms at work than they could identify.

The report says there is substantial evidence of lending institutions using algorithmic tools to target low-income communities for predatory loans, an example of digital predatory inclusion also called 'reverse redlining' or 'algorithmic redlining': consumer data is mined to find financially vulnerable borrowers for deceptive sales tactics. The report cites numerous other examples from education, health care, and law and order to demonstrate how algorithms are quietly putting the public interest at risk. It compares automated decision-making (ADM) systems to an autocratic judge, one not bound to explain his judgment to the people, which in a real-world scenario would be outright unconstitutional. Moreover, ADM systems have no appeal mechanism, which makes it all the more important to dissect how they work. Ironically, the report says, public agencies do not consider such scrutiny essential. They cite private companies' proprietary business models and the possibility of fraudsters gaming a disclosed system as reasons, absurd and self-serving ones, for not attempting to understand the algorithms' internal logic.

Then comes the question of how far human decisions are themselves bias-proof. Policymakers and watchdogs are aware of these factors and have been working to design mechanisms of checks and balances; that does not imply the governance system should outsource its discretionary functions to opaque and untested automated systems. The report elaborates on how automated systems standardize the status quo in the social system. For example, ShotSpotter, an ADM system designed to curb gun violence, could not reliably detect gunfire. Its sensors were placed almost exclusively in Black- and minority-dominated areas, and 90% of the time the data collected did not yield any significant leads. A few false ShotSpotter alerts, however, were enough to send police into communities whose members are vulnerable to police violence. EPIC suggests citizens can seek redress when algorithms deliver unjust or unfair outcomes by demanding that governments disclose the working details of the algorithms in use. In some cases, elected officials have advocated for public registries of automated decision-making systems. But researchers warn of the risk that such registries amount to eyewash: a research paper published by Oxford University describes how Amsterdam's AI registry omits certain problematic tools, calling the practice 'ethics theater'.
