AI Governance Needs Institutions, Not Just Regulation: Vilas Dhar Explains the Real Challenge

Why Building Durable Public Institutions Matters More Than Regulation in AI Governance

The ethical issues of artificial intelligence have become far more tangible as machine learning algorithms and large language models affect industries and societies in the real world. The first ethical questions were raised by AI's theoretical risks; now its impact is concrete and observable. Governance frameworks need to evolve to reflect this shift.

Recently, Priya Dialani of the Analytics Insight Podcast interviewed Vilas Dhar, President of the Patrick J. McGovern Foundation, about why the regulation vs. innovation dilemma is the wrong framing for AI governance. Selected excerpts from the interview follow:

Why Are AI Ethics Frameworks Often Ineffective in Practice?

Enterprises struggle to implement ethical principles when those principles are misaligned with internal processes. Many companies adopt core values such as fairness and transparency, yet fail to embed them in product development, procurement, and day-to-day decision-making across departments.

There is no built-in mechanism that allows engineers and leaders to pause or question the unethical consequences of deploying AI solutions.

Who Holds Responsibility for Ethical AI Systems?

Ethics cannot be delegated to the AI itself; responsibility lies solely with humans, whether programmers, CEOs, policymakers, or users of these technologies. AI has no ethics of its own; it simply carries out the functions it was designed to perform.

Describing AI technologies as ethical by default can become a way to deflect accountability from the people behind them. Every individual and institution involved in AI must carefully consider its consequences and implications.

How Does Funding Influence the Creation of Responsible AI Technologies?

How AI development is funded shapes both its direction and its inclusiveness. Historically, the public sector drove many foundational AI discoveries, and that funding often supported research aimed at benefiting society.

Private-sector funding accelerates progress but tends to be profit-driven. A balanced approach would combine investment in public infrastructure, talent development, and open data with clear, predictable regulation.

How Can Developing Countries Influence Global AI Governance?

Developing countries are already shaping global AI governance through their own regulatory models, grounded in practical implementation and its social implications. India, for example, has been a pioneer in this area, building public digital infrastructure and broadening access to data.

Involving the developing world in this process matters because it ensures the resulting governance model reflects a diversity of contexts rather than the views of a handful of states.

Why Should AI Be Seen as Infrastructure Rather Than a Product?

As AI is woven into decision-making across ever more spheres of life, it is better understood as infrastructure underpinning those processes than as a discrete product.

Just like electricity or the internet, AI requires governance and international cooperation.

Listen to the full discussion on the Analytics Insight Podcast.

