

The ethical issues of artificial intelligence have become far more tangible as machine learning algorithms and large language models reshape industries and societies. The first ethical questions about AI were prompted by theoretical risks; today its real-world impact is evident, and governance frameworks must evolve to address it.
Recently, Priya Dialani of the Analytics Insight Podcast interviewed Vilas Dhar, President of the Patrick J. McGovern Foundation, about why the "regulation vs. innovation" dilemma is a misleading framing for AI governance. Selected excerpts from the interview follow:
Enterprises struggle to put their ethical principles into practice when those principles are misaligned with internal processes. Many companies adopt core values such as fairness and transparency, yet fail to embed them in product development, procurement, or decision-making across departments.
Few organizations have a built-in mechanism that lets engineers and leaders pause or question an AI deployment with potentially unethical consequences.
Ethics cannot be delegated to the AI itself; responsibility rests entirely with humans, whether programmers, CEOs, policymakers, or users of these technologies. AI has no ethics of its own: it simply carries out the functions it was designed to perform.
Describing AI technologies as ethical by default risks diluting accountability. Every individual and institution involved in AI must carefully weigh its consequences and implications.
How AI development is funded shapes both its direction and its inclusiveness. Historically, the public sector drove most AI breakthroughs, funding research aimed at benefiting society.
Private-sector funding accelerates progress but tends to be profit-driven. A balanced approach would combine investment in public infrastructure, talent development, and open data with clear regulation that gives developers certainty.
Developing countries are already shaping global AI governance through regulatory models of their own, grounded in practical implementation and its social implications. India, for example, has been a pioneer in this area, building public digital infrastructure and opening access to large-scale data.
Involving the developing world in this process matters because it ensures that the resulting governance model reflects genuine diversity rather than the views of a handful of states.
As AI becomes integrated into decision-making across many spheres of life, it is increasingly necessary to treat it as infrastructure that underpins those processes.
Like electricity or the internet, AI requires governance and international cooperation.
Listen to the full discussion on the Analytics Insight Podcast.