Building ‘Human-Centric AI’ for Secure and Liable Future

The European Union's regulations on AI draw a line between development and exploitation

Artificial Intelligence (AI) is widely heralded as an ongoing revolution transforming science and society alike. The shift towards automated decision making (ADM) and AI is predicted to change society beyond recognition. However, AI is in urgent need of guidelines, as an unauthorised application could turn harmful.

Applications of artificial intelligence such as machine learning, deep learning, neural networks and big data are reshaping data processing and analysis. Autonomous and semi-autonomous systems are increasingly used in a variety of sectors including healthcare, transportation, manufacturing, education and business. Ultimately, AI is reaching into every sector possible. Even as this powerful transformative force makes a profound impact across societal domains, the debate over its principles, values and ethics stirs both anger and fear.

Governments and joint committees across the globe are trying to figure a way out of this fear. Even though AI's developments today look promising, the technology can turn harmful in the wrong hands. Hence, it is prudent to have rules and regulations that set out a framework for what AI and its technologies should and should not do. Such rules must uphold fundamental rights, ethical principles, regulatory safeguards and liability, and they should protect democratic societies and citizens as users and consumers.

The European Union's AI Legal Framework

The European Union (EU) is one of the largest international bodies, with 27 member states, many of which carry considerable weight in international decisions. Artificial intelligence development across EU countries is also growing at lightning speed, with something new emerging every day. Hence, the union has come up with regulatory norms to draw a line between development and exploitation. The EU outlines the weaknesses in current legislation and introduces new options for future measures on AI. While its guidelines include rules on safety, liability, fundamental rights and data, they are also designed not to hamper the growth of the AI market in Europe.

The commission's framework envisages mandatory requirements for high-risk AI applications and steps up testing of high-risk AI technology before its deployment or sale within or outside the EU market. The commission seeks to ensure that citizens have the best chance to reap the benefits of digital transformation, while at the same time avoiding potential harm to society and individuals.

The EU supports AI research and development and is boosting investment in the field. The union aims to craft future-proof laws that enable a citizen-oriented digital transformation in which nobody is left behind. The EU has made AI funding available through programmes such as the Digital Europe Programme and Horizon Europe. However, it should also address the participation gaps that exist in current funding programmes.

The commission is undertaking a comprehensive exercise to identify disruptive developments brought about by the application of AI across society. Widespread uptake of funding by companies across the union is of utmost importance, and the risks associated with new technologies demand keen attention. The EU's rules on consumer protection, safety and liability, as well as those dealing with privacy and transparency, should be updated to make sure they remain fit for purpose and that consumers are protected, particularly if AI products cause them harm.

It is very important to have a 'human-centric' AI system, one in which people retain control at every stage, minimising the risk of exploitation. The EU's top priority for the future is to continuously renew itself: constantly updating the rules in step with emerging technologies will keep societal norms on AI on track.

Other International Union's Guidelines on AI

OECD and partner countries

The OECD's 36 member countries, along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, signed up to the OECD Principles on Artificial Intelligence, formally adopting the first set of intergovernmental policy guidelines on AI. The agreement upholds international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy. It was elaborated with guidance from an expert group of more than 50 members drawn from governments, academia, business, civil society, international bodies, the tech community and trade unions. The Principles comprise five values-based principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation. They aim to guide governments, organisations and individuals in designing and running AI systems in a way that puts people's best interests first and ensures that designers and operators are held accountable for their proper functioning.

United Nations members

United Nations member countries are continually working to regulate AI's growing influence. This was brought to light through a survey of international organizations describing the approaches that UN agencies and regional organizations have taken so far. As the regulation of AI is still in its infancy, guidelines, ethics codes, and actions by governments and their agencies are also addressed. While the country survey looks at various legal issues, including data protection and privacy, transparency, human oversight, surveillance, public administration and services, autonomous vehicles, and lethal autonomous weapons systems, the most advanced regulations were found in the area of autonomous vehicles, in particular for the testing of such vehicles.

Canada was the first country to launch a national AI strategy, in 2017. Such strategies and action plans highlight, among other things, the need to develop ethical and legal frameworks to ensure that AI is developed and applied in line with a country's values and fundamental rights.

In 2008, South Korea enacted a general law on the 'intelligent robot industry' that, among other things, authorized the government to enact and promulgate a charter on intelligent robot ethics. However, it appears that no such charter has yet been adopted.

Analytics Insight
www.analyticsinsight.net