EU's AI Laws Set to Begin: Compliance and Challenges


This year, with the world awash in AI-driven innovation, the European Union has set out to define a framework for the effective regulation of artificial intelligence. Enter the new Artificial Intelligence Act: a landmark piece of legislation that attempts to achieve three goals at once: encourage innovation, respect people's rights and liberties, and prevent the misuse of AI.

As proposed regulations proliferate around the globe, they boil down to five things: decisional transparency, personal data protection, user control, oversight of automated decision-making, and training.

The EU spent several years developing and agreeing on general principles and key non-binding guidelines for regulating AI applications, work that has now culminated in comprehensive legal regulation. While these are not the only EU rules adopted on such issues, the Act is the first of its kind globally and is likely to have ripple effects not only within the Union but in other countries struggling to respond to the rise of AI.

So, what is the AI Act?

In essence, the AI Act is an attempt at a comprehensive yet harmonious regulation of artificial intelligence development for Europe, one that enables innovation while respecting the EU Charter of Fundamental Rights, democratic values, and the environment. To achieve these objectives, it adopts a risk-based approach, placing different obligations on different applications of AI depending on the risk they pose.

This article aims to shine a light on why the new EU AI Act, which on the surface may appear to give organizations considerable time to prepare for AI regulation and compliance, involves far more than the governance of private data. Privacy compliance typically rests with a few designated individuals; AI, by contrast, is a much broader concern that touches the entire organization. AI governance and compliance are cross-departmental processes that monitor every system and technology the company uses, covering a significantly higher number of processes than privacy alone. Preliminary steps toward compliance need to begin now, even with a year or more before the planned implementation deadlines. Let us look at what the EU AI Act itself provides and requires.

Unpacking the EU AI Act: From Privacy to AI Compliance

The EU Artificial Intelligence Act is a legislative instrument that seeks to regulate the development, deployment, and use of artificial intelligence technologies across the European Union. The Act is almost certain to be ratified, in which case it will come into force in 2025. It covers almost everything from generative AI to the more notorious facial recognition surveillance systems, the latter of which are effectively banned except for limited law enforcement uses. Access to information, professional education and training, and the use of decision support systems were highlighted in the 2021 draft and remain highlighted in the 2023 version.

The EU's AI laws also target social scoring, where AI ranks or labels individuals based on sensitive characteristics and profiles that control their opportunities, infringing on civil liberties and ultimately locking people out based on AI's determination of their worth. The Act imposes penalties of up to €35 million (about $38 million) or up to 7% of a company's worldwide turnover for noncompliance (European Union, 2019). It will be voted on early next year, with some areas potentially being refined, and will encompass wide-ranging aspects of bias, discrimination, privacy, and training.

Guidelines for Generative AI

General-purpose AI (GPAI) systems must meet certain transparency standards, such as providing technical documentation of the model, complying with copyright law, and summarizing their training data. While general pattern-finding itself is not problematic, the current proposals require GPAI providers to guard against misuse and to ensure that high-impact models are consistently assessed for systemic risks.

What’s Prohibited?

The Act formally prohibits specific AI-driven services that can infringe on human rights and the democratic process. It bans systems that categorize people, and adversely affect their outcomes, based on sensitive attributes such as political views or race; scraping facial images without consent to build recognition databases; emotion recognition in employment and education; social scoring; systems that deceive people; and systems that exploit individuals' vulnerable circumstances.

Sanctions and Next Steps

Non-compliance with the EU's AI laws will result in fines ranging up to €35 million or 7% of global turnover for the most serious violations, down to €7.5 million or 1.5% of turnover for lesser ones. As when the GDPR was enacted, we should expect a similar penalty trend to unfold. After GDPR took effect on May 25, 2018, only a single sanction followed at first: a €400,000 fine in July 2018. By January 2019, nine sanctions had been issued for a total of €50,437,276 in fines; enforcement has since risen sharply, reaching 1,926 sanctions and €4,418,042,604 in fines to date. The same trend is likely for EU AI Act sanctions once they commence in 2025.
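As a rough illustration of how these tiered caps combine a fixed amount with a turnover percentage, here is a minimal sketch, assuming (as under GDPR) that the applicable cap is the higher of the two figures; the tier names and the helper function are hypothetical, and the amounts follow the figures cited above:

```python
def max_fine(global_turnover_eur: float, tier: str = "prohibited") -> float:
    """Return the maximum possible fine in EUR for a given violation tier.

    Assumes the cap is the HIGHER of the fixed amount and the
    turnover percentage, mirroring the GDPR's penalty structure.
    """
    tiers = {
        # Most serious violations: EUR 35M or 7% of global turnover
        "prohibited": (35_000_000, 0.07),
        # Lesser violations: EUR 7.5M or 1.5% of global turnover
        "minor": (7_500_000, 0.015),
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# A company with EUR 1 billion in global turnover:
print(max_fine(1_000_000_000, "prohibited"))  # 70000000.0 (7% exceeds EUR 35M)
print(max_fine(1_000_000_000, "minor"))       # 15000000.0
```

For a smaller firm whose 7% of turnover falls below €35 million, the fixed amount becomes the binding cap instead.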

Law Enforcement Safeguards

For the use of remote biometric identification (RBI) systems in publicly accessible spaces, the Act provides safeguards requiring prior judicial approval and limiting both "post-remote" and "real-time" RBI to targeted searches related to a serious crime or an existing terrorism risk.

Obligations for High-Risk Systems: EU's AI Laws Challenges

High-risk AI systems, those that pose hazards to people, must undergo fundamental rights impact assessments, a requirement of particular concern to the insurance and banking industries. These high-risk systems are accountable to citizens, who can mount legal challenges against decisions that affected their rights.

As for measures to foster innovation and support SMEs, the Act encourages legal testing through regulatory sandboxes and real-world trials, allowing AI solutions to mature before deployment.

International Reach and Influence on Generative AI

The AI Act directly targets the EU's 450 million residents, but it could shape global AI policies, since the EU serves as a standard-setter for tech regulation. The Act specifically covers models such as OpenAI's GPT, mandating transparency and formulating tighter regulation for the most sophisticated models.

Comparison with Other Countries' Approaches

Both the US and China have also begun moving on AI regulation. The US has taken a lighter-touch approach built on voluntary commitments from technology firms, while China has legally restricted certain private uses of AI but deploys facial recognition extensively for state purposes.

In recent weeks, criticisms have been leveled at the EU's proposed AI Act, which many read as more an ambition to organize AI in a human-friendly way than a practical rulebook. Concerns remain about regulating AI at all; the president of France, Emmanuel Macron, for example, argues that the EU's Artificial Intelligence Act risks stifling the European digital industry and pulling it behind the technological market leaders in the United States, United Kingdom, and China, where even the most basic rules are still under discussion.

Wrapping Up: Final Thoughts

The EU AI Act marks a crucial turning point in the decades-long effort to establish ethical standards and responsible use of AI. It's not without flaws, and every business will have its grievances regarding compliance demands. Nonetheless, it's a rule with substantial power and is a component of an expanding movement aimed at safeguarding the public from the potential dangers of new technologies like AI. It also serves as a mechanism for businesses to safeguard their reputations and financial interests, even if they were not seeking such support.

Companies must continue to focus on innovation, as we are truly in an era of remarkable technological advancements. However, ensuring adherence to regulations and protecting their organizations from the erosion of trust that follows ethical violations should be a top concern for all senior AI leaders, whether they sit inside the EU or beyond.

Analytics Insight