AI regulation: A framework for the future

As Learnosity’s VP of Legal and Privacy, Jamie Armstrong needs to have a keen eye for important details—including all things AI. Here, he breaks down the key elements of the EU’s AI regulations, and gives his take on their import and impact.

Q. AI is developing at a blistering pace. The EU’s AI regulations are a clear response to this. Can you tell us a little about how they came to be and describe what the governance architecture currently looks like?

JA: It’s true that the rapid development of AI technology has been a catalyst for the proposal of laws and regulations. However, it’s also interesting to note that in the case of the EU AI Act, this law was proposed prior to generative AI tools being made widely available to the public. 

The EU AI Act was proposed in April 2021 by the European Commission, as part of a broader European AI Strategy. This is all part of the EU’s objective of building what has been referred to as a “resilient Europe for the Digital Decade”. 

Regarding governance, as indicated above, it's important to remember that while generative AI is top of mind for most people currently, the EU AI Act applies more generally to all AI. In fact, the Act was modified at a fairly late stage to specifically address what are referred to as general-purpose AI models, meaning the models behind products like ChatGPT, Claude, and Mistral.

The applicable governance requirements in the Act depend on the role of an organization with respect to AI (for example, whether it is a provider or deployer) and the risk classification of the AI system. Unsurprisingly, AI systems with higher risk will require more significant governance architecture. 

Finally, it's crucial to note that the EU AI Act is not yet in force. As of the time of writing [August 2024], the Act is expected to come into force in August this year. Some parts of it will not apply until a later date; for example, the provisions applying to high-risk AI systems will likely apply from August 2026.

Q. Some notable names, such as Max Tegmark and Geoffrey Hinton, have publicly argued that only strict regulation on AI will help prevent unintentional harms from arising. How strict are the EU’s regulations? In your view, do they go far enough?

JA: There are many different views on this question. 

Some actors, such as businesses seeking to use AI systems to bring new products to market quickly, may see the EU AI Act as very strict. Other companies might view it as a welcome development, particularly those that are well resourced and positioned to meet the requirements that will apply to them. At the other end of the spectrum, some people think the Act does not go far enough, though typically for specific reasons, such as law enforcement's use of AI to identify individuals through real-time biometric identification systems.

There are some parallels here to not-so-distant developments in data protection law, specifically the EU General Data Protection Regulation. When the GDPR was finalized in 2016, it was generally recognized as strict. Over time, however, most organizations have become comfortable operationalizing its requirements. The difference here is that the GDPR replaced pre-existing data protection law, whereas the EU AI Act is really ground zero when it comes to regulating AI specifically.

My view is that we need greater clarity on many parts of the EU AI Act so that businesses can better understand what they need to do to comply. That said, this is not atypical of new laws. The issue is magnified in this case because of the huge public attention on AI and the significant penalties that can be applied for failure to comply with the Act's requirements.

Q. Innovation requires a degree of freedom, while safety depends on adhering to established rules or principles. There’s a natural tension between the two objectives. As they currently stand, do the EU’s regulations achieve a harmonious balance between them?

JA: That’s a great question. The EU certainly considers that it has struck an appropriate balance between fostering innovation and the development of AI systems on the one hand, and ensuring this is done in a way that respects the rights and safety of individuals on the other. As I mentioned before, the EU AI Act adopts a risk-based approach to regulation.

At the sharp end, the Act identifies AI systems considered to pose an unacceptable risk to people, such as systems that exploit specific groups, remotely monitor people, or are used for social scoring.

At the other end of the scale are minimal-risk systems, such as recommendation engines and spam filters, which are permitted without any mandatory requirements due to their limited risk to safety. In the middle are high-risk and limited-risk systems, which are allowed, though the former must satisfy a number of requirements. Examples of high-risk AI systems are those used in employment, education, and healthcare, because they can significantly affect the individuals who use them or about whom decisions are made based on their outputs.

As noted before, there is a range of views here, but a widely held view at this stage is that the requirements applicable to high-risk systems are quite onerous, or alternatively that not all of the AI systems that fall, on their face, within the high-risk category ought to be subject to such onerous requirements.

Q. As a global company with a European base, how do the regulations impact and influence our work in AI at Learnosity?

JA: Any organization seeking to place an AI system on the market, or put such a system into service, in the EU will have to comply with the EU AI Act. Learnosity is no different. As a business headquartered in Ireland, we need to comply with the EU AI Act and to help our customers do the same.

We’ve seen from experience that the EU frequently leads the way in regulation with respect to data and typically sets a high bar in terms of requirements, with other countries and territories then following with their own laws. Being subject to EU laws can in this way be seen as an advantage: organizations subject to them are usually best prepared to comply with laws that later arrive elsewhere.

Preparing for AI-specific regulation, such as the EU AI Act, requires a cross-functional effort across many different parts of a business, including product development, support, security, legal, privacy, and others, depending on the size and complexity of the organization. It's important to do this promptly so that the development of AI systems aligns with the applicable legal requirements from the outset.

Learnosity has established AI-specific policies and procedures, as well as cross-functional AI governance communications, to ensure that the business is aware of the developing legal landscape. Privacy teams are often well placed to take an active role in AI governance and compliance efforts, given the experience they have acquired in rolling out privacy law compliance programs.

Q. Nothing stands still in tech—but AI greatly contributes to the sense of flux. What do you envision might happen next in terms of regulation? In what kinds of ways will regulatory frameworks need to evolve to navigate the challenges of rapid change?

JA: I am most interested to see how the EU AI Act shapes regulation of AI systems in other parts of the world. 

As I mentioned earlier, once the EU lays down a marker, other places often follow with similar laws. However, we have generally seen two approaches develop with respect to AI regulation: one that involves passing new AI-specific laws, as in the EU, and another that favors a softer approach built around standards, codes of conduct, and the like.

In the US, two states have passed AI-specific laws this calendar year, and active bills remain under consideration in several others. At the federal level, a number of executive orders concerning AI have been issued, along with legislation on AI use in government and non-binding frameworks such as those published by NIST. But nothing is as broad in scope as the EU AI Act.

You are right to note that it is a challenge for regulatory frameworks to evolve with rapid change. Regulations reflect policy decisions with respect to, in this case, technology, and regulatory frameworks are always playing catch-up with it. I mentioned an example of this earlier: the draft EU AI Act was hastily updated at a relatively late stage in the law-making process to address generative AI.

I think the balance lies in drafting regulatory frameworks that are technology-neutral (as far as is reasonably possible) so that they anticipate advancements, while at the same time offering enough clarity that organizations know how to comply. We have to recognize the challenge lawmakers face here, and in that light, leaving certain details open-ended for now so they can be more clearly defined at a later stage makes logical sense.

Micheál Heffernan

Senior Editor
