At times during 2023 it seemed like all anyone was talking about was generative artificial intelligence (“Generative AI”). Probably the most famous Generative AI offering, ChatGPT, was released to the public at the end of November 2022 and quickly became one of the fastest-growing consumer applications in history.
Since then, we have seen other Generative AI offerings based on large language models enter the marketplace, including Microsoft Copilot, Google Bard, LLaMA by Meta and countless others. In turn, businesses have been rapidly developing Generative AI products and introducing new Generative AI features to existing services in order to meet their customers’ demand to leverage these new technologies.
In this piece, I’ll outline what is currently going on in the European Union regarding the prospective regulation of artificial intelligence—including, but not limited to, Generative AI.
Among the daily flurry of news articles on AI, you may have noticed particular coverage relating to the EU AI Act at the end of 2023. I feel it might be useful to summarize here:
This is not intended to be a deep-dive but rather a quick summary. Although I am a lawyer, nothing in this blog post is legal advice.
The EU AI Act did not emerge as a response to the recent rise of Generative AI. In fact, the EU AI Act owes its origins to an earlier time, even before the EU General Data Protection Regulation (“GDPR”), which applies to the processing of personal data, became applicable in 2018.
A key publication in the history of the EU AI Act was the April 2018 Communication from the European Commission on Artificial Intelligence for Europe, which set out a European AI Strategy with the stated aim of “making the EU a world-class hub for AI and ensuring that AI is human-centric and trustworthy.” Since then, a great deal of work has been done within the EU institutions to propose (by the European Commission in 2021) and refine the actual text of the EU AI Act, a process that is still ongoing today.
The EU AI Act will be the world’s first comprehensive AI law and will regulate the use of artificial intelligence in the EU. Consistent with the objective of the EU Data Strategy, the EU AI Act seeks to balance innovation and the benefits of artificial intelligence to society with the potential risks to the health, safety and fundamental rights of EU citizens. It will apply not only to EU developers and deployers, but also to organizations outside of the EU that make their AI systems available to EU users.
As indicated above, the EU AI Act will regulate AI more broadly than just the Generative AI that has been so widely publicized recently. It will include a technology-neutral definition of AI and adopt a risk-based approach to AI systems by reference to the intended purpose of use of the system, with categories ranging from unacceptable risk to limited risk.
Unacceptable-risk AI systems will be banned, subject to some exceptions. Examples include social scoring based on social behavior or personal characteristics, and biometric categorization systems that use sensitive characteristics such as political, religious or philosophical beliefs, sexual orientation and race.
High-risk AI systems that may negatively affect fundamental rights (e.g. systems used in education and employment) will be subject to certain result-oriented requirements, including, in some cases, mandatory fundamental rights impact assessments and the reporting of serious incidents. Finally, limited-risk AI systems will have to comply with transparency requirements.
The EU AI Act will also specifically regulate general purpose and Generative AI, with more stringent obligations expected to apply to general purpose AI systems carrying “systemic risk”. There will be significant fines for failure to comply with the EU AI Act, in some cases in amounts even higher than those under the GDPR.
As you can imagine, with many involved parties holding different perspectives and interests, and considering the arrival of Generative AI in the marketplace during the negotiation of the EU AI Act, the process of reaching final agreement on a text has been lengthy and challenging. In fact, a big challenge shortly before political agreement was reached in December 2023 was whether the EU AI Act should regulate foundation models at all, with some countries preferring codes of conduct only.
As noted above, foundation models will in the end be regulated by the EU AI Act. What is important to note as of today is that the EU AI Act has yet to be finalized and therefore has not yet entered into force or become applicable. The publicity in December 2023 was on account of a political agreement on the EU AI Act having been reached.
This means that many details have yet to be finalized and therefore we do not yet have a final text of the EU AI Act to review. While it’s expected that the final version of the EU AI Act will reflect the political agreement reached, and that amendments to the most recent draft text from earlier in 2023 will be made accordingly, it is likely that some further changes to details will be made in the coming months.
Currently, technical meetings are ongoing as part of the EU law-making process to discuss and agree the final text of the EU AI Act. It was hoped that these technical meetings would conclude around the end of January 2024, following which the final text would need to be adopted by the European Parliament and Council and then formally published in the EU Official Journal. This seems likely to happen no earlier than the Spring of 2024.
After the final EU AI Act is published and in force, it is expected to become applicable in stages rather than immediately or all at the same time. Similar to the GDPR, this will give organizations that are providing and using AI systems a runway to come into compliance with the finalized requirements.
The provisions of the EU AI Act that apply to unacceptable risk AI systems will become applicable six months after the Act enters into force. For general purpose AI, the time period is one year, and most other provisions will be applicable no sooner than two years after the EU AI Act enters into force (i.e. likely no earlier than Spring of 2026). It is also possible but not yet confirmed that the final version of the EU AI Act may delay application of obligations for high-risk AI systems until three years after its entry into force (i.e. likely no earlier than Spring of 2027).
The EU AI Act is a significant forthcoming law that will regulate AI systems. Organizations that are providing and/or using AI, including but not limited to Generative AI, will need to study the final text of the EU AI Act, once available, in order to determine their obligations and undertake the necessary compliance activities. It will be important to see the detail of the final text, as the political agreement leaves many finer points to be resolved in the technical meetings that are presently ongoing.
To learn more about Learnosity’s work in AI, visit our Author Aide page.
Or for an overview of AI’s impact on assessment, download your copy of our AI guidebook.