This month marked a big step in AI regulation: the European Parliament and the Council of the EU officially adopted the Artificial Intelligence Act,* the world’s first binding and comprehensive legal framework for AI. Since I teach AI-related courses and use deep learning in my research, I decided to read the Act and see how it might affect our research and teaching. Just a heads-up: I’m not a lawyer, and this article isn’t meant to be legal advice.
A key aspect of the regulation is how it classifies AI systems based on the risk they pose to individuals and society. The first category covers AI systems with unacceptable risk, which will be banned from the EU market starting February 2, 2025. This category includes dystopian uses like social scoring or predicting someone’s criminal behaviour based on unrelated characteristics like nationality, place of birth, or type of car. The regulation will also ban AI systems that manipulate or deceive people using cognitive behavioural techniques, or that exploit vulnerabilities related to age or mental or physical disability.

These applications usually don’t have a big impact on universities, with one exception: AI applications for emotion recognition in educational and workplace settings. These will also be banned. For example, it will no longer be legal to infer students’ emotions, like boredom or excitement, from their facial expressions during lectures. I haven’t used such systems myself, and I find them deeply dehumanizing. So I think it’s a good thing they will be banned.
The next category is high-risk AI systems, which can cause serious harm to society and individuals if they go wrong. These include AI used in critical infrastructure, HR tools, healthcare, insurance, law enforcement, migration, and democratic processes. Developers and deployers of high-risk AI applications will have a lot of compliance obligations, like assessing and mitigating risks, providing technical documentation, and keeping use logs. People affected by high-risk AI will have the right to submit complaints and demand a “clear and meaningful” explanation of why a certain decision was made. All this means a lot of work for AI companies, which is why they are given extra time to prepare: most rules for high-risk AI will start on August 2, 2026.
How could this affect teachers and students? In my moments of weakness, I dream of AI taking over the grading workload. If that ever happens, such automated evaluation would be classified as a high-risk application under the “evaluating learning outcomes” category specified by the Act. The regulation also mentions other high-risk applications in education: admissions decisions, placement tests, and identifying misconduct during exams. Ultimately, this could mean even more work for teachers if they decide to use such algorithms.

The next category is general-purpose AI (GPAI) applications, such as large language models and chatbots. The key requirement for these systems is transparency. Providers of GPAI must maintain up-to-date technical documentation and publish summaries of the copyrighted data used for training. Additionally, deepfakes (AI-generated or manipulated images, audio, or video content) must be clearly labeled. If you use AI to generate illustrations for your lectures or papers, you will need to acknowledge this. But most people already do that anyway.
All other AI systems are seen as minimal risk and may be covered by voluntary codes of practice.
Note that private use of AI is not regulated by the Act, so if a professor uses ChatGPT for brainstorming lectures or editing emails, it’s totally legal. Likewise, a student can use a large language model to prepare their assignments unless the university’s code of conduct says otherwise.
The regulation doesn’t apply to AI systems developed just for research. However, researchers might still need to follow their university’s code of conduct. For instance, if someone creates a chatbot to help people quit smoking and wants to test it through an experiment, they have to stick to the ethical norms set by their university. But once the chatbot moves from the research world to healthcare, the Act requires providers to let users know they’re interacting with a chatbot. This rule wasn’t followed when thousands of users of Koko, an online emotional support chat service, got messages from a large language model without knowing it (see more here). Such practices will be illegal under the new regulation.
After reading the regulation, I still have a lot of questions. For instance, I’m not sure what the status of plagiarism detection AI software will be, or software that differentiates between human-generated and AI-generated text. We use automatic plagiarism detection as part of evaluating students’ assignments in our department. Since it could unfairly harm a student if they’re wrongly accused of plagiarism, should this software be considered high-risk AI, with all the compliance red tape that comes with it?
Globally, these regulations will increase demand for AI experts in both government and business. Plus, the Act explicitly requires AI system providers and deployers to ensure their staff have a sufficient level of AI literacy. This gives a competitive edge to graduates with basic AI knowledge, no matter their field. So, we’re likely to see more young people choosing study programs with a strong AI component.
* The full text of the regulation, published on July 12, 2024, can be found here: https://eur-lex.europa.eu/eli/reg/2024/1689/oj.