Author Sophia Antonopoulou – Associate
It’s already been more than a year since the Artificial Intelligence Act (AI Act), officially known as Regulation (EU) 2024/1689, entered into force for all member states on the 1st of August 2024. The AI Act, influenced by product safety regulations, introduced a risk-based approach, classifying AI systems according to their potential impact on fundamental rights. However, the meaning of “risk” has always been somewhat vague and raises many questions for businesses that use or develop AI models.
The AI Act defines 4 levels of risk for AI systems:
- Unacceptable risk: This covers AI systems considered a clear threat to the safety, livelihoods and rights of people, such as harmful AI-based manipulation and deception, social scoring, individual criminal offence risk assessment or prediction, and the other prohibited practices referred to in Article 5 of the AI Act. These AI systems are prohibited, although some of the prohibitions apply only under certain conditions rather than absolutely.
- High risk: AI use cases that can pose serious risks to health, safety or fundamental rights are classified as high-risk. This includes AI safety components in critical infrastructure (e.g. transport), AI solutions used in educational institutions, AI systems used for remote biometric identification, emotion recognition and biometric categorisation, and other use cases referred to in Article 6 of the AI Act. Pursuant to Section 3 of the AI Act, high-risk AI systems are not prohibited per se, but they are subject to strict and specific obligations before they can be placed on the market: logging of activity to ensure traceability of results, detailed documentation providing authorities with all the information necessary to assess the system and its purpose for compliance, conformity assessments, fundamental rights impact assessments (FRIAs) and the other obligations set out in Section 3 of the AI Act.
- Transparency risk: This refers to risks arising from a lack of transparency around the use of AI. The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust. For instance, when interacting with AI systems such as chatbots, people should be made aware that they are dealing with a machine so they can take an informed decision.
- Minimal or no risk: The AI Act does not introduce rules for AI that is deemed to pose minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category, which includes applications such as AI-enabled video games and spam filters.
Therefore, it becomes clear that identifying and mitigating risks in AI systems is crucial for compliance with the AI Act. Furthermore, there are specific and practical risks that need to be taken into account by businesses. These include:
- Biases: Because society and the people in it are biased, those biases can be carried into AI systems through the training data and the system’s design, leading to discrimination on grounds such as gender, the under-representation of certain populations and inaccurate results. That’s why it’s very important to monitor the whole lifecycle of the AI system, adopt practices that promote fairness, equality and AI ethics, use representative and accurate training data, build diverse development teams and always keep a human in the loop; one simple, illustrative fairness check is sketched after this list.
- Cybersecurity threats: The digitization of everyday life and business activity creates room for cyber attacks and the exploitation of vulnerable systems, threatening people’s rights and assets. That’s why it’s crucial for businesses to map and understand their security needs and risks, build a clear and holistic cybersecurity strategy, invest time and resources in training and educating their staff, and implement strong cybersecurity solutions.
- Data privacy issues: The GDPR has been a game-changer for privacy and personal data protection, and it applies to AI systems as well. The GDPR, its processes and its fines remain the main route for individuals to pursue their rightful claims in case of a violation by an AI system. That’s why businesses need to keep in mind that they must still fully comply with the GDPR when they use AI systems.
- Environmental harms: The planet is experiencing a climate crisis, so it’s imperative to keep that in mind and develop environment-friendly practices and products. Training algorithms on large data sets and running complex models require vast amounts of energy, contributing to increased carbon emissions. Water consumption is another concern, since many AI applications run on servers in data centers, which generate considerable heat and need large volumes of water for cooling. It could make a real difference if businesses chose data centers and AI providers powered by renewable energy, opted for energy-efficient AI models and frameworks, trained their AI models on less data and simplified their model architectures.
- Intellectual property: Generative AI models are widely used to produce works such as text and images. They are trained on works of art and data available in databases and openly accessible on the Internet, whether protected by copyright or not. Where that data or those works are protected by existing copyright, their use as training data and the subsequent creation of works based on them could be considered copyright infringement. That’s why it’s important for businesses to comply with the laws governing protected works that might be used to train AI models, to be cautious about and aware of the training data they use so as not to expose their own IP or the IP-protected information of others, and to monitor AI model outputs for such exposures.
- Reskilling the staff: Investing in the training of employees and the education of management is crucial for businesses, both so that the company can digitize while its employees gain new skills instead of losing their jobs or becoming outdated, and so that the business remains compliant with the applicable law.
- Accountability, explainability and transparency: Accountability, explainability and transparency have always been the main focus points of EU legislation and guidelines on AI. AI models are often perceived as black boxes: it is still difficult to understand exactly how they work and to fully predict their outcomes, so skepticism around them persists, and there are, of course, many competing interests. To comply with these requirements, businesses need specific policies and mechanisms governing how their AI models operate and how possible issues and violations are handled, need to always make clear who is responsible for the AI model, and need to keep humans in the loop through audits and review procedures.
- Misinformation and manipulation: While AI can make life and everyday tasks easier, it can also be used maliciously, intensifying misinformation and manipulation through deepfakes, fake news and AI hallucinations. To cope with these matters, businesses need to educate users and employees on how to spot misinformation and disinformation, make sure their information is authentic and accurate, use high-quality training data, review and check AI models continuously throughout their whole lifecycle and rely on human oversight.
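To make the bias point above more concrete, the following is a minimal, illustrative Python sketch of one simple check a team could run on a model’s decisions: comparing the rate of favourable outcomes across groups (often called the disparate impact ratio). The group labels, example decisions and the informal 0.8 threshold mentioned in the comments are illustrative assumptions, not requirements of the AI Act; in practice a check like this would be only one small part of a broader bias-monitoring and human-oversight process.

```python
# Illustrative sketch only: compares favourable-outcome rates across groups
# ("disparate impact ratio"). The data, group labels and 0.8 threshold are
# hypothetical examples, not prescribed by the AI Act.
from collections import defaultdict

def disparate_impact_ratio(groups, decisions):
    """Return (min rate / max rate, rates per group) for binary decisions."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += 1 if decision else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    max_rate = max(rates.values())
    ratio = min(rates.values()) / max_rate if max_rate else 1.0
    return ratio, rates

if __name__ == "__main__":
    # Hypothetical screening decisions (1 = favourable outcome).
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    decisions = [1, 1, 1, 0, 1, 0, 0, 0]
    ratio, rates = disparate_impact_ratio(groups, decisions)
    print("Favourable-outcome rates by group:", rates)
    print("Disparate impact ratio:", round(ratio, 2))
    # A ratio well below ~0.8 is a common informal warning sign that the
    # system's outcomes should be reviewed by a human before deployment.
```

The example deliberately uses only the Python standard library to stay self-contained; dedicated fairness toolkits offer far richer metrics, but the underlying idea of routinely measuring outcomes by group and escalating anomalies to human review is the same.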
It’s worth noting that the European Commission has proposed the Digital Package on Simplification, which includes amendments intended to simplify the implementation of the AI Act and boost innovation. The legislative proposal was adopted on the 19th of November 2025, and the European Parliament and the Council of the EU are currently discussing and negotiating the Digital Omnibus on AI. The proposal has been criticised, however, because although it aims to simplify procedures and reduce bureaucracy, especially for small businesses, it also limits fundamental protections, such as those concerning personal data, biometric technologies and user autonomy. The fate of this reform remains to be decided.
In any case, businesses should stay alert and take all of the above into consideration in order to detect and address risks, and to design and use compliant and ethical AI models. Knowing and understanding the modern era and the world of AI is both a tool for compliance and a competitive advantage.