How AI interplays with data privacy regulations in Greece and changes the game

Author: Sophia Antonopoulou – Associate

Artificial Intelligence (AI) has been developing for years, so it comes as no surprise that it is now advancing at an unstoppable pace, opening up new possibilities and revolutionising sectors across the globe. The EU and, of course, Greece are no exception.

However, as AI technologies become increasingly integrated into business operations and public services, they present unique challenges to fundamental human rights, such as data privacy. It is therefore worth examining how AI interacts with data privacy regulations in Greece and how this compares with international trends.

In Greece, the adoption of AI technologies has compelled a re-evaluation of existing data protection frameworks and procedures. The General Data Protection Regulation (GDPR) and Greek Law 4624/2019, which was enacted to specify the provisions of the GDPR, serve as the cornerstone of data protection in the country. However, the distinctive characteristics of AI, such as machine learning and big data analytics, pose broader challenges to traditional data protection models and to fundamental human rights. For example, AI systems often require vast amounts of data, raising concerns about consent, data minimisation and the potential for bias, which can ultimately result in discrimination.

To address these issues in the public sector, Greece enacted Law 4961/2022 on emerging information and communication technologies, the strengthening of digital governance and other provisions. This legislation emphasises transparency in AI applications implemented and used by public authorities. Specifically, under Article 5 of Law 4961/2022, any public sector entity using an AI system is required to conduct an algorithmic impact assessment before the system becomes operational. It should be pointed out that this is a separate process and does not replace the obligation to carry out a Data Protection Impact Assessment (DPIA) under Article 35 of the GDPR, which serves to identify and mitigate the risks associated with the processing of personal data by AI systems. Notably, the provisions of Law 4961/2022 are designed to complement, not override, the rights and obligations established under the GDPR and Greek Law 4624/2019.

At the international level, AI regulation in the United States is fragmented across sectors and levels of government, with federal agencies, state governments and industry-specific rules shaping AI governance. Noteworthy regulatory developments include the Executive Order on Safe, Secure, and Trustworthy AI (October 2023), which directs federal agencies to establish guidelines for AI safety, security and ethical use; the Blueprint for an AI Bill of Rights (2022), which outlines principles such as privacy, protections against algorithmic discrimination and transparency; and the California Privacy Rights Act (CPRA), which regulates AI-driven automated decision-making. It is clear that there is no single comprehensive federal law governing AI in the USA, which aligns with the country's direction of prioritising innovation and technology over the protection of fundamental human rights.

At EU level, the Artificial Intelligence Act (AI Act), officially Regulation (EU) 2024/1689, was published on 12 July 2024 and entered into force in all member states on 1 August 2024, with its provisions becoming applicable in stages. This landmark regulation aims to harmonise AI-related laws across EU member states, including Greece. Influenced by product safety legislation, the AI Act introduces a risk-based approach, classifying AI systems according to their potential impact on fundamental rights.

On the one hand, Article 5 of the AI Act, which became applicable on 2 February 2025, prohibits certain AI practices (in some cases outright and in others only under specific conditions) that violate fundamental human rights such as personal autonomy and free will, equality and non-discrimination, the presumption of innocence and human dignity, or that involve excessive, non-consensual and invasive processing of personal data.

On the other hand, pursuant to Article 27 of the AI Act, prior to deploying a high-risk AI system (with the exception of high-risk systems intended to be used as safety components in critical infrastructure), deployers that are bodies governed by public law or private entities providing public services, as well as deployers of high-risk AI systems used for creditworthiness assessment or for risk evaluation and pricing in life and health insurance, are required to conduct a Fundamental Rights Impact Assessment (FRIA) regarding the impact that the use of such a system may have on fundamental rights. Private entities that neither provide public services nor deploy those specific systems therefore fall outside the scope of Article 27.

Additionally, the AI Act provides that where a DPIA has already been conducted, the FRIA shall complement that assessment as regards the protection of personal data. This should not be interpreted as the DPIA replacing the FRIA. Rather, the DPIA's findings should be incorporated into the FRIA, since the FRIA serves a broader protection and compliance purpose – assessing the impact of AI systems on fundamental human rights and documenting the required compliance – whereas the DPIA focuses on protecting personal data and managing the associated processing risks. When it comes to personal data issues in AI systems, therefore, the GDPR and DPIAs remain the primary tools of protection.

Moreover, the AI Act requires member states to designate national authorities responsible for overseeing compliance with the provisions on high-risk AI systems and FRIAs, as well as a market surveillance authority; these roles may be entrusted to a single umbrella authority, or the relevant responsibilities and powers may be split among several authorities.

In Greece, on 12 November 2024, the Ministry of Digital Governance designated four national authorities, namely the Greek Ombudsman, the Greek National Commission for Human Rights, the Hellenic Data Protection Authority and the Hellenic Authority for Communication Security and Privacy, which are empowered to enforce or supervise compliance with EU obligations protecting citizens' fundamental rights, including the right to non-discrimination, in connection with the use of high-risk AI systems, pursuant to Article 77 of the AI Act. These authorities retain their existing powers, competences and independence, but from 2 August 2026 onwards they will acquire additional powers, such as access to any documentation created or held by an organisation for its compliance with the AI Act, where this is necessary for the effective fulfilment of their mission. As regards market surveillance, member states, including Greece, have until 2 August 2025 to designate their market surveillance authority, so it remains unclear which authority or authorities will assume this role in Greece.

The AI Act is undoubtedly the culmination of the EU's efforts to establish itself as a frontrunner in AI regulation and to enter the field of AI production, in contrast to the USA and China, which have led AI innovation and production up to now. The protection of fundamental human rights is one of the EU's core objectives, and this is reflected in the AI Act, which provides a number of safeguards. Yet even though it may appear to be a strict regulation, at its core it is a product safety regulation: as long as AI providers and deployers comply with the AI Act's checklists and guidelines, innovation can proceed largely unimpeded.

In conclusion, while there is no doubt that incorporating AI into different sectors brings major benefits and opportunities, it also presents challenges to fundamental human rights, such as data privacy, and gives rise to specific obligations and requirements under the AI Act. Greece aims to be at the forefront of these developments and to remain aligned with European Union goals and regulations. Furthermore, Greece has been selected by the European High Performance Computing Joint Undertaking (EuroHPC) as one of the member states that will host the first seven AI Factories, with the PHAROS AI Factory to be established in Greece. AI Factories aim to foster innovation, collaboration and development in the field of AI, and they are expected to begin operations in April 2025.

The AI Act marks a new era, and it is important for entities operating in Greece to stay informed about these regulatory, game-changing developments in order to ensure compliance and gain the trust of consumers, clients and partners. As AI continues to evolve, so will the legal frameworks governing its use, making it essential for stakeholders to actively participate in ongoing policy discussions.