Artificial intelligence (AI) refers to systems designed to simulate human intelligence, enabling them to learn, solve problems, make decisions, and improve over time. Whether as virtual assistants or generative AI tools, these technologies are becoming integral to modern life. However, as AI’s reach expands, so do concerns about privacy and security. The capacity of AI to handle and analyze vast quantities of data raises critical questions about how personal information is used, stored, and protected within these systems.
More Data, More Privacy Risks
AI systems thrive on vast amounts of data, and this dependency adds a new layer of complexity to long-standing privacy concerns, especially around the collection and processing of personal information. While issues around transparency, security, and unauthorized data collection are not new, the scale at which AI operates, enabled by enormous datasets and powerful computational resources, presents far more significant challenges.
Deleting Personal Data
Under the General Data Protection Regulation (GDPR) and various other privacy laws, such as the California Consumer Privacy Act (CCPA), organizations using AI models must comply with the right to erasure when individuals request the deletion of their personal data. This poses unique challenges for AI models, particularly large language models (LLMs), where personal information has been used for training or embedded in complex datasets. Once data is incorporated into an AI model, it becomes deeply embedded, making complete deletion nearly impossible. Retraining models with updated datasets can reduce the influence of older data, but achieving full compliance with deletion requests remains a major concern. Organizations with robust data governance practices can mitigate this risk by implementing structured processes that govern where and how data is used. These systems help organizations locate, manage, and securely delete personal data while minimizing the impact on AI model integrity.
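As a rough illustration of what such a governance process might look like in practice, the Python sketch below maintains a hypothetical data inventory that maps individuals to the datasets and model versions their personal data has flowed into, so that an erasure request can at least identify what needs to be purged or flagged for retraining. All names are invented for illustration; this is a minimal sketch under simplifying assumptions, not a complete compliance solution.

```python
from dataclasses import dataclass, field


@dataclass
class DataInventory:
    """Hypothetical inventory mapping data subjects to the datasets
    and model versions their personal data has flowed into."""

    # subject_id -> datasets containing that subject's data
    subject_datasets: dict[str, set[str]] = field(default_factory=dict)
    # dataset name -> model versions trained on that dataset
    dataset_models: dict[str, set[str]] = field(default_factory=dict)

    def record_use(self, subject_id: str, dataset: str, model_version: str) -> None:
        """Log that a subject's data in `dataset` was used to train `model_version`."""
        self.subject_datasets.setdefault(subject_id, set()).add(dataset)
        self.dataset_models.setdefault(dataset, set()).add(model_version)

    def handle_erasure_request(self, subject_id: str) -> dict[str, set[str]]:
        """Return the datasets to purge and the model versions to review
        when a right-to-erasure request is received."""
        datasets = self.subject_datasets.pop(subject_id, set())
        affected_models: set[str] = set()
        for dataset in datasets:
            affected_models |= self.dataset_models.get(dataset, set())
        return {"datasets_to_purge": datasets, "models_to_review": affected_models}


# Example: locate everything touched by one individual's data
inventory = DataInventory()
inventory.record_use("subject-123", "crm_export_2024", "llm-v1")
inventory.record_use("subject-123", "support_tickets", "llm-v2")
print(inventory.handle_erasure_request("subject-123"))
```

In practice, an inventory like this would sit alongside retention schedules and retraining pipelines, so that deletion requests feed directly into the next model update rather than being handled ad hoc.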
Transparency in Decision-Making
AI tools are increasingly used to profile and make automated decisions about people, such as for customer identity verification or recruitment purposes. Several privacy laws around the world require transparency around such processes, and the GDPR provides one of the most comprehensive regulations for this type of decision-making. Under Article 22(1), individuals have the right not to be subject to decisions made solely through automated processing where such decisions produce legal effects or similarly significantly affect them. Under the GDPR, data controllers are obligated to inform individuals about automated decision-making, explain the underlying logic of these systems, and describe the potential outcomes. However, providing this transparency becomes challenging with machine learning algorithms, as their decision-making may be too complex to explain in simple terms or may change as the models evolve over time. Note that this right does not apply in all cases: for example, where the decision is necessary for entering into or performing a contract between the individual and the data controller, or where the individual has given explicit consent.
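For simple, score-based systems, one way to support these transparency obligations is to record the factors behind each automated decision at the time it is made. The toy Python sketch below does this for a hypothetical recruitment screen; the feature names, weights, and threshold are invented for illustration, and the approach does not translate directly to complex or self-learning models, which is precisely the difficulty noted above.

```python
# Hypothetical weights for a simple, score-based screening model.
WEIGHTS = {"years_experience": 2.0, "certification": 5.0, "failed_checks": -10.0}
THRESHOLD = 8.0


def automated_decision(applicant: dict) -> dict:
    """Score an applicant and record the logic behind the outcome."""
    contributions = {
        feature: WEIGHTS[feature] * applicant.get(feature, 0) for feature in WEIGHTS
    }
    score = sum(contributions.values())
    return {
        "decision": "shortlist" if score >= THRESHOLD else "reject",
        "score": score,
        # Stored so the controller can explain the underlying logic on request
        "contributions": contributions,
        # Supports the individual's ability to contest the decision
        "human_review_available": True,
    }


print(automated_decision({"years_experience": 4, "certification": 1, "failed_checks": 0}))
```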
Repurposing of Personal Data
AI raises concerns about the repurposing of personal data. This occurs when data collected for one purpose is later used for an entirely different, often unforeseen, purpose. Such misuse of data can lead to violations of data protection laws. Although regulations such as the GDPR provide some protection, they may not fully account for the complexities of AI systems. Organizations must ensure that they have a lawful basis for processing personal data, and that the data is used in alignment with its original purpose or with the informed consent of individuals.
Assessing and Mitigating Increased Risk
AI systems often involve increased risk to data subjects and their personal information, particularly since these systems may involve high-risk activities such as:
- Location tracking
- Behavior monitoring
- Systematic surveillance of publicly accessible spaces
- Processing of sensitive data
- Processing of minors' data
The GDPR and various other privacy laws require organizations to conduct data protection impact assessments (DPIAs) when processing is likely to result in a high risk to individuals' rights. The purpose of a DPIA is to evaluate potential impacts and develop strategies to mitigate any identified risks. In particular, a DPIA will be required under GDPR Article 35(3) if you intend to use AI that involves:
- systematic and extensive evaluation of personal aspects based on automated processing, which produces legal or similarly significant effects for individuals;
- large-scale processing of special categories of personal data; or
- systematic monitoring of publicly-accessible areas on a large scale.
This is not an exhaustive list, however: AI can also involve other types of processing that are likely to result in a high risk to individuals. Therefore, whenever you embark on a project involving an AI system, carefully consider whether a DPIA is required; if you conclude that one is not needed, document your reasons for reaching that conclusion.
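To make that screening step concrete, the sketch below encodes the Article 35(3) triggers listed above as a simple checklist. The function and parameter names are assumptions for illustration, and the output is a starting point for documenting your reasoning, not a substitute for legal analysis.

```python
def dpia_likely_required(
    automated_evaluation_with_significant_effects: bool,
    large_scale_special_category_data: bool,
    large_scale_public_monitoring: bool,
    other_high_risk_indicators: list[str] | None = None,
) -> tuple[bool, list[str]]:
    """Return whether a DPIA is likely required and the reasons why."""
    reasons = []
    if automated_evaluation_with_significant_effects:
        reasons.append("Systematic and extensive automated evaluation (Art. 35(3)(a))")
    if large_scale_special_category_data:
        reasons.append("Large-scale processing of special category data (Art. 35(3)(b))")
    if large_scale_public_monitoring:
        reasons.append("Large-scale monitoring of publicly accessible areas (Art. 35(3)(c))")
    # Article 35(3) is not exhaustive: record any other indicators,
    # e.g. location tracking, behavior monitoring, or processing minors' data.
    reasons.extend(other_high_risk_indicators or [])
    return bool(reasons), reasons


# Example: an AI recruitment screening tool
required, reasons = dpia_likely_required(
    automated_evaluation_with_significant_effects=True,
    large_scale_special_category_data=False,
    large_scale_public_monitoring=False,
    other_high_risk_indicators=["Behavior monitoring of applicants"],
)
print(required, reasons)  # Document the outcome either way
```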
AI Security and Data Storage Risks
AI systems face unique security challenges, including vulnerabilities to cyberattacks, model manipulation, and data breaches, which can compromise personal data. Securing AI systems is critical for safeguarding individuals’ privacy. Beyond traditional IT security measures, organizations must adopt AI-specific safeguards, such as securing training datasets, monitoring model integrity, and implementing access controls tailored to AI systems. Regularly updating systems to address emerging risks, conducting thorough security audits, and incorporating encryption and anonymization techniques are also essential to mitigating risk. By integrating these measures, along with established IT security practices, organizations can better protect personal information and ensure the resilience of their AI systems.
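As one concrete example of an AI-specific safeguard, the sketch below pseudonymizes direct identifiers with a keyed hash before records enter a hypothetical training pipeline. The field names and key handling are assumptions for illustration; in practice the key would live in a secrets manager with strict access controls, and pseudonymization would be combined with the broader security measures described above.

```python
import hashlib
import hmac

# Assumption: in production this key comes from a managed secrets store.
SECRET_KEY = b"store-me-in-a-secrets-manager"


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


def prepare_training_record(record: dict) -> dict:
    """Tokenize direct identifiers before the record is used for training."""
    safe = dict(record)
    for field in ("email", "name"):  # placeholder field names for this example
        if field in safe:
            safe[field] = pseudonymize(safe[field])
    return safe


print(prepare_training_record({"email": "jane@example.com", "name": "Jane", "ticket_text": "..."}))
```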
AI and the Deepfake Threat
Deepfakes are realistic but fabricated images, videos, or audio clips that pose significant concerns in the field of AI. These digital creations can convincingly impersonate individuals, often with startling accuracy, leading to serious risks such as deception, identity theft, or reputational damage. While the EU AI Act and China’s Provisions on the Administration of Deep Synthesis Internet Information Services address deepfakes, legislation aimed at preventing the misuse of deepfakes is still lacking in most countries.
The Rise of LLMs
As companies develop faster, smarter, and more cost-effective large language models (LLMs), the volume of personal data collected and processed by these models continues to grow exponentially. This expansion raises critical concerns about compliance with data privacy regulations across jurisdictions. A notable example is the series of investigations into DeepSeek, a Chinese AI company, for alleged violations of data privacy laws. Italy's data protection authority blocked DeepSeek's AI service due to insufficient transparency regarding how individuals' personal data is collected, stored, and processed. The Dutch data protection authority has opened an investigation into DeepSeek's data collection practices, and the Irish Data Protection Commission has written to DeepSeek to request more information about its data processing. These regulatory actions highlight the growing global scrutiny of AI companies' data practices and the need for robust compliance measures.
Do Existing Laws Address AI Privacy Risks Effectively?
Existing privacy laws, including the GDPR and various U.S. state privacy laws, contain provisions that address AI data processing to some extent, but they may not fully address emerging risks. The rapid evolution of AI technologies calls for more targeted legal frameworks. Emerging regulations, such as the EU AI Act, aim to address issues like algorithmic transparency, data governance, and accountability. These frameworks will be critical in setting clear standards and addressing the unique privacy challenges posed by AI.
Preparing Your Organization for the Future
As AI continues to transform industries, organizations must balance innovation with privacy protection. The vast datasets required by AI systems challenge individuals’ ability to understand or control the use of their personal information. Staying informed about regulatory developments, ensuring governance and accountability, and addressing potential abuses are essential. The risk of bias, as well as the ability to generate deceptive or manipulative content, also underscores the need for ethical implementation of these systems.
Navigating the legal landscape around AI and emerging technologies doesn’t have to be a challenge. As your trusted privacy compliance partner, VeraSafe is here to help your organization navigate complex regulations such as the GDPR and EU AI Act. From conducting DPIAs to implementing robust data governance practices, we provide tailored solutions to support the responsible implementation of your AI systems. Schedule a free consultation to get started.