
When does a data protection impact assessment need to be carried out?
Every controller must carry out a data protection impact assessment when its intended processing operations are likely to result in a high risk to the rights and freedoms of natural persons.
We present guidance on how to recognise data processing situations that may lead to infringements of individuals' rights, and we provide examples of data operations that require a data protection impact assessment. We also point to guidelines from the President of the Personal Data Protection Office and the European Data Protection Board that help to understand the issue. In addition, we address impact assessments in the context of artificial intelligence (AI) systems.
Introduction
The GDPR does not require a data protection impact assessment for every processing operation that the controller plans. The assessment is mandatory where processing, by its nature, scope, context and purposes, in particular where it uses new technologies, is likely to result in a high risk to the rights and freedoms of natural persons (Article 35(1) GDPR). The controller should analyse already at the design stage of a given processing operation whether it is subject to this obligation, so that the data protection impact assessment is undertaken in good time.
Remember:
In practice, before starting to process personal data, the controller must always carry out a preliminary assessment of whether the planned operations are likely to present a high risk to the rights and freedoms of natural persons.
Article 35(3) of the GDPR gives examples of processing operations that are likely to present a high risk to the rights and freedoms of natural persons. It states that a data protection impact assessment is required in particular for:
a. systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person;
b. processing on a large scale of special categories of data referred to in Article 9(1), or of personal data relating to criminal convictions and offences referred to in Article 10; or
c. a systematic monitoring of a publicly accessible area on a large scale.
Announcement of the President of the Personal Data Protection Office
The examples given do not form a closed list, so other processing operations may also pose a high risk to the rights and freedoms of individuals. More concrete guidance is provided by the lists of types of processing operations subject to the requirement to carry out a data protection impact assessment which, pursuant to Article 35(4) of the GDPR, are established and made public by supervisory authorities. Such a list is annexed to the Announcement of the President of the Personal Data Protection Office of 17 June 2019 on the list of types of personal data processing operations requiring an assessment of the effects of processing for their protection. The Announcement was published in Monitor Polski on 8 July 2019 and is available at: http://monitorpolski.gov.pl/MP/2019/666.
The list complements and gives concrete form to the Article 29 Working Party's ‘Guidelines on Data Protection Impact Assessment and determining whether processing is "likely to result in a high risk" for the purposes of Regulation 2016/679’ (WP 248), endorsed by the European Data Protection Board (EDPB) on 25 May 2018. The guidelines introduce nine criteria[i] to be taken into account when assessing whether a data protection impact assessment should be carried out, and assume that, as a rule, a processing operation meeting two of those criteria will require such an assessment.
The list annexed to the Announcement of the President of the Personal Data Protection Office contains 12 criteria describing types of processing operations, together with examples of operations that may involve a high risk to rights and freedoms and examples of the areas in which such operations occur. These categories of operations are:
1. evaluation or assessment, including profiling and prediction (behavioural analysis) for purposes that cause adverse legal, physical, financial or other inconvenience to individuals;
2. automated decision-making with legal, financial or similar material effects;
3. large-scale systematic monitoring of publicly accessible places using elements that recognise the characteristics or properties of objects present in the monitored space. This group does not include video surveillance systems in which images are recorded and used only where it is necessary to analyse incidents of infringement;
4. processing of special categories of personal data and data relating to criminal convictions and offences (sensitive data within the meaning of the Article 29 Working Party's opinion);
5. processing of biometric data for the sole purpose of identifying an individual or for access control purposes;
6. processing of genetic data;
7. data processed on a large scale, where the concept of large scale refers to:
- the number of individuals whose data are processed,
- the extent of the processing,
- the duration of data retention, and
- the geographical scope of the processing;
8. making comparisons, assessments or inferences based on analysis of data obtained from different sources;
9. processing of data concerning individuals whose evaluation, or the services provided to them, depends on entities or persons with supervisory or assessment powers;
10. innovative use or application of technological or organisational solutions;
11. when the processing itself prevents data subjects from exercising a right or benefiting from a service or contract;
12. processing of location data.
How to use the list of operations?
As a general rule, processing that meets at least two of the listed types/categories of operations will require a data protection impact assessment. The more categories a given processing operation meets, the more likely it is to present a high risk to the rights and freedoms of data subjects and, consequently, the more likely a data protection impact assessment is required. However, the controller may conclude that, due to its nature, scope, context and purposes, a processing operation meeting only one of the listed criteria requires a data protection impact assessment.
Example:
The implementation of a system for monitoring employees' working time and the flow of information in the tools they use (e-mail, internet) will require a data protection impact assessment because it meets criteria 3 and 9 of the list.
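To make the general rule concrete, the following short Python sketch applies the two-criteria screening to a set of matched list items. It is a hypothetical illustration only: the criterion labels are shortened paraphrases of the twelve items above, and the helper function, its threshold handling and its output messages are our own, not an official tool; it does not replace a case-by-case legal analysis.

    # Illustrative sketch of the preliminary DPIA screening rule described above.
    # The labels are shortened paraphrases of the UODO list; the function name
    # and threshold handling are hypothetical assumptions for illustration.
    UODO_CRITERIA = {
        1: "evaluation or assessment, including profiling and prediction",
        2: "automated decision-making with significant effects",
        3: "large-scale systematic monitoring of publicly accessible places",
        4: "special categories of data or criminal convictions data",
        5: "biometric data for identification or access control",
        6: "genetic data",
        7: "data processed on a large scale",
        8: "comparisons, assessments or inferences from different sources",
        9: "data of persons dependent on supervisory or assessing entities",
        10: "innovative technological or organisational solutions",
        11: "processing preventing exercise of a right or use of a service",
        12: "location data",
    }

    def dpia_screening(matched_criteria):
        """Apply the general rule: two or more matched criteria indicate a DPIA
        is required; a single criterion may still require one depending on the
        nature, scope, context and purposes of the processing."""
        if len(matched_criteria) >= 2:
            return "DPIA required (general rule: at least two criteria met)"
        if len(matched_criteria) == 1:
            return "DPIA possibly required: assess nature, scope, context, purposes"
        return "No listed criterion met: document the preliminary assessment"

    # The working-time and e-mail/internet monitoring example meets criteria 3 and 9:
    print(dpia_screening({3, 9}))  # -> DPIA required (general rule: ...)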
The list is not intended to be exhaustive. Controllers may also draw on further guidelines and opinions of the EDPB, as well as decisions, guidance and clarifications issued by the President of the Personal Data Protection Office.
Examples:
The EDPB explained that ‘given the typical purposes of video surveillance (protection of persons and property, detection, prevention and control of crime, collection of evidence and biometric identification of suspects), it is reasonable to assume that in many cases involving video surveillance, a data protection impact assessment will be necessary.’[ii]
The EDPB indicated that it is very likely that virtual voice assistant (VVA) services fall within the categories and conditions identified as requiring a data protection impact assessment[iii].
The obligation to carry out a data protection impact assessment may also arise directly from separate legislation.
Example:
Article 8 of Directive (EU) 2024/2831 provides that the processing of personal data by a digital labour platform by means of automated monitoring systems or automated decision-making systems is a type of processing likely to result in a high risk to the rights and freedoms of natural persons within the meaning of Article 35(1) of the GDPR[iv].
Doubts about the obligation to carry out a data protection impact assessment
The EDPB recommends that, where there is doubt as to whether a data protection impact assessment is mandatory, it should be carried out[v]. The assessment is an instrument that makes it easier for controllers to comply with data protection legislation. An effective data protection impact assessment procedure not only helps to minimise risks but also builds the trust of data subjects (for example, it may serve to assess measures facilitating the exercise of data subjects' rights or to verify the transparency of the processing). It may therefore be beneficial for the controller to carry out a data protection impact assessment even when it is not mandatory (a voluntary data protection impact assessment).
Remember:
Whether or not the controller is obliged to carry out a data protection impact assessment, it remains responsible for the proper assessment and management of the risks involved in its processing of personal data.
Who is to carry out the data protection impact assessment?
The EDPB noted that a data protection impact assessment can be useful for assessing the impact of a technological product, such as hardware or software, which may be used in different ways by controllers to process personal data. This does not relieve the controller implementing the product of the obligation to carry out its own data protection impact assessment, but the information obtained from the supplier may make that assessment easier. The EDPB emphasised that in such a situation, ‘any product provider or processor should share useful information without revealing secrets and without creating security risks by exposing vulnerabilities’[vi].
Remember:
The obligation to carry out a data protection impact assessment rests with the controller. The controller cannot delegate this obligation to the Data Protection Officer, but should consult the Officer in this regard.
Consultation of data subjects
Article 35(9) GDPR requires the controller, where appropriate, to seek the views of data subjects or their representatives on the intended processing, without prejudice to the protection of commercial or public interests or the security of processing operations. Implementing this obligation may be particularly relevant in the development and implementation of artificial intelligence (AI) systems, where the nature of the risks to the rights and freedoms of individuals may require the planned processing operations to be considered in a broader societal context.
AI and the risk of infringement of individuals' rights or freedoms
With the rapid development of technologies such as artificial intelligence (AI) and their evolving applications, data protection impact assessments are becoming a key element of the data protection regime.
Remember:
The Artificial Intelligence Act[vii] defines an AI system as ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1)).
Recital 97 of the AI Act explains that although AI models are essential components of AI systems, they do not constitute AI systems on their own: they require further components, such as a user interface, to become AI systems. Consequently, AI models are usually integrated into, and form part of, AI systems.
Personal data may be processed at various stages of the life cycle (development and implementation) of AI systems.
Example:
The EDPB addressed the question of whether a final AI model trained on personal data is anonymous. It noted that some AI models are deliberately designed to provide, or in some way allow access to, information about the individuals whose personal data were used to train the model. Such AI models inherently (and usually necessarily) contain information about an identified or identifiable natural person and therefore involve the processing of personal data; they cannot be considered anonymous. In the EDPB's view, even where an AI model is not intentionally designed to generate such information from training data, the model is not anonymous if information concerning identified or identifiable natural persons whose personal data were used for training can be obtained from it by means that are reasonably likely to be used. Consequently, AI models trained on personal data cannot in all cases be considered anonymous; whether a given AI model is anonymous must be assessed on a case-by-case basis[viii].
Successive phases of the life cycle of AI systems may involve the processing of personal data for a variety of purposes and by a variety of means, which may give rise to different risks for data subjects. Consequently, a data protection impact assessment may need to be conducted at various stages of AI system development or implementation. In such situations, controllers should appropriately assess whether the planned data processing operations meet the criteria listed in Article 35(3) of the GDPR or the list annexed to the Communication of the President of the Personal Data Protection Office.
In the development phase of artificial intelligence systems, an important element is the creation of a training dataset and its further use. If this dataset is to contain personal data, the following criteria from the list are particularly worth considering when assessing whether a data protection impact assessment should be carried out:
- processing of special categories of personal data, including analysis of ‘behavioural data’, and data relating to criminal convictions and offences (4);
- processing of biometric data for the sole purpose of identifying an individual or for access control purposes (5);
- processing of genetic data (6);
- data processed on a large scale (7);
- making comparisons, assessments or inferences based on analysis of data obtained from different sources (8);
- innovative use or application of technological or organisational solutions (10).
Remember:
A data protection impact assessment should also be carried out where the creation of a training dataset is likely to significantly affect individuals, in particular where the processing may lead to discrimination, a breach of data security, or the misuse of data.
Entities using AI systems, when assessing the need for a data protection impact assessment, should in particular take into account the following criteria:
- evaluation or assessment, including profiling and prediction (behavioural analysis), for purposes that cause adverse legal, physical, financial or other inconvenience to individuals (1);
- automated decision-making with legal, financial or similarly significant effects (2);
- large-scale systematic monitoring of publicly accessible places (3);
- large-scale data processing (7);
- making comparisons, assessments or inferences based on analysis of data obtained from different sources (8);
- innovative use or application of technological or organisational solutions (10);
- processing that itself prevents data subjects from exercising a right or benefiting from a service or contract (11).
Examples:
Creditworthiness assessment using artificial intelligence algorithms covered by a secrecy obligation will require a data protection impact assessment, as it meets criteria 1, 2, 4, 8, 9 and 11 of the list.
Systems that profile customers of transport services and apply dynamic pricing based on the profile will require a data protection impact assessment, as they meet criteria 1, 2, 10, 11 and 12 of the list.
Monitoring systems used for traffic management that allow detailed surveillance of drivers and their driving behaviour, in particular systems allowing automatic identification of vehicles, will require a data protection impact assessment, as they meet criteria 3 and 10 of the list.
A teaching hospital plans to implement an artificial intelligence tool to support diagnostic imaging, which recognises lesions and generates descriptions of examinations. The planned processing operations will require a data protection impact assessment under criteria 4, 7 and 10 of the list.
As a general rule, it can be assumed that the processing of data for the purpose of developing and implementing high-risk artificial intelligence systems within the meaning of the AI Act is likely to result in a high risk to the rights and freedoms of natural persons and therefore requires a data protection impact assessment.
When carrying out a data protection impact assessment, an entity using a high-risk artificial intelligence system will be able to rely on the information provided by the provider of that system in accordance with Article 13 of the Artificial Intelligence Act[ix].
Additional information
Data controllers should regularly monitor new guidance and interpretations from supervisory authorities, including the EDPB, and adapt their activities to the requirements of the applicable legislation. The EDPB is working to ensure that the provisions of the GDPR and the Artificial Intelligence Act which affect, among other things, the obligation to carry out a data protection impact assessment are applied in a consistent manner.
In the context of data protection impact assessments, it is also worth watching the recording of the conference ‘Risk Assessment and Data Protection’, available at: https://uodo.gov.pl/pl/138/3507.
[i] These are:
1. evaluation or scoring;
2. automated decision-making with legal or similarly significant effect;
3. systematic monitoring;
4. sensitive data or data of a highly personal nature;
5. data processed on a large scale;
6. matching or combining of data sets;
7. data concerning vulnerable data subjects;
8. innovative use or application of new technological or organisational solutions;
9. processing that in itself ‘prevents data subjects from exercising a right or using a service or a contract’.
[ii] Guidelines 3/2019 on processing of personal data through video devices, version 2.0, adopted on 29 January 2020, p. 33, https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-32019-processing-personal-data-through-video_en
[iii] This includes considering whether the device may be observing, monitoring or controlling data subjects, or systematically monitoring on a large scale within the meaning of Article 35(3)(c), the use of ‘new technology’, or the processing of sensitive data and data concerning vulnerable data subjects. Guidelines 02/2021 on virtual voice assistants, version 2.0, adopted on 7 July 2021, p. 32, https://www.edpb.europa.eu/system/files/2021-07/edpb_guidelines_202102_on_vva_v2.0_adopted_en.pdf
[iv] Directive (EU) 2024/2831 of the European Parliament and of the Council of 23 October 2024 on improving working conditions in platform work, OJ L, 2024/2831, 11.11.2024.
[v] Guidelines on Data Protection Impact Assessment and determining whether processing is ‘likely to result in a high risk’ for the purposes of Regulation 2016/679 (WP 248), p. 9.
[vi] Ibidem.
[vii] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L, 2024/1689, 12.7.2024.
[viii] Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models, adopted on 17 December 2024, pp. 11-14, https://www.edpb.europa.eu/system/files/2024-12/edpb_opinion_202428_ai-models_en.pdf
[ix] Article 13 of the Artificial Intelligence Act, on transparency and the provision of information to deployers, will apply directly from 2 August 2026.