27.03.2026

The President of the Personal Data Protection Office has notified the prosecutor's office regarding the modification of a teacher's picture

On social media, under the teacher's own post, someone published a photo of her that had been modified so that she appeared naked in it. In a notice to the Warsaw-Śródmieście Regional Prosecutor's Office, the President of the Personal Data Protection Office stated that, in his opinion, there had been a breach of Article 107 of the Act of 10 May 2018 on the Protection of Personal Data.

The teacher whose image was manipulated wrote about the incident: “My photo was edited (undressed) using AI by some creepy guy and then posted online. I feel awful, I feel violated, and even telling myself that it wasn’t my body doesn’t help much.”

This is not the first time that the President of the Personal Data Protection Office, Mirosław Wróblewski, has intervened in a case involving so-called deepfakes. Previously, the President of the Personal Data Protection Office has, among other things, defended the rights of an elementary school student whose photo was modified by her classmates using AI tools and then circulated among their group (The President of the Personal Data Protection Office intervenes with the Prosecutor General regarding the generation of a nude photo of a student).

As provided in Article 107(1) of the Act of 10 May 2018 on the Protection of Personal Data, anyone who processes personal data, even though such processing is not permitted or they are not authorised to process it, is subject to a fine, restriction of liberty, or imprisonment for up to two years.

Such an act constitutes an offense prosecuted ex officio by public indictment. In this context, unlawful data processing means processing without a legal basis, that is, where none of the grounds for lawful processing specified in the GDPR (Articles 6 and 9) applies. In particular, the data subject’s consent to the processing of their personal data for one or more specific purposes has not been obtained.

Generating an image of the victim in the context in which the perpetrator did so is socially harmful and warrants a response from the relevant law enforcement authorities. The perpetrator’s actions not only violate the victim’s privacy but also pose a risk of infringing upon her fundamental interests. The reactions of internet users to the victim’s post, in which she drew attention to her problem, are also concerning.

It is also important to note the very nature of deepfakes and their exceptional harmfulness. To date, the Polish legal system has not established a legal definition describing this phenomenon, which necessitates reference to EU legislation. The definition of deepfakes that should be used is contained in Article 3(60) of the Artificial Intelligence Act (AI Act), according to which they are: “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful”.

In this case, the victim’s image was manipulated without her consent using publicly available photos and artificial intelligence tools. This phenomenon constitutes a distinct category of deepfakes, which has been described in the literature, particularly in the publication “Deepfakes and Sexual Abuse—The Law Facing the Challenges of Synthetic Media” (M. Łabuz, Dr. M. Małecki, Attorney Dr. K. Mania, Digital Democracy Observatory 2025), as non-consensual synthetic intimate imagery (NSII).

As the authors of the publication point out, “NSII is the predominant form of deepfake use in video—accounting for over 90% of such content. In nearly 100% of cases, NSII in the form of deepfakes targets women, leading to the victimisation of thousands of them worldwide, as well as contributing to the reinforcement of gender stereotypes and the objectification of women, exerting an impact on society that is difficult to quantify.”

DKN.5101.170.2026