Embrace Data Security and AI: Overcome Reluctance!

The potential effects of the AI boom on data privacy remain uncertain. While Meta's AI training has met with hesitation from supervisory bodies, the Higher Regional Court of Cologne endorsed the company's efforts, at least temporarily. However, law professor Paulina Jo Pesch highlights flaws in the...


Meta, the tech giant behind Instagram, Facebook, WhatsApp, and AI glasses, is continuing its AI training in Europe despite growing concerns over data protection. The Cologne Higher Regional Court (OLG) has ruled against the Consumer Center North Rhine-Westphalia, which sought to prevent the start of the training.

The court's decision follows Meta's argument that it has a legitimate interest in using user data for AI training. However, the court did not address the Consumer Center's objection that not all data subjects can opt out, particularly non-users whose personal data is contained in Facebook and Instagram posts.

Meta informed users in June 2024 about an update to its privacy policy, revealing its plan to train AI models with user data. The company uses publicly available user-generated content for training, relying on an opt-out framework that is now facing legal and regulatory scrutiny under new EU AI rules.

The Irish data protection authority has advised Meta but has not prohibited the training, instead observing its effects. The Hamburg data protection officer initially announced proceedings against Meta but revised his view after gaining insight into Meta's training, even while expressing significant data protection concerns.

Meta's AI training involves not only text data but also images, videos, and audio files. The opt-out option for users to object to their data being used was difficult to find.

Prof. Dr. Paulina Jo Pesch, a Junior Professor for Civil Law, Law of Digitalization, Data Protection, and Artificial Intelligence, criticizes the approach of courts and supervisory authorities not enforcing AI compliance requirements. She coordinates the interdisciplinary research project SMARD-GOV, funded by the Federal Ministry of Research, Technology, and Space (BMFTR), which explores data protection aspects of large language models.

European data protection authorities are aware of potential violations but are initially allowing Meta to continue. This "wait and see" approach to AI has been criticized by Prof. Dr. Pesch and others, who argue that stricter enforcement is necessary to protect the privacy of individuals.

Meta makes its models available to others for use, such as researchers and companies for AI services or products. This raises additional concerns about the use and misuse of AI models trained with personal data.

As the use of AI continues to grow, the debate over data protection and privacy will undoubtedly continue. It is crucial that companies like Meta comply with data protection regulations and that courts and supervisory authorities enforce these requirements to protect the rights of individuals.
