# Will Trespassers Be Prosecuted? A Review of Privacy Rights in the Age of AI
**Event date:** 02/09/2025
**Published on:** 02/09/2025

### Date
07/04/2025

### Author
**[Shefali Mehra](https://conference.sciencespo.fr/structure/shefali-mehra_fK7JbnnWeWpK14hTgww3)** 


## Standfirst
**The article critically examines the evolving landscape of privacy rights amid the rise of digital technologies such as Artificial Intelligence (AI). The issue is framed not merely as regulatory or legal, but as civilizational and philosophical. AI technologies blur boundaries between public and private, individual and system, intention and inference. Consent, once a cornerstone of data protection, is now routinely undermined by dark patterns, datafication, and inference-based profiling.**

## Main text
Historically, the idea of privacy emerged through different trajectories. Ancient Greek philosophy, notably Aristotle, distinguished between the oikos (household/private) and polis (public). Roman legal innovations offered mechanisms to redress personal intrusions, and this legacy continues in the “home as castle” principle.

In the modern era, the landmark 1890 essay by Warren and Brandeis redefined privacy as “the right to be let alone.” Post-WWII international instruments such as the UDHR and the ICCPR positioned privacy as a universal human right. However, these definitions struggle to contain the vast implications of AI. Privacy is not a static right but a moving target shaped by social, technical, and political forces. The proliferation of AI exacerbates existing tensions: between state surveillance and autonomy, between economic power and personal control, and between individuals and systemic inference engines. The stakes are further heightened in the Global South, where infrastructural and literacy divides render privacy notices illegible and consent mechanisms coercive.

Methodology
-----------

The investigation adopts a multidisciplinary method across four dimensions:

1. Philosophical analysis: Diverse intellectual traditions are used to examine what privacy means and why it matters. These include liberal theories of autonomy (Locke), utilitarianism’s cost-benefit logic (Bentham), Foucauldian critiques of surveillance and internalized discipline, and Marxist accounts of privacy as collective resistance to commodification. Special attention is given to Leonhard Menges’ “Deep Self View,” which links privacy to the moral integrity of a person’s stable values.

2. Legal analysis: This dimension analyzes privacy laws across jurisdictions, with the GDPR as the main reference point, especially for its emphasis on consent, purpose limitation, and data minimization. It also draws on international norms.

3. Technical analysis: This dimension maps AI-specific risks, including the erosion of purpose limitation, the expansion of secondary data uses, and the opacity of algorithmic decision-making. It highlights how AI challenges the legal distinction between personal and non-personal data through inference, and how Privacy-Enhancing Technologies (PETs) such as differential privacy and federated learning aim to mitigate harms (a minimal differential-privacy sketch follows this list).

4. Empirical user study: A small-scale user study evaluated engagement with two privacy notices: a standard version (Version A) and an enhanced version (Version B). Participants assessed clarity, opt-out visibility, and trust. The sample, based in India, included users familiar with privacy norms but fatigued by digital consent experiences. Overall, the method triangulates normative critique, comparative legal research, technical risk mapping, and empirical observation to arrive at a holistic understanding of privacy rights in AI contexts.
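The differential privacy cited in the technical analysis can be made concrete in a few lines of code. Below is a minimal, illustrative sketch of the Laplace mechanism applied to a counting query; the function name, parameters, and numbers are hypothetical assumptions for illustration, not material from the article.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release `true_value` with epsilon-differential privacy by adding
    Laplace noise scaled to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical counting query: how many users clicked "decline"?
# A count has sensitivity 1: one person changes it by at most 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.1f}")
```

Smaller values of epsilon inject more noise and therefore give stronger privacy at the cost of accuracy; this trade-off is what allows aggregate statistics to be published without exposing any individual record.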

Results
-------

Across all dimensions, a common conclusion emerges: consent-based privacy frameworks are insufficient for the opaque, inferential, and behavioral nature of AI data processing:

1. Philosophical analysis reveals that privacy cannot be reduced to data ownership or control.

2. Legally, consent-based models are poorly equipped to regulate inferred or secondary uses of data.

3. Technically, AI undermines core data protection principles. Purpose limitation is diluted when AI repurposes data across domains. AI systems generate high-stakes inferences from seemingly innocuous data, eroding user control even in anonymized environments.

4. Empirically, the user study confirms the persistence of the privacy paradox. Despite familiarity with privacy laws, users often skip reading notices due to fatigue, complexity, or coercive designs. In Version A, 81.8% could not find a clear opt-out option. In Version B, 63.6% stated they would not have accepted in a real scenario, but 72.7% still found the decline option unclear. Design changes improve comprehension but do not fully overcome systemic challenges. The study also yielded qualitative findings: users preferred visual cues, plain language, and modular options, and some trusted large firms by default.

The implications for the Global South are significant. Limited connectivity, lower digital literacy, and reliance on mobile devices intensify the gap between privacy as a right and as a lived reality.

Recommendations
---------------

Privacy in AI societies must be reimagined not as the individual’s burden to manage, but as a collective value requiring structural, technical, and ethical realignment. It is not just about data, but about power, dignity, and justice in a digitized world.

Privacy must be treated as a dignity-based right grounded in moral personhood, not just informational control. Normative theories like the Deep Self View offer inclusive frameworks that accommodate vulnerable populations.

Redesigning privacy notices to include layered structures, visual cues, granular settings, and transparency dashboards is a technical imperative, and deploying PETs such as differential privacy, federated learning, and encryption could minimize data exposure by design (a sketch of federated learning follows below). There is also a need to shift from user-centric to structural safeguards, to create international privacy task forces bringing together regulators, technologists, and civil society to monitor AI impacts, and to leverage Track 1.5 diplomacy and South-South coalitions to establish equity-driven frameworks.
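To illustrate how federated learning keeps raw data on-device, here is a minimal, self-contained sketch of federated averaging (FedAvg) on a toy linear-regression task. The dataset, function names, and hyperparameters are illustrative assumptions, not drawn from the article.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, steps: int = 10) -> np.ndarray:
    """One client's local training: a few gradient steps on linear
    regression, computed entirely on data that never leaves the device."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(clients, rounds: int = 20) -> np.ndarray:
    """FedAvg: the server aggregates model parameters, never raw records."""
    dim = clients[0][0].shape[1]
    global_w = np.zeros(dim)
    for _ in range(rounds):
        # Each client trains locally and sends back only its weights.
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        # Average the updates, weighted by each client's dataset size.
        sizes = np.array([len(y) for _, y in clients])
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w

# Hypothetical demo: three clients hold private samples of the same
# underlying relationship y = 2*x0 - 1*x1.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (20, 35, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

print("Recovered weights:", federated_averaging(clients))
```

The safeguard here is structural rather than user-facing: the server only ever sees model parameters, so exposure of raw records is limited by architecture rather than by a consent notice.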

Appendix – survey
-----------------

A short survey was administered to assess how users engage with different privacy notice designs. Participants viewed two real-world privacy notices. After viewing each notice, participants completed a brief quiz assessing their understanding of the data practices described.

These results support existing literature that emphasizes the importance of clear, user-centric privacy designs in enabling informed consent and meaningful digital autonomy.

> _This article was initially published in a [special issue on Artificial Intelligence](https://drive.google.com/file/d/1FCBUbbMOXZatYq8BU3MH5P8YgV3SwQyB/view) from the Sciences Po Student Works and Papers Collection. It draws from multiple master’s programmes and builds on the Sciences Po Student Conference, “Can AI benefit democracy?”, held on 21 February 2025 with students from the Sciences Po School of Public Affairs, Law School, School of International Affairs, and School of Management and Impact. Despite different disciplinary backgrounds, student perspectives converged around three core themes — regulation, inequality, and citizenship & trust — echoing topics from the Intergovernmental AI Action Summit held earlier that month in Paris._

### Theme
`#Numérique` 

**Language:** `#Français` 



