Significant Harm in EU Law: When Voice-Based Virtual Assistants Are Prohibited

In a world pervaded by artificial intelligence, the law must retain a central role in safeguarding human rights, interests, and legal certainty. Yet regulation finds it increasingly difficult to keep pace with rapidly evolving technologies. A key issue concerns the interpretation of the AI Act, in particular Article 5(1)(a), which prohibits AI systems that deploy manipulative, deceptive, or subliminal techniques causing significant harm. The notion of “significant harm”, however, is not defined, leaving its meaning to interpreters and thereby increasing discretion and legal uncertainty. In a highly technological context, it is therefore crucial to identify the threshold beyond which harm becomes significant, so as to prevent prejudicial situations and ambiguities in enforcement. This requires analyzing European legislation on harm, its subcategories, and related concepts such as severity and legal violation. An instructive case study is that of voice-based virtual assistants, which rely on natural language processing (NLP) and APIs to respond promptly to users. How might these systems manipulate or deceive users and induce unconscious choices? And under what conditions could such conduct cause significant harm? This analysis aims to identify when such behavior amounts to manipulation, deception, or subliminal influence, providing guidance both ex ante for developers and ex post for affected users.
- 15:00
Vittoria Caponecchia
Sant’Anna School of Advanced Studies and University of Pisa
Vittoria Caponecchia is a PhD candidate in Artificial Intelligence for Society at the Sant’Anna School of Advanced Studies in Pisa. She holds a Law degree from the University of Florence and completed her legal traineeship at a law firm in Pistoia. Her doctoral research focuses on the classification of AI systems based on the concept of significant harm, with particular attention to consumer protection.