Significant Harm in EU Law: When Voice-Based Virtual Assistants Are Prohibited
In a world pervaded by artificial intelligence, the law must retain a central role in safeguarding human rights, interests, and legal certainty. Yet regulation finds it increasingly difficult to keep pace with rapidly evolving technologies. A key issue concerns the interpretation of the AI Act, particularly Article 5(1)(a), which prohibits AI systems that deploy manipulative, deceptive, or subliminal techniques causing significant harm. The notion of “significant harm,” however, is not defined, leaving its meaning to interpreters and thereby increasing discretion and legal uncertainty. In such a highly technological context, it is crucial to identify the threshold beyond which harm becomes significant, so as to prevent harmful outcomes and ambiguities in enforcement. This requires analyzing European legislation on harm, its subcategories, and related concepts such as severity and legal violation. An instructive case study is that of voice-based virtual assistants, which rely on natural language processing (NLP) and API calls to provide timely responses to users. How might these systems manipulate or deceive users and induce unconscious choices? And under what conditions could such conduct cause significant harm? This analysis aims to identify when such behaviors amount to manipulation, deception, or subliminal influence, offering guidance both ex ante for developers and ex post for affected users.