What kind of agency, and consequently what kind of responsibility, arises in the context of artificial intelligence systems? On the one hand, an artificial intelligence system appears capable of an “acting without action”, that is, without a subject, which emerges in our very relationship with the system itself. This would call for a regime of moral responsibility different from that of fault tout court: rather, building upon the recognition of the vulnerabilities of the agents involved, a kind of dynamically negotiated responsibility seems to arise. On the other hand, this aligns well with the broader legal orientation, which tends toward liability without fault, or strict liability. Setting doctrinal nuances aside, there may thus be a correspondence between the two domains, the ethical and the legal. Such an approach, described in this talk, could do justice to solutions already proposed, such as the logging of interactions provided for in the EU Artificial Intelligence Act.
Elio Grande
University of Pisa
Elio Grande is an AI ethicist and a PhD candidate in artificial intelligence at the University of Pisa. He holds a master’s degree in moral philosophy, with a focus on the ethics of cybernetics, and a postgraduate master’s in digital philosophy. His latest project concerns a double-mirror design for artificial intelligence systems, called Endless Tuning, which allows models of any depth to be adopted without replacing the user, while keeping responsibilities traceable in decision-making contexts. Beyond scientific research, he has collaborated with the website of Fondazione Leonardo - Civiltà delle Macchine, publishing several educational articles, and has organised public talks on artificial intelligence and society. As a policy advisor, his endeavours aim at spreading the ethics of artificial intelligence in order to help build fair digital ecosystems.