COMPASS 2026

Book of Abstracts

Comprehensive collection of research contributions and talk abstracts for COMPASS 2026.

Graph neural networks (GNNs) are effective for node classification when predictions can leverage information from local neighborhoods. However, they can struggle when prediction depends on long-range interactions, due to well-known problems such as oversquashing. To address this issue, prior work has proposed rewiring the graph topology to improve signal propagation. In this work, we introduce RAwR, a novel and efficient rewiring method that builds a quotient graph from an equitable partition and connects it to the input graph. This enables faster communication between nodes with the same structural role -- i.e., the same Weisfeiler-Leman graph coloring -- and reduces the total effective resistance. Furthermore, an approximate definition of the equitable partition allows for controllable shrinking of the quotient graph until it collapses to a single node, thereby recovering the well-known Master Node rewiring technique. Across a broad evaluation suite, including standard homophilic and heterophilic datasets as well as synthetic graphs specifically designed for long-range interactions, RAwR achieves state-of-the-art results. We also analytically investigate the improvements that RAwR can achieve in an idealized teacher-student model of linear GNNs, explaining when and why role-based rewiring helps. This theoretical insight leads to the definition of the Spectral Role Lift (SRL), a measure useful for identifying the approximate equitable partition that leads to the best performance.
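To make the construction concrete, the following is a minimal sketch of role-based rewiring in the spirit of the abstract above, assuming a networkx graph: 1-WL color refinement yields an equitable partition, one "role node" per color class is attached to its members, and role nodes are wired along the quotient-graph edges. All function and node names are illustrative assumptions, not the authors' implementation.

```python
import networkx as nx

def wl_coloring(G):
    """1-WL color refinement; returns a node -> color-class mapping."""
    colors = {v: 0 for v in G}
    for _ in range(G.number_of_nodes()):
        # a node's signature: its own color plus the multiset of neighbor colors
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in G[v])))
            for v in G
        }
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in G}
        stable = len(set(new_colors.values())) == len(set(colors.values()))
        colors = new_colors
        if stable:  # no further refinement: the partition is equitable
            break
    return colors

def rewire_with_quotient(G):
    """Attach one role node per color class to its members, and connect
    role nodes along the quotient-graph edges between adjacent classes."""
    colors = wl_coloring(G)
    H = G.copy()
    role = {c: ("role", c) for c in set(colors.values())}
    H.add_nodes_from(role.values())
    for v, c in colors.items():          # member-to-role shortcuts
        H.add_edge(v, role[c])
    for u, v in G.edges():               # quotient edges between classes
        if colors[u] != colors[v]:
            H.add_edge(role[colors[u]], role[colors[v]])
    return H

# On a regular graph the partition collapses to one class, so the rewiring
# degenerates to a single master node connected to every vertex.
H = rewire_with_quotient(nx.cycle_graph(6))
```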

Feature attribution is the dominant paradigm for explaining the predictions of complex machine learning models like neural networks. However, most existing methods offer little guarantee of reflecting the model's prediction-making process. We define the notion of explanatory alignment and argue that it is central to trustworthy predictive modeling: in short, it requires that explanations directly underlie predictions rather than serve as rationalizations. We present model readability as a design principle enabling alignment, and Pointwise-interpretable Networks (PiNets) as a modeling framework to pursue it in a deep learning context. PiNets combine statistical intelligence with a pseudo-linear structure that yields instance-wise linear predictions in an arbitrary feature space. We illustrate their use on image classification and segmentation tasks, demonstrating that PiNets produce explanations that are not only aligned by design but also faithful across other dimensions: meaningfulness, robustness, and sufficiency.
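The pseudo-linear idea can be illustrated with a short sketch: a network emits per-instance coefficients, and the prediction is their inner product with the features, so the coefficients are the explanation by construction. This is a minimal PyTorch sketch of the general principle, not the authors' PiNet architecture; the class name and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PseudoLinearNet(nn.Module):
    """Instance-wise linear predictor: logits = <W(x), x>, one W per class."""

    def __init__(self, d_in, n_classes):
        super().__init__()
        self.n_classes = n_classes
        # coefficient generator: maps an instance to one weight vector per class
        self.coef_net = nn.Sequential(
            nn.Linear(d_in, 64), nn.ReLU(),
            nn.Linear(64, d_in * n_classes),
        )

    def forward(self, x):                          # x: (batch, d_in)
        W = self.coef_net(x).view(-1, self.n_classes, x.shape[1])
        logits = torch.einsum("bcd,bd->bc", W, x)  # linear in x, per instance
        return logits, W                           # W doubles as the explanation

model = PseudoLinearNet(d_in=10, n_classes=3)
logits, explanation = model(torch.randn(4, 10))
```

Because the returned coefficients literally produce the prediction, the explanation is aligned by design rather than computed post hoc.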

The presentation will argue that the rise of AI-driven disinformation strategies fundamentally alters the relationship between freedom of expression and the rule of law by amplifying, accelerating, and obscuring manipulative political communication at an unprecedented scale. This development challenges the judiciary’s classical role as a guardian of free expression, raising the risk that interventions justified by opaque AI systems could be misused in contexts of democratic backsliding. Within this transformed landscape, EU instruments such as the Digital Services Act and the AI Act reconfigure the rule of law by imposing transparency, accountability, and risk-mitigation duties on private actors whose AI technologies increasingly shape democratic discourse.

The advent of AI, particularly generative AI, raises numerous concerns regarding the protection of individuals most exposed to the risks arising from the pervasive use of new technologies across various aspects of daily life. The EU legislator has acknowledged these challenges and has consequently mandated protection for the category of so-called "vulnerable" subjects. The discussion aims to address the concept of vulnerability in light of the AI Act, envisaging a governance model in which the protection of fragility becomes the fundamental benchmark for the very legitimacy of technological innovation.

The regulatory landscape in the European Union has seen two different trends in the past few years: after a wave of regulations which left us with the AI Act, the Digital Services Act, the Data Governance Act, the Data Act, and other landmark pieces of legislation (just to name a few), the new direction points towards simplification, following the narrative that “regulation stifles innovation”. This talk will challenge this assumption and discuss responsible research practices to foster ethical innovation in the field of new technologies.

In a world pervaded by artificial intelligence, the law must maintain a predominant role in safeguarding human rights, interests, and legal certainty. However, it is increasingly difficult for regulation to keep pace with rapidly evolving technologies. A key issue concerns the interpretation of the AI Act, particularly Article 5(1)(a), which prohibits AI systems using manipulative, deceptive, or subliminal techniques that cause significant harm. Yet the notion of “significant harm” is not defined, leaving interpreters to determine its meaning and thereby increasing discretion and legal uncertainty. In a highly technological context, it is therefore crucial to identify the threshold beyond which harm becomes significant, in order to prevent prejudicial situations and ambiguities in enforcement. This requires analyzing European legislation on harm, its subcategories, and related concepts such as severity and legal violation. An interesting case study is that of voice-based virtual assistants, which use NLP and API techniques to provide timely responses to users. How might these systems manipulate or deceive users and lead them to unconscious choices? And under what conditions could such conduct cause significant harm? This analysis aims to identify when such behaviors amount to manipulation, deception, or subliminal influence, providing guidance both ex ante for developers and ex post for affected users.

What kind of acting and, consequently, of responsibility arises in the context of artificial intelligence systems? On the one hand, an artificial intelligence system appears capable of an “acting without action” – that is, without a subject – which emerges in our very relationship with the system itself. This would call for a regime of moral responsibility different from that of fault tout court. Rather, building upon the recognition of the vulnerabilities of the agents involved, a kind of dynamically negotiated responsibility would seem to arise. On the other hand, this appears to align well with the more general legal orientation, which tends toward liability without fault, or objective liability. Setting aside doctrinal nuances, there may be a correspondence between the two domains, namely the ethical and the legal. Such an approach, which will be described during the talk, could do justice to solutions already proposed, such as the logging of interactions provided for in the EU Artificial Intelligence Act.

Innovation emerges from complex collaboration patterns - among inventors, firms, or institutions. However, little is known about the overall mesoscopic structure around which inventive activity self-organizes. Here, we tackle this problem by employing patent data to analyze both individual (co-inventorship) and organization (co-ownership) networks in three strategic domains (artificial intelligence, biotechnology, and semiconductors). We characterize the mesoscale structure (in terms of clusters) of each domain by comparing two alternative methods: a standard baseline - modularity maximization - and one based on the minimization of the Bayesian Information Criterion, within the Stochastic Block Model and its degree-corrected variant. We find that, across sectors, inventor networks are denser and more clustered than organization ones - consistent with the presence of small recurrent teams embedded in broader institutional hierarchies - whereas organization networks have neater hierarchical role-based structures, with a few bridging firms coordinating the most peripheral ones. We also find that the discovered meso-structures are connected to innovation output. In particular, Lorenz curves of forward citations reveal a pervasive inequality in technological influence: across sectors and methods, both inventor networks (especially) and organization networks consistently concentrate citations in a few of the discovered clusters. Our results demonstrate that the baseline modularity-based method may not fully capture how collaborations drive the spread of inventive impact across technological domains, owing to local hierarchies that call for more refined tools based on Bayesian inference.
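One step of this pipeline can be sketched as follows: detect clusters with the modularity baseline, then quantify how unequally forward citations concentrate across clusters via a Lorenz curve. The sketch assumes node-level citation counts stored in a hypothetical "citations" attribute; the SBM/BIC inference requires a dedicated library and is omitted.

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

def cluster_citation_lorenz(G):
    # baseline mesoscale structure: greedy modularity maximization
    communities = greedy_modularity_communities(G)
    # total forward citations attracted by each cluster
    totals = np.array(
        [sum(G.nodes[v].get("citations", 0) for v in comm) for comm in communities],
        dtype=float,
    )
    # Lorenz curve: cumulative citation share vs. share of clusters;
    # the farther it sags below the diagonal, the more concentrated the impact
    shares = np.sort(totals) / totals.sum()
    return communities, np.concatenate([[0.0], np.cumsum(shares)])

# toy example with heavy-tailed synthetic citation counts
G = nx.karate_club_graph()
rng = np.random.default_rng(0)
for v in G:
    G.nodes[v]["citations"] = int(rng.pareto(1.5) * 10)
communities, lorenz = cluster_citation_lorenz(G)
```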

Bipartite networks provide a fundamental insight into the organisation of complex real-world systems. A key challenge in modeling these systems is devising a monopartite projection that preserves the intricate information encoded within the original bipartite structure. We propose an unsupervised algorithm to obtain statistically validated projections of bipartite signed networks, according to which any two nodes sharing a statistically significant number of concordant (discordant) motifs are connected by a positive (negative) edge. By assessing statistical significance through four distinct Exponential Random Graph Models (ERGMs), we generate link-specific p-values filtered via multiple testing correction. After validating the method on synthetic configurations from a fully controllable generative model, we apply it to three real-world social networks. In all cases, the algorithm detects non-trivial mesoscopic structures that cannot be explained by the constraints of the null models, thus unveiling the authentic signed complexity of the underlying system. Finally, we show how the inherent flexibility of our framework allows for easy extensions to more sophisticated null models and different complex systems.
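The validation logic can be illustrated with a deliberately simplified sketch: count concordant and discordant motifs through common neighbours for every pair of nodes on one layer, compute p-values under a naive binomial null with i.i.d. edge signs (a stand-in for the four ERGM null models of the actual method), and filter them with a Benjamini-Hochberg correction. All names are illustrative assumptions.

```python
from itertools import combinations
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def validated_projection(signs, alpha=0.05):
    """signs: dict mapping (row_node, col_node) -> +1 or -1.
    Returns validated (u, v, sign) edges of the row-layer projection."""
    p_plus = sum(s > 0 for s in signs.values()) / len(signs)
    p_conc = p_plus**2 + (1 - p_plus)**2  # chance a shared neighbour is concordant
    nbrs = {}
    for (r, c), s in signs.items():
        nbrs.setdefault(r, {})[c] = s
    tests = []
    for u, v in combinations(nbrs, 2):
        common = nbrs[u].keys() & nbrs[v].keys()
        if not common:
            continue
        conc = sum(nbrs[u][c] == nbrs[v][c] for c in common)
        tests.append((u, v, +1, binom_sf(conc, len(common), p_conc)))
        tests.append((u, v, -1, binom_sf(len(common) - conc, len(common), 1 - p_conc)))
    # Benjamini-Hochberg: keep the largest prefix of sorted p-values that
    # stays under the FDR line alpha * rank / m
    tests.sort(key=lambda t: t[3])
    m = len(tests)
    k = max((i for i in range(1, m + 1) if tests[i - 1][3] <= alpha * i / m), default=0)
    return [(u, v, s) for u, v, s, _ in tests[:k]]
```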