Group #1. TLI – Technology, Law and the Imaginary
From Literature to Videogames
Research Group Coordinator: Prof. Giovanni Ziccardi
This research line explores the deep entanglement between technology, law and the cultural imagination, positioning videogames, literature, cinema and television series as privileged lenses through which to interpret contemporary society.
The project is grounded in the idea that technological transformation cannot be fully understood through technical or legal analysis alone. Instead, it requires a broader cultural perspective: one capable of tracing how narratives, symbols and imaginaries anticipate, shape and critique the evolution of law, power and human agency in the digital age.
At the core of this initiative lies an interdisciplinary dialogue between law, media studies, literary criticism and technology studies. By bringing these domains together, the research aims to investigate how cultural artefacts both reflect and influence legal frameworks, social norms and collective perceptions of technological change.
Group #2. GGH – Geopolitics, Global Affairs and Human Rights
From Digital Geopolitics to Human Rights Protection
Research Group Coordinators: Dr. Paulina Kowalicka, Dr. Gabriele Suffia
This research line focuses on the profound transformation of geopolitics and global affairs under the pressure of technological acceleration, digital interdependence, and systemic crises.
It analyses how power, sovereignty and influence are reshaped by data flows, digital infrastructures, artificial intelligence, cyber operations, and information warfare, with particular regard to their impact on fundamental rights and democratic institutions.
The research investigates the legal and ethical tensions between security, innovation, and human rights in areas such as surveillance, border technologies, autonomous systems, and digital authoritarianism.
A central concern is the protection of human dignity in a fragmented global order, where international law, humanitarian principles, and human rights frameworks are increasingly challenged by asymmetric conflicts, hybrid warfare, and algorithmic decision-making.
The line adopts a global perspective, integrating legal analysis with political theory and international relations, while paying close attention to vulnerable regions and populations disproportionately affected by technological power imbalances.
Its innovative contribution lies in bridging classical human rights discourse with emerging forms of technological domination, proposing normative frameworks capable of responding to twenty-first-century geopolitical realities.
Group #3. AH – AI and Health
From Data-driven Medicine to Rights-based Governance of Health AI
Research Group Coordinators: Dr. Maria Grazia Peluso, Dr. Malwina A. Wójcik-Suffia
The AI and Health research line examines the transformative role of artificial intelligence in healthcare systems, biomedical research, and public health governance.
It addresses the legal, ethical, and social implications of deploying AI-driven tools in diagnosis, prognosis, personalised medicine, and clinical decision support, with a particular focus on accountability, transparency, patient autonomy, and the protection of patients' fundamental rights.
Firstly, this area of research explores the redistribution of responsibilities among physicians, healthcare institutions and AI developers, examining the evolving regulatory models for health AI under the Medical Device Regulation and the AI Act, as well as the medical liability models for AI-mediated harms under the revised Product Liability Directive.
Secondly, the research line investigates health data governance, including issues of consent, the protection of sensitive information, and bias in the context of the secondary use of data under the European Health Data Space.
Thirdly, the research area explores issues relating to the effective and equitable implementation of AI-based technologies in healthcare systems, including Health Technology Assessment. Special attention is devoted to the impact of AI on health inequalities, access to care, and the risk of technological exclusion.
These topics are examined through an interdisciplinary and comparative lens, encompassing non-European ethical and regulatory perspectives. In this way, this research line contributes to the ongoing international dialogue on trustworthy, human-centred AI in healthcare, capable of enhancing innovation while safeguarding fundamental rights and public trust.
Group #4. NTVG – New Technologies and Vulnerable Groups
From Technological Vulnerability to Human Empowerment
Research Group Coordinators: Dr. Arianna Arini, Dr. Giulia Pesci, Dr. Samanta Stanco
This research line examines the relationship between emerging technologies and conditions of vulnerability, adopting an interdisciplinary and critical perspective that bridges legal, cognitive and socio-technical dimensions. It focuses both on structurally vulnerable groups—such as children, the elderly, persons with disabilities, migrants and economically disadvantaged communities—and on emerging forms of cognitive vulnerability generated by the pervasive use of digital platforms and generative AI systems.
The research investigates how automated decision-making, biometric technologies and AI-mediated environments can amplify existing inequalities while simultaneously reshaping human cognition, judgement and autonomy. Particular attention is devoted to phenomena such as cognitive delegation, epistemic fragility, over-reliance on AI systems and the erosion of critical thinking, especially in contexts involving minors and non-expert users.
From a normative perspective, the line addresses structural bias, algorithmic opacity and the weakening of effective legal remedies, while exploring the adequacy of current regulatory frameworks (including data protection and AI governance) in protecting both social and cognitive integrity. At the same time, it develops vulnerability-aware regulatory models, rethinks concepts such as consent and capacity in AI-mediated environments, and promotes the integration of human rights and cognitive impact assessments into technological design.
The ultimate aim is to contribute to a legal framework in which technological innovation does not reinforce existing inequalities or cognitive dependencies, but instead supports empowerment, inclusion and the preservation of human autonomy.
Group #5. AJ – The Augmented Jurist
From Jurist to Augmented Jurist
Research Group Coordinators: Dr. Simone Bonavita, Prof. Pierluigi Perri, Dr. Desideria Pollak
The research line on the Augmented Jurist is devoted to rethinking the identity, skills, and responsibilities of legal professionals in an era of pervasive digitalisation and artificial intelligence.
It explores how technologies such as generative AI, legal analytics, automation, and decision-support systems are reshaping legal reasoning, professional judgment, and institutional practices.
Rather than framing technology as a substitute for human expertise, this line conceptualises augmentation as a critical partnership between human intelligence and computational tools.
The research investigates new forms of legal knowledge, hybrid competences, and ethical responsibilities arising from this transformation, as well as the risks of cognitive delegation, deskilling, and over-reliance on automated outputs.
It also addresses the impact of technology on legal education, access to justice, and the balance of power between legal professionals, clients, and technological intermediaries.
By articulating a rigorous and reflective model of the augmented jurist, this research line contributes to shaping a future-oriented legal culture that is technologically literate, ethically grounded, and socially responsible.
