
Tutorial - FAccT2025: Computational Argumentation for Fair and Explainable AI Decision-making

As AI systems increasingly influence high-stakes domains, from autonomous systems to healthcare and law, the need for transparency, interpretability, and accountability is more urgent than ever. This tutorial introduces Computational Argumentation as a powerful, interdisciplinary approach to meet these demands. Participants will gain foundational knowledge of argumentation theory, explore real-world applications (including a case study with the Dutch police), and learn how argumentative frameworks support human-AI joint reasoning. We will demonstrate how computational argumentation can be used to detect bias, explain decisions, and align AI systems with human values. Combining theory with hands-on tools, this interactive tutorial equips attendees with methods to critically assess both human and AI reasoning, and contribute to the development of fair, explainable, and trustworthy technologies.

Practical information

The tutorial will be held at the ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025).

When?

June 23, 2025 at 16:15

Where?

Athens Conservatoire, Athens, Greece

How to register?

For registration, see the FAccT 2025 website.

Organizers

Elfia Bezou-Vrakatseli

Elfia Bezou-Vrakatseli (elfia.bezou_vrakatseli@kcl.ac.uk) is a PhD candidate in the UK Research and Innovation Centre for Doctoral Training in Safe and Trusted Artificial Intelligence at King's College London, where she also serves as the representative of the Equality, Diversity & Inclusion committee. She is an Affiliate of the King's Institute for Artificial Intelligence and is currently taking part in The Alan Turing Institute's Enrichment Scheme. Her expertise is in argumentation, and her research focuses on using argumentation tools to enhance communication between humans, and between humans and AI systems.


Andreas Xydis

Andreas Xydis (AXydis@lincoln.ac.uk) is a Post-Doctoral Research Associate in Intelligent Systems at the Lincoln Institute for Agri-food Technology (LIAT) at the University of Lincoln, UK. His expertise is in argumentation, and his research focuses on argumentation-based dialogues, non-monotonic reasoning, and the use of argumentation tools for explainability. His work aims to bridge the gap between formal logic-based models of dialogue and communication as witnessed in real-world dialogue, enhancing communication between humans and/or AI systems as well as human trust in AI systems.

Madeleine Waller


Madeleine Waller (madeleine.waller@kcl.ac.uk) is a PhD candidate in the UK Research and Innovation (UKRI) Centre for Doctoral Training in Safe and Trusted Artificial Intelligence at King's College London. She is an Affiliate of the King's Institute for Artificial Intelligence and is currently undertaking a UKRI Policy Internship at the Welsh Parliament. Her research and expertise lie in the connection between fairness and explainability of AI systems, specifically using argumentation-based approaches.


Contact Us

If you have any questions regarding the tutorial, please contact the OHAAI team at:

ohaaiproject@gmail.com