Professor Shannon Vallor

Professor

Background

Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy. She is Director of the Centre for Technomoral Futures in EFI, and co-Director of the BRAID (Bridging Responsible AI Divides) programme, funded by the Arts and Humanities Research Council. Professor Vallor's research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices. Her work includes advising policymakers and industry on the ethical design and use of AI. She is a standing member of the One Hundred Year Study on Artificial Intelligence (AI100) and a member of the Oversight Board of the Ada Lovelace Institute. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network and the 2022 Covey Award from the International Association for Computing and Philosophy. She is a former Visiting Researcher and AI Ethicist at Google. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the books Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024).

CV

109091.pdf

Responsibilities & affiliations

Director, Centre for Technomoral Futures (Edinburgh Futures Institute) www.technomoralfutures.uk

Co-Director, BRAID (Bridging Responsible AI Divides) www.braiduk.org

Undergraduate teaching

Ethics and Politics of Data (EFIE08004)

Postgraduate teaching

Ethics of Artificial Intelligence (PHIL11186)

Philosophy and Engineering (PGEE11205)

Ethics of Robotics and Autonomous Systems (EFIE11163)

Ethical Data Futures (EFIE11027)

Current PhD students supervised

Denisea Kennedy-Fernandez

Alexander Mussgnug

Yuxin Liu

Bhargavi Ganesh

Mara Neijzen

Past PhD students supervised

Beba Cibralic (external, Georgetown University)

Research summary

Ethics of Artificial Intelligence and Robotics, Data Ethics, Ethics of Automation, Intercultural Digital Ethics, Applied Virtue Ethics, Philosophy of Science

Past research interests

Classical phenomenology (Husserl, Merleau-Ponty), philosophy of mind and language

Project activity

BRAID: Bridging Responsible AI Divides

UKRI/AHRC

Principal Investigator and Co-Director, with Prof Ewa Luger (Edinburgh College of Art)

BRAID is a 3-year national research programme funded by the UKRI Arts and Humanities Research Council (AHRC), led by the University of Edinburgh in partnership with the Ada Lovelace Institute and the BBC. BRAID is dedicated to integrating Arts, Humanities and Social Science research more fully into the Responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI.

Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems

UKRI/EPSRC Trustworthy Autonomous Systems Programme

Principal Investigator, with co-Investigators Nadin Kokciyan (Informatics), Michael Rovatsos (Informatics), Nayha Sethi (Usher Institute), Tillman Vierkant (Philosophy)

As computing systems become increasingly autonomous (able to independently pilot vehicles, detect fraudulent banking transactions, or read and diagnose our medical scans), it is vital that humans can confidently assess and ensure their trustworthiness. Our project develops a novel, people-centred approach to overcoming a major obstacle to this, known as responsibility gaps. Responsibility gaps occur when we cannot identify a person who is morally responsible for an action with high moral stakes, either because it is unclear who was behind the act, or because the agent does not meet the conditions for moral responsibility: for example, if the act was not voluntary, or if the agent was not aware of it. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust.

Autonomous systems create new responsibility gaps. They operate in high-stakes areas such as health and finance, but their actions may not be under the control of a morally responsible person, or may not be fully understandable or predictable by humans due to complex 'black-box' algorithms driving these actions. To make such systems trustworthy, we need to find a way of bridging these gaps. Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps, by boosting the ability of systems to deliver a vital and understudied component of responsibility, namely answerability.

When we say someone is 'answerable' for an act, it is a way of talking about their responsibility. But answerability is not about having someone to blame; it is about supplying people who are affected by our actions with the answers they need or expect. Responsible humans answer for actions in many different ways; they can explain, justify, reconsider, apologise, offer amends, make changes or take future precautions. Answerability encompasses a richer set of responsibility practices than explainability in computing, or accountability in law.

Often, the very act of answering for our actions improves us, helping us be more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps. It is not about whom we name as the 'responsible person' (someone who is more difficult to identify in autonomous systems), but about what we owe to the people holding the system responsible. If the system as a whole (machines + people) can get better at giving the answers that are owed, the system can still meet present and future responsibilities to others. Hence, answerability is a system capability for executing responsibilities, and one that can bridge responsibility gaps.

Our ambition is to provide the theoretical and empirical evidence and computational techniques that demonstrate how to enable autonomous systems (including wider "systems" of developers, owners, users, etc.) to supply the kinds of answers that people seek from trustworthy agents. Our first workstream establishes the theoretical and conceptual framework that allows answerability to be better understood and executed by system developers, users and regulators. The second workstream grounds this in a people-centred, evidence-driven approach by engaging various publics, users, beneficiaries and regulators of autonomous systems in the research. Focus groups, workshops and interviews will be used to discuss cases and scenarios in health, finance and government that reveal what kinds of answers people expect from trustworthy systems operating in these areas. Finally, our third workstream develops novel computational AI techniques for boosting the answerability of autonomous systems through more dialogical and responsive interfaces with users and regulators. Our research outputs and activities will produce a mix of academic, industry and public-facing resources for designing, deploying and governing more answerable autonomous systems.

Current project grants

UKRI BRAID (Bridging Responsible AI Divides) Programme (Principal Investigator) 2022-2025 https://www.braiduk.org
UKRI Trustworthy Autonomous Systems Programme, Responsibility Node (Principal Investigator), 2022-2024 https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/W011654/1
UKRI Trustworthy Autonomous Systems Programme, Governance and Regulation Node (co-Investigator), 2020-2024 https://governance.tas.ac.uk/

Past project grants

Summer Institute in Technology Ethics (Principal Investigator), Templeton World Charity Foundation, 2020-2023 www.site2022.org