
Philosophy professor secures £690k to lead on new AI research project

Professor Shannon Vallor to be principal investigator on a UKRI Trustworthy Autonomous Systems Programme: Responsibility Grant, funded by EPSRC

Professor Shannon Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence and is Director of the Centre for Technomoral Futures at the Edinburgh Futures Institute, will be principal investigator on a UKRI Trustworthy Autonomous Systems Programme: Responsibility Grant, funded by EPSRC.

The project, “Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems”, is due to start in January 2022 and run for 30 months. It will develop a critical strand of the Trustworthy Autonomous Systems (TAS) Programme, exploring how to address the issue of ‘responsibility gaps’ and ensure trust in systems that are increasingly part of our daily lives, our infrastructure and our economy.

Holding one another responsible for our actions is a pillar of social trust. A vital challenge in today’s world is ensuring that autonomous systems strengthen rather than weaken that trust. We are thrilled to launch this innovative multidisciplinary collaboration, which interweaves philosophical, legal and computational approaches to responsibility to enable the design of autonomous systems that can be made more answerable to the people who rely on them.

Professor Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence

Practical, research-based guidance

Professor Vallor is leading a multidisciplinary research team from the University of Edinburgh. The project’s outputs will include guidance for practitioners, with recommendations for making autonomous systems more answerable to people in contexts such as health, public services and finance.

The team brings together expertise from across the University and includes Dr Tillmann Vierkant (Philosophy), Professor Michael Rovatsos (Informatics), Dr Nadin Kokciyan (Informatics) and Dr Nayha Sethi (Centre for Biomedicine, Self and Society). Other partners collaborating in the research include the NHS AI Lab (NHSX), SAS, and Scotland’s Digital Directorate.

Autonomous systems in our lives

Autonomous systems, including self-driving vehicles, robots, autonomous warehouse and factory systems, and drones, are increasingly part of our lives. Thanks to advances in artificial intelligence, robotics and networked devices, these systems are already found in domestic, workplace, healthcare and industrial settings.

As computing systems move into high-stakes autonomous operations – independently piloting vehicles, detecting fraudulent banking transactions, or reading and diagnosing our medical scans – it is vital that humans can confidently assess and ensure their trustworthiness. To do that, humans must be able to hold these systems – including the people and organisations operating them – responsible for their actions.

Taking responsibility

Responsibility gaps occur when we cannot identify the person or persons morally responsible for an action with high moral stakes. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust. Autonomous systems can exacerbate responsibility gaps.

This work develops new ways for autonomous system developers, users and regulators to bridge these responsibility gaps by boosting the ability of systems to deliver a vital and understudied component of responsibility, namely answerability. The project draws on cognitive science, law and philosophy to understand the many ways that human agents can answer for their actions, and uses AI expertise to translate this knowledge to autonomous systems.

Project outputs will include tools and guides for enhancing system answerability through dialogical design, scholarly publications that explore the philosophical, legal and technical dimensions of system answerability, and industry, regulatory and public sector events to help disseminate novel design techniques and frameworks for making autonomous systems more answerable to people.

UKRI Trustworthy Autonomous Systems (TAS) Programme

The £33 million UKRI TAS Programme brings together research communities and key stakeholders to drive forward cross-disciplinary, fundamental research to ensure that autonomous systems are safe, reliable, resilient, ethical and trusted.

The collaborative UK-based platform comprises Research Nodes and a Hub, united by the purpose of developing world-leading best practice for the design, regulation and operation of autonomous systems. The central aim of the programme is to ensure that autonomous systems are ‘socially beneficial’, protect people’s personal freedoms and safeguard physical and mental wellbeing. The School of Informatics at the University of Edinburgh leads the Governance and Regulation Node.

The TAS Programme addresses public concern and potential risks associated with autonomous systems by making sure they are both trustworthy by design and trusted by those that use them, from individuals to society and government. Only by addressing these issues of public concern and potential risk will autonomous systems be trusted, allowing their use and adoption to grow.