Project to build trust in autonomous machines
Edinburgh researchers are to play a leading role in scrutinising how systems that put machines in charge of making decisions can be made trustworthy and responsible.
The research will develop recommendations for making autonomous technologies more answerable to people in contexts such as health, public services and finance.
The project is part of the UKRI Trustworthy Autonomous Systems (TAS) programme, which examines the scope of autonomous technologies, including automated software algorithms, robots and aircraft autopilots.
The UKRI programme has invested £33 million to address some of the questions around autonomous systems, including whether they are safe, reliable, ethical and trustworthy.
The multidisciplinary team from Edinburgh will explore how to establish responsibility for actions and decisions taken by autonomous technologies, by ensuring their answerability to people.
This research will aid the design of technologies that operate ethically and legally while benefiting society.
The project will explore the issue of responsibility gaps – a problem that arises when a machine or software agent is unable to bear responsibility for its actions and their outcomes.
The research will develop guidance for practitioners with recommendations for bridging responsibility gaps by making autonomous systems more answerable to people in workplace, healthcare and financial settings.
Professor Shannon Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence in the School of Philosophy, Psychology and Language Sciences and is Director of the Centre for Technomoral Futures at the Edinburgh Futures Institute, is leading the research team.
The project brings together expertise from across the University and includes Dr Tillmann Vierkant of the School of Philosophy, Psychology and Language Sciences, Professor Michael Rovatsos and Dr Nadin Kokciyan of the School of Informatics and Dr Nayha Sethi of the Centre for Biomedicine, Self and Society.
Other partners include the Scottish Government’s Digital Directorate, the NHS Artificial Intelligence Laboratory (AI Lab) and software analytics specialists SAS.
"Holding one another responsible for our actions is a pillar of social trust. A vital challenge in today's world is ensuring that autonomous systems strengthen rather than weaken that trust. We are thrilled to launch this innovative multidisciplinary collaboration, which interweaves philosophical, legal and computational approaches to responsibility to enable the design of autonomous systems that can be made more answerable to the people who rely on them," said Professor Vallor.
The Trustworthy Autonomous Systems programme is funded through the UKRI Strategic Priorities Fund and delivered by the Engineering and Physical Sciences Research Council (EPSRC).
The 30-month project, Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems, is backed by a grant award of over £559,000 and will begin in January 2022.