The research will develop recommendations for making autonomous technologies more answerable to people in contexts such as health, public services and finance.
The project is part of the UKRI Trustworthy Autonomous Systems (TAS) programme, which examines autonomous technologies including automated software algorithms, robots and aircraft autopilots.
Reliable systems
The UKRI programme has invested £33 million to address some of the questions around autonomous systems, including whether they are safe, reliable, ethical and trustworthy.
The multidisciplinary team from Edinburgh will explore how to establish responsibility for actions and decisions taken by autonomous technologies by ensuring their answerability to people.
This research will aid the design of technologies that operate ethically and legally while benefiting society.
Exploring gaps
The project will explore the issue of responsibility gaps – a problem that arises when a machine or software agent is unable to bear responsibility for its actions and their outcomes.
The research will develop guidance for practitioners with recommendations for bridging responsibility gaps by making autonomous systems more answerable to people in workplace, healthcare and financial settings.
The research team is led by Professor Shannon Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence in the School of Philosophy, Psychology and Language Sciences and is Director of the Centre for Technomoral Futures at the Edinburgh Futures Institute.
The project brings together expertise from across the University and includes Dr Tillmann Vierkant of the School of Philosophy, Psychology and Language Sciences; Professor Michael Rovatsos and Dr Nadin Kokciyan of the School of Informatics; and Dr Nayha Sethi of the Centre for Biomedicine, Self and Society.
Other partners include the Scottish Government’s Digital Directorate, the NHS Artificial Intelligence Laboratory (AI Lab) and software analytics specialists SAS.