AI as a Force for Good

Concerns around the negative impact of AI on equality and diversity, privacy, social justice, and democracy highlight that citizens, businesses, and governments around the world are struggling to understand how they can access the opportunities AI offers while avoiding its potential harms.

This has led to a flurry of initiatives driven by civil society, industry, governmental, and transnational organisations to create ethical frameworks for AI, design new regulation, and establish bodies that will provide oversight and policy advice. All of these recognise the importance of translating high-level ethical principles into practice, but doing so is proving far from straightforward.

At the University of Edinburgh, we pursue an ambitious programme of interdisciplinary research to address this AI ethics challenge, advocating an approach that focuses on tackling ethical issues surrounding AI in context, in close engagement with the stakeholders involved in and affected by new AI innovations, and with an emphasis on the inextricable connection between technology and its users.

We engage in a combination of technical, humanities, and social research, focusing on a set of core themes:

  1. Developing moral foundations for AI
  2. Anticipating and evaluating the risks and benefits of AI
  3. Creating responsible innovation pathways for the adoption of AI
  4. Developing AI technologies that satisfy ethical requirements
  5. Transforming the practice of AI research and innovation

We believe that bringing these lines of investigation together has the potential to make a step change in the readiness of society to tackle the AI ethics challenge.

Our ability to support this research builds on the strength of our School of Informatics, which is the largest research cluster in the UK in AI, as well as a range of other capabilities across many disciplines:

The Centre for Technomoral Futures at the Edinburgh Futures Institute is a new £5 million research centre that focuses on data and AI ethics, bringing together humanities, social science, and technical experts to develop technomoral wisdom in the design of possible futures. It features a dedicated interdisciplinary PhD programme and collaborations with scientists across the breadth of the University.

Our Institute for the Study of Science, Technology and Innovation has been instrumental in shaping the discipline of Responsible Research and Innovation, building on foundational work in the study of digital sociotechnical systems dating back to the early 1980s.

Our Digital Influence and Intelligence Lab uses digital monitoring and experimental approaches to better understand digital influence, bringing together computational techniques such as social media analytics with design, neuroscience, and social computing. Its research is central to debates on the conduct of foreign and security policy, the future of democracy, media, freedom of expression, international development, and the future of civic and private life.

The Trustworthy Autonomous Systems Governance and Regulation Research Node is a new £3.2 million research collaboration between the University and a range of major industry, public sector, and academic partners. It aims to establish a new software engineering framework to support the governance of AI systems and to trial it with external stakeholders in areas including mobile autonomous systems and health and social care, complementing new methods of governance with new computational tools for regulators and developers.

The five hubs established through the Data-Driven Innovation Programme provide a unique environment in which to work with external partners and communities on the real-world adoption of AI. They have already established major collaborations, for example on Data for Children with UNICEF, on advanced care research with Legal & General, and on next-generation manufacturing with Babcock, with over 50 companies brought into our ecosystem through the Bayes Centre alone.

The development of our unique Edinburgh International Data Facility builds on extensive experience of trusted research with sensitive data provided by external partners, for example hosting an NHS Data Safe Haven and working with banking data in collaboration with our recently established £25 million Global Open Finance Centre of Excellence.

Building on this excellent interdisciplinary connectivity created over the past forty years, we are developing new propositions on the sustainability and governance of AI systems, and pursuing these opportunities through close involvement in the development of Scottish, UK, and European AI strategies.
