Professor Michael Rovatsos

Professor of Artificial Intelligence

Background

I am Professor of Artificial Intelligence at the School of Informatics, part of the Artificial Intelligence and its Applications Institute (AIAI), and academic liaison for the University of Edinburgh at the Alan Turing Institute, the UK's National Institute for Data Science and AI. From 2018 to 2023, I was Director of the Bayes Centre, the University's innovation hub for Data Science and AI, and I also coordinated the University's AI Strategy as Deputy Vice Principal Research (AI) from 2020 to 2022.

My personal journey in AI started in 1999, while working on my undergraduate thesis at the University of Saarbrücken. After a year working as a software engineer at an AI startup in Frankfurt, I started studying toward a PhD at the Technical University of Munich, developing methods that enable agents to learn optimal interaction strategies when using structured communication languages. I joined the University of Edinburgh as a Lecturer after completing my PhD in 2004, and have been at the University ever since, progressing to Senior Lecturer in 2013, Reader in 2017, and Professor in 2019. My research is in multiagent systems, with a focus on the development of ethical and responsible AI algorithms. You can read more about my work below and in my CV.

I was born in Greece but spent most of my early years in Germany before moving to Scotland, in case you cannot work out where my accent is from. 

 

CV

95826.pdf

Qualifications

PhD in Computer Science (Dr. rer. nat.), Technical University of Munich, summa cum laude, 2004

Diploma in Computer Science (Dipl.-Inform.), University of Saarbrücken, Germany, first class, 1999

Responsibilities & affiliations

  • Member, Scottish AI Alliance Leadership Circle

  • Associate Editor, Journal of Autonomous Agents and Multi-Agent Systems

  • Chair, University IT Committee

  • Member, University AI and Data Ethics Committee

  • Member, Centre for Statistics Steering Committee

Undergraduate teaching

My teaching responsibilities have been limited since I took on substantial leadership roles as Director of the Bayes Centre and, subsequently, Deputy Vice Principal Research (AI). Since then, I have focused primarily on non-traditional educational formats, co-leading the development of a multidisciplinary MOOC on Data Ethics, AI, and Responsible Innovation, and giving introductory tutorials on AI ethics at major international conferences and summer schools such as IJCAI and ACAI.

Previously taught courses:

Postgraduate teaching

Open to PhD supervision enquiries?

Yes

Areas of interest for supervision

I am always interested in exceptional PhD students with a strong background in AI (especially multiagent systems, ethical and human-friendly AI, automated planning, game-theoretic AI, and neuro-symbolic approaches). I try to contribute to increasing the representation of female researchers, and of researchers from other underrepresented groups, in AI. If that could be you, please consider yourself specifically encouraged to apply!

If you would like to study under my supervision, it would be useful to indicate what research topic you are interested in, and to include your CV, transcripts, and a sample of your technical writing (e.g. a report from a course you took, your undergraduate dissertation, etc.). Please also familiarise yourself with the PhD application process, funding schemes, and deadlines before you contact me, so that we don't start discussing science when there is little chance you would be admitted to one of our PhD programmes or be able to fund your PhD study.

If you would like me to be considered as a supervisor for your application, you need to apply to this PhD programme and name me as a proposed supervisor, though I would strongly advise contacting me before preparing your application.

I try to respond to all enquiries where there is a good chance the applicant's research interests overlap with mine, but please understand that this can take up to a couple of weeks during very busy times.

 

Current PhD students supervised

I am currently principal supervisor for Claire Barale, who works on human-AI reasoning in law (co-supervisor: Nehal Bhuta, Law).

I am currently second supervisor for Jake Barrett, who works on participatory decision making and AI (principal supervisor: Kobi Gal, Informatics).

Past PhD students supervised

As principal supervisor:

  • Savina Kim (PhD, completed), Fairness and bias in credit decisioning, September 2020 to April 2024 (jointly supervised with Galina Andreeva, Business School)
  • Matei Craciun (MScRes, completed), Learning in coalition formation, September 2014 to August 2015
  • Alexandros Belesiotis (PhD, completed), Argumentation-Based Conflict Resolution in Planning, September 2007 to May 2012
  • George Christelis (PhD, completed), Automated Norm Synthesis, September 2007 to August 2011
  • Matthew Crosby (PhD, completed), Heuristic Multi-Agent Planning, March 2009 to October 2013
  • Iain Wallace (PhD, completed), Practical Social Reasoning in the ESB Framework, September 2006 to August 2010
  • Xavier Rafael Palou (MScRes, completed), Distributed Collaborative Learning, September 2006 to August 2008

As second supervisor:

  • Nick Hoernle, Modelling knowledge in exploratory learning environments (supervisor: Kobi Gal)
  • Can Cui, Games for Genome Analysis (supervisor: Dave Robertson)
  • Paolo Pareti, Procedural Knowledge on the Semantic Web (supervisor: Ewan Klein)
  • Sergio Elizondo, Demand and Supply Matching in SmartGrids (supervisor: Nigel Goddard)
  • Pavlos Andreadis, Decision Making and Preference Elicitation (supervisor: Subramanian Ramamoorthy)
  • Alan White, Improving Multiagent Plan Robustness through Policy-Driven Maintenance and Repair (supervisor: Austin Tate)
  • Stefano Albrecht, Ad Hoc Team Formation and Multiagent Reinforcement Learning (supervisor: Subramanian Ramamoorthy)
  • Herry Herry, Automated Composition of Artifacts in Behavioral Signature Models (supervisor: Paul Anderson)
  • Majd Hawasly, A Framework for Multi-Robot Strategic Decision Making (supervisor: Subramanian Ramamoorthy)
  • Ashwag Maghraby, A Structural Synthesis System for Argument Protocols from High-Level Descriptions (supervisor: Dave Robertson)
  • Areti Manataki, Agent-Based Supply Chain Management Systems (supervisor: Jessica Chen-Burger)
  • Paul Martin, Common Dialogue Artefacts as a Basis for Agent Society (supervisor: Dave Robertson)
  • Conrad Rider, Simulating Human Decision Making in Environmental Agent Based Models (supervisor: William Mackaness)
  • Paolo Besana, Dynamic Ontology Mapping in Multiagent Systems (supervisor: Dave Robertson)

Research summary

My research is in multiagent systems, an area that is concerned with designing the computational methods needed to coordinate the activities of multiple independent and interacting actors. These actors can be artificial software agents (e.g. in algorithmic trading or online ad auctions) or human users connected through online platforms (e.g. workers on sharing economy platforms, or buyers and sellers in online marketplaces). 

The multiagent systems perspective on AI focuses on "decentralised intelligence" by considering multiple agents, each of which pursues its own objectives while interacting with the others. These individual objectives may conflict, so if we want to make sure the whole "society" functions as we would like it to, the question becomes one of designing the rules of engagement, incentives, and mechanisms that ensure smooth and productive collaboration between agents.

In my work, I use an eclectic mix of AI techniques (knowledge-based, game-theoretic, and machine learning based) and collaborate extensively with social scientists, human factors experts, and users of real-world systems. I have been very lucky to work with many amazing PhD students, postdocs, visitors, and collaborators over the years, who have made this research possible.

Current research interests

Since around 2014, the focus of my work has been on ethical AI, where I develop architectures and algorithms that support transparency, accountability, fairness, and diversity-awareness. While much of the debate around the ethical risks of AI can be rather speculative, I am most interested in making sure that the concrete computational mechanisms used by AI-driven systems are aligned with human values. In a multiagent systems context, this mostly means creating mechanisms to elicit users' and stakeholders' views and translate them into concrete constraints, optimisation criteria, and design principles for algorithms.

Past research interests

I have previously done extensive research in agent communication, multiagent planning, argumentation systems, multiagent learning, social reasoning, and on norms, trust and reputation. You can find research papers on these topics in the publication section below.

Affiliated research centres

Project activity

I have been involved in externally funded grants worth over £18 million, supported by UK and European funding bodies and industry. Major ones have included:

  • SmartSociety, where I led work on social orchestration systems for coordinating collective human activity on online platforms;
  • ESSENCE, where I coordinated a European early-career training network focusing on technologies to align meaning between humans and machines;
  • UnBias, which developed fair algorithms to help address people's concerns about algorithmic bias;
  • ReEnTrust, where we built AI mediation tools for resolving conflicts between online data-driven platforms and their users;
  • CyCAT, which established a centre for algorithmic transparency in Cyprus; and
  • Enabling Advanced Autonomy through Human-AI Collaboration, which set out to develop a new vision for next-generation AI systems.

In the ongoing Making Systems Answer project, we investigate what answers users expect from AI systems in terms of responsibility, and how we can develop methods to provide these answers. I was also recently involved in Creative Informatics, a major programme to drive innovation in the creative industries through the use of data and AI.

Current project grants

Making Systems Answer, UKRI Trustworthy Autonomous Systems Award, £559,682 (2022-24)
Ethical Human-AI Reasoning in Law, Cisco University Research Fund Award, $139,687 (2020-23)

Past project grants

Creative Informatics Cluster, AHRC/SFC Creative Industries Cluster Award, £5,999,090 (2018-23)
Enabling Advanced Autonomy through Human-AI Collaboration, EPSRC Turing 2.0 Award and University of Edinburgh, £371,313 (2021-22)
ReEnTrust, EPSRC Research Grant, £1,211,004 (Edinburgh share £440,878) (2018-21)
CyCAT, H2020 Research Grant, €999,965 (Edinburgh share €219,296) (2018-21)
Edinburgh Vishub, EPSRC Capital Award, £200,000 (2018-20)
UnBias, EPSRC Research Grant, £1,139,110 (Edinburgh share £218,327) (2016-18)
Research for Emergency Aftershock Response, NERC Global Challenges Research Fund Project, £198,904 (2016-17)
ESSENCE, FP7 Marie Curie Initial Training Network, €3,990,183 (Edinburgh share €1,235,026) (2013-17)
SmartSociety, FP7 FET Integrated Project, €6,829,664 (Edinburgh share €841,142) (2013-16)
BioNet Agents, FP6 Marie Curie Transfer of Knowledge, €165,000 (Edinburgh share €58,348) (2006-07)