Access to Expertise

Ethics matters

Using data-driven AI raises important ethical questions, many of which – but not all – can be addressed by pursuing a responsible approach to innovation.

OpportunityMatch is a fairly simple system that has a lot in common with search engines and tools many of us use on a daily basis. Nonetheless, it raises important issues regarding the potential risks and harms of AI, which we had to address during its development.

The most important of these is the issue of algorithmic bias, a phenomenon associated with machine learning algorithms that has been the subject of many recent debates around the ethics of AI. Roughly speaking, algorithmic bias is present when a system produces different results for different groups of people in a way that systematically disadvantages certain demographics. This is particularly serious when these groups are considered vulnerable, when the disparate treatment is unintended, and when the output of the system influences decisions that might impact their lives.

Machine learning methods like those used in OpportunityMatch are in the spotlight of debates around algorithmic bias and the discriminatory effects it might have. This is because (a) they detect patterns that may be hidden in the data, even when they do not use any sensitive personal data, (b) their behaviour may change over time as they are trained on new data, and (c) some of these methods are unable to explain their results in a human-understandable way. Taken together, these properties can make bias hard to detect, monitor, and rectify.

As an example, imagine – hypothetically – that female researchers have a different writing style from their male colleagues, in that they systematically use some words more or less frequently, for example those originating in military jargon (e.g. “recruitment”, “strategy”, “deployment”, “campaign”). While OpportunityMatch knows nothing about whether the authors of research papers are male or female, it might come to associate those words more with male authors. This could result in outputs from male authors appearing more often in search results, especially if the search queries use these words frequently. (This might be the case, for example, if many queries come from a business or funding agency context, where the use of this kind of jargon, one could argue, is common.)
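To make this mechanism concrete, the toy sketch below (in Python, and emphatically not the OpportunityMatch implementation) shows how purely stylistic word-frequency differences can change a text-similarity ranking. The two abstracts and the query are invented; both describe essentially the same research, yet the jargon-heavy abstract scores higher against a jargon-heavy query:

# Toy illustration (not the OpportunityMatch implementation): stylistic
# word-frequency differences alone can shift a similarity-based ranking.
# Requires scikit-learn; both abstracts and the query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    # Same underlying topic, two different (hypothetical) writing styles.
    "A strategy for the recruitment and deployment of sensors in a field "
    "campaign to monitor urban air quality.",
    "A participatory approach to gathering community measurements of urban "
    "air quality using low-cost sensors.",
]
query = ["recruitment strategy for an air quality monitoring campaign"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform(query)

# Rank the documents by cosine similarity to the query: the jargon-styled
# abstract comes out on top, even though the topics are the same.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {doc[:60]}...")

The point is not the particular vectorisation method, but that any system ranking documents by word overlap will reproduce whatever stylistic regularities exist in its source texts.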

The methods we use in OpportunityMatch do not allow us to analyse directly whether such bias is present, e.g. whether it affects researchers with different protected characteristics, such as sex, race, ethnicity, age, or disability. This is an inherent limitation of certain types of machine learning algorithms that researchers are actively working to overcome; at the moment, the best we can do is run statistical comparisons across these groups to check whether any of them are systematically disadvantaged.
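As a rough illustration of what such a comparison could look like – and not our actual monitoring code – the Python sketch below compares the fraction of researchers from two groups who appear in the top-k results of a set of logged queries, using a simple two-proportion z-test. All identifiers, group labels, and result lists are invented, and a real analysis would need far larger samples:

from math import erf, sqrt

def appearance_rate(researchers, top_k_results):
    """Fraction of researchers who appear in at least one top-k result list."""
    appeared = {r for results in top_k_results for r in results}
    hits = sum(1 for r in researchers if r in appeared)
    return hits, len(researchers)

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Two-sided normal-approximation test for a difference in proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Invented example: researcher IDs split into two hypothetical groups, plus
# the top-k result lists of a handful of logged queries.
group_a = ["r1", "r2", "r3", "r4", "r5"]
group_b = ["r6", "r7", "r8", "r9", "r10"]
logged_top_k = [["r1", "r2", "r6"], ["r3", "r1"], ["r2", "r4", "r7"]]

print(two_proportion_z_test(*appearance_rate(group_a, logged_top_k),
                            *appearance_rate(group_b, logged_top_k)))

A statistically significant difference in these appearance rates would not prove bias on its own, but it would tell us where to look more closely.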

Another dilemma that arises in this context is whether or not to use sensitive information about the people associated with the training data – if it is available in the first place. In our case, the University database we use does not contain this information, so we opted not to use it in the current version, though this is something we might consider doing in the future. We have attempted to predict attributes such as sex, ethnicity, and race from researcher names using other algorithms, but these are unreliable and create their own issues in terms of bias.

Moreover, we have to wait until we have a representative set of real user queries before we can systematically monitor for bias – and this highlights another issue: collectively, user behaviour influences the system’s behaviour over time. This creates a “shared responsibility” problem, which is difficult to navigate. On the one hand, users should be free to search for anything they like, and those who own the system should not distort their input. On the other, users’ unconscious biases may further exacerbate biases that are embedded in the historical data OpportunityMatch draws from institutional resources.

In the development of OpportunityMatch, we worked closely with many different stakeholders at the University to guide our approach to managing these risks. We opted to train the system only on publicly available data, and we process only the textual description of each research output. This means that we do not consider the funding volume of grants, the citation counts of papers, or the classification of different journals.

We also require an opt-in decision from all users to share their searches and to receive notifications from the system, and we deliberately did not add any in-app communication, leaving users to communicate with each other through external means. Nobody should experience a negative impact if they choose not to use the system, or if they want to use it without sharing any information about their usage with others.

Users can edit and delete all data they have contributed to the system, and they can request that even their PURE data (which is available through other public-facing systems) is removed from it. Needless to say, the system does not track users’ other online activity in any way, nor does it modify any authoritative institutional databases. Going forward, we are also considering adding features that enable users to highlight or filter results from underrepresented groups.

Many of these decisions demonstrate that there is a range of issues that need to be considered when addressing the ethics of AI-based systems – issues that are often not directly related to the use of AI itself, but to the ways in which an overall system is designed and used in a product or service. Projects such as OpportunityMatch provide us with real-world systems we can shape in controlled and responsible ways. It is worth noting that little is known about how the tools most of us use on a daily basis – including those that focus on exploring research activities – deal with these issues.

Undoubtedly, systemic inequalities exist in the world of research and academia, and we must counteract them. AI systems have the potential to exacerbate these inequalities, but it is only through careful and considered experimentation with them that we can learn how best to manage these risks, while using the opportunities AI gives us to deliver value to the University community.