Analysing online reviews
A university study has shed new light on how online rating systems work.
Online reviews of products and services - such as those on Amazon or TripAdvisor - are more likely to be useful if only a small number of people have given feedback, the study finds.
People giving online reviews exaggerate their scores in surveys where many others have already contributed, the researchers found, in an attempt to increase the impact of their own response.
As a result, online surveys that have received many scores are more likely to be skewed by extremely good or bad ratings, distorting results for consumers, according to the research.
Conversely, surveys in which smaller numbers of people have responded often contain more measured responses, which potential buyers may find more useful and influential.
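The mechanism the study describes can be illustrated with a toy simulation. This is not the paper's model - the distribution of opinions and the rate at which exaggeration grows with crowd size are assumptions chosen for illustration - but it shows how strategic extreme scoring can pull a large survey's displayed average away from what reviewers actually think, while a small survey stays close.

```python
import random
import statistics

def simulate_survey(n_reviews, trials=200, seed=0):
    """Toy model (illustrative assumptions, not the study's own model).

    Each reviewer holds a 'true' opinion on a 1-5 scale. With a
    probability that rises with the size of the crowd, they report an
    extreme score (1 or 5) instead, hoping to move the average.
    Returns the mean absolute gap between the displayed average and
    the reviewers' true average opinion, over many simulated surveys.
    """
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        true_scores, reported = [], []
        # Assumed exaggeration rate: grows linearly with crowd size.
        p_exaggerate = min(1.0, n_reviews / 200)
        for _ in range(n_reviews):
            score = min(5.0, max(1.0, rng.gauss(3.5, 0.8)))
            true_scores.append(score)
            if rng.random() < p_exaggerate:
                # Strategic reviewer: push toward the nearest extreme.
                reported.append(5.0 if score > 3.0 else 1.0)
            else:
                reported.append(score)
        gaps.append(abs(statistics.mean(reported) - statistics.mean(true_scores)))
    return statistics.mean(gaps)

small = simulate_survey(10)   # few reviewers, mostly honest scores
large = simulate_survey(500)  # big crowd, heavy exaggeration
```

Under these assumptions, the big survey's displayed average drifts noticeably further from true opinion than the small survey's, matching the study's finding that heavily-reviewed listings are less reliable.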
The research paper appears in the Journal of Public Economics.
The phenomenon also occurs in online feedback systems such as surveys posed by news media organisations.
Online surveys on topical news issues - such as how favourable people are towards Scottish independence - elicit more extreme responses from readers as they become more popular, making their results less accurate, the study found.
Online vendors who allow consumers to rate their purchases should consider capping the number of people who can respond, to provide a more helpful guide to customers, the researcher says.
Alternatively, star rating systems could be replaced with a simple like or dislike button, similar to that found on YouTube.
Online rating systems are increasingly influential in how people make everyday decisions, from where to go on holiday to what consumer items to buy. But these systems do not work perfectly. This research shows that when making decisions online, consumers may be better off using surveys in which a smaller number of people have contributed their opinions.