Helping students to write readable code - a peer approach using Adaptive Comparative Judgement (ACJ)

Team Members: Paul Anderson, Timothy Hospedales, Ross McKenzie, Anna Wood

School: Informatics

Abstract

Good computer code is not just code that “works”. It is code that is easy for other people to read and understand, easy to extend later, and sufficiently clear that there are no hiding places for obscure bugs. Good programmers understand this from experience, but it can be a hard concept for students to appreciate, and for markers to assess consistently. We plan to create a tool that allows students to compare the readability of code submitted by their peers. This will expose students to a range of styles and encourage them to think about alternative approaches.

By applying a technique known as “Adaptive Comparative Judgement” (ACJ), we will also be able to generate a ranking from these pairwise comparisons, which can then be used to inform the assessment process. We believe this has the potential to produce more consistent assessment with less effort, and to be particularly well suited to large courses. We will evaluate the approach by comparing its results with those of a traditional marking process, and with comparisons made by demonstrators and teaching assistants.
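To make the ranking step concrete: ACJ fits a statistical model to the accumulated pairwise judgements and infers a quality score for each submission. The following is a minimal sketch, assuming the Bradley-Terry model (the pairwise form of the Rasch model on which ACJ is usually based) fitted with the standard MM algorithm; the submission names and judgement data are purely illustrative and are not taken from the project.

    # Minimal sketch: rank items from pairwise judgements under the
    # Bradley-Terry model, fitted with the MM (minorisation-maximisation)
    # algorithm. All names and data here are illustrative.
    from collections import defaultdict

    def bradley_terry(judgements, n_iter=200):
        """Estimate a quality score per item from (winner, loser) pairs."""
        wins = defaultdict(int)    # number of comparisons each item won
        pairs = defaultdict(int)   # number of comparisons per unordered pair
        items = set()
        for winner, loser in judgements:
            wins[winner] += 1
            pairs[frozenset((winner, loser))] += 1
            items.update((winner, loser))

        score = {item: 1.0 for item in items}
        for _ in range(n_iter):
            new = {}
            for i in items:
                # Sum over every opponent j that item i was compared against.
                denom = sum(
                    pairs[frozenset((i, j))] / (score[i] + score[j])
                    for j in items
                    if j != i and frozenset((i, j)) in pairs
                )
                new[i] = wins[i] / denom if denom > 0 else score[i]
            total = sum(new.values())
            # Normalise so scores keep a constant mean across iterations.
            score = {i: s * len(items) / total for i, s in new.items()}
        return score

    # Each tuple records which of two submissions a student judged
    # more readable (winner first).
    judgements = [("s1", "s2"), ("s1", "s3"), ("s2", "s3"),
                  ("s1", "s2"), ("s3", "s2")]
    ranking = sorted(bradley_terry(judgements).items(),
                     key=lambda kv: kv[1], reverse=True)
    print(ranking)   # highest-scoring (most readable) submission first

The “adaptive” part of ACJ then uses the current score estimates to choose which pair each judge sees next, concentrating comparisons where the ranking is least certain.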

We will make this tool directly available to other courses in Informatics, but we also expect the technique to be applicable to other disciplines, particularly for large courses and for assessments that involve an element of subjective judgement.

Final project report

Download the final project report (PDF)

Other project outcomes