Is the algorithm working for us?

Algorithms, qualifications and fairness

14 June 2021

By Roger Taylor

2-minute read

In 2020, Covid closed 90% of the world’s schools. In the long list of harms caused by the pandemic, the disruption to the education of a generation will be felt for a long time. One acute aspect of this was the difficulty countries faced in administering examinations and deciding which children were eligible for higher education and other opportunities.

Some countries moved exams online, some delayed them, and others held them in socially distanced settings. Governments in the UK and the Republic of Ireland took an unusual option – in some eyes the most extreme [1] – of using other information sources, teacher-assessed grades and statistical forecasts, to predict what young people would have achieved had they sat an exam. The plan did not work.

Ofqual’s algorithm prompted a huge public backlash. In a new report, commissioned by CPP and to be debated at an event on Wednesday, 16 June, Ofqual’s former chair, Roger Taylor, explains that the algorithm was not biased and that it did what it was asked to do. What it was asked to do was not fair.

This paper sets out some personal reflections on the causes of the problems in 2020 together with some tentative views about how we rebuild after the pandemic.

Roger Taylor notes that the growth of AI and algorithms in recruitment systems poses a challenge to the traditional role of qualifications: to incentivise high-quality learning and enable social mobility. The qualifications system needs to adapt to the rise of data-driven recruitment technologies, and to ensure qualifications remain fit to support the needs of citizens and employers in the coming age of AI.

CPP is looking to build on these lessons from 2020, and on our own research into regional attainment gaps across the country, to set out policy recommendations for an education system that allows every young person to fulfil their potential.