In this episode of When Technology Goes Wrong, host Paddy Dhanda is joined by philosopher of technology Tom Chatfield to revisit one of the UK’s most controversial uses of AI: the 2020 A-Level grading algorithm.
Designed to standardise student grades after exams were cancelled during the pandemic, the system quickly faced a backlash over its biased outputs and lack of transparency. Together, Paddy and Tom unpack what went wrong, and what it means for how we deploy AI in high-stakes decisions.
What you'll learn:
How the A-Level grading algorithm worked — and failed
The role of human oversight in AI systems
Why fairness isn’t just about data — it’s also about perception
The societal risks of deploying technology without accountability
Lessons for schools, businesses, and public institutions
Guest spotlight:
Tom Chatfield is a philosopher of technology and author of Wise Animals, which explores how digital systems shape the way we think, decide, and live.
Memorable quote:
“Fairness is not just about a statistically sound result — it's about perception and human judgment.” – Tom Chatfield
Resources:
Wise Animals by Tom Chatfield
🎧 Listen now on Substack, Spotify, Apple Podcasts, or your platform of choice.
If the episode resonates, please subscribe, rate, and share.