On 22 February I took part in a roundtable debate on the topic “AI and Higher Education: Is it time to rethink teaching and assessment?”. The event was organised and facilitated by Graide, a UK-based EdTech company that uses AI to provide improved feedback in STEM subjects. (I dislike the term ‘artificial intelligence’ in this context, but I think I am fighting a losing battle here. In the interests of clarity, I’ll use the term AI in this blog post.)
Given the recent furore around generative AI, and its ability to create human-like outputs, Graide thought it would be timely to bring together a variety of voices – senior managers, academics, developers, students – to discuss the potential impact of this new technology on higher education. I was joined on the panel by Bradley Cable (student at Birmingham University); Alison Davenport (Professor of Corrosion Science at Birmingham University); Ian Dunn (Provost of Coventry University); Manjinder Kainth (CEO of Graide); Tom Moule (Senior AI Specialist at Jisc); and Luis Ponce Cuspinera (Director of Teaching and Learning at Sussex University).
It was fascinating to hear the range of opinions held by the panel members and by the 400+ people who attended the event (and who could interact via polls and via chat). If you are interested in my opinion of the technology then you might want to watch a recording of the debate; alternatively, in the paragraphs below, I’ll attempt to summarise my feelings about Bing, ChatGPT, and similar programs.
* * *
It is easy to see why there should be fears about this technology, particularly around assessment: students might pass off AI-generated content as their own. Critics of the technology have numerous other, entirely valid, concerns: the models might produce biased outputs (after all, they have been trained on the internet!); companies will presumably start to charge for access to AI, which raises questions of equity and digital poverty; the output of these models is often factually incorrect; and so on and so on.
But this technology also has clear potential to help students learn more deeply and lecturers teach more effectively.
I believe that **if** we embrace this technology, understand it, and use it wisely, we might be able to provide personalised learning for students; design learning experiences that suit a student’s capabilities and preferences; and provide continuous assessment and feedback to enable students themselves to identify areas where they need to improve. The potential is there to provide at scale the sort of education that was once reserved for the elite.
Note the emboldened if in the paragraph above. To obtain the outcome we desire we need to embrace and explore this technology. We need to understand that the output of large language models relies on statistical relationships between tokens; it does not produce meaning – only humans generate meaning. And we need to use this technology wisely and ethically. It is not clear at this point whether these conditions will be met. Instead, some people seem to want to shut down the technology or at least pretend that it will have no impact on them.
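To make the point about statistical relationships concrete, here is a deliberately crude sketch in Python (my own illustration, not how any production system is built): a toy bigram model that “writes” by sampling each next word purely from counted co-occurrences. Real LLMs use neural networks over subword tokens rather than word counts, but the core mechanism is the same kind of next-token prediction: statistics, not meaning.

```python
import random
from collections import defaultdict, Counter

# A toy corpus; real models are trained on vast swathes of the internet.
corpus = "students learn deeply when students learn together".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    words, weights = zip(*follows[word].items())
    return random.choices(words, weights=weights)[0]

# Generate text one statistically likely token at a time.
word = "students"
output = [word]
for _ in range(4):
    if not follows[word]:  # no observed continuation: stop
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Run it a few times and you will get fluent-looking but sometimes nonsensical strings. Nothing in the program understands what it is saying, which is precisely the point.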
I have heard numerous academics respond to this technology by demanding a return to in-person, handwritten exams. (Would it not be better to rethink and redesign assessment, with this new technology in mind?) I have even heard some lecturers call for a complete ban on this technology in education. (Is that possible? Even if it were, would it be fair to shield students from tools they will have to use when they enter the workforce?)
* * *
Fear of new technology dates back millennia. Plato, in the Phaedrus, a work composed about 370 BCE, has Socrates argue against the use of writing:
“It will implant forgetfulness in their [the readers’] souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”
Ironically, we only know about Plato’s argument against writing because it was written down.
More recently, some critics argued that the introduction of calculators would impair students’ mathematical ability. (The research is clear: children’s maths skills are not harmed by using calculators – so long as the devices are introduced into the curriculum in an integrated way.) Even more recently, some people argued that spellcheckers would impair students’ ability to spell correctly. (It seems the reverse might be the case: students get immediate feedback on spelling errors, and that feedback appears to improve their spelling.)
Perhaps it is a natural human response to fear any new technology. And in the case of generative AI there are legitimate reasons for us to be fearful – or at least to be wary of adopting the technology.
But the technology is not going to go away. Indeed, it will almost certainly improve and become more powerful. I believe that if we are thoughtful in how we introduce AI into the curriculum; if we focus on how AI can support people to achieve their goals rather than replace them; if we produce a generation of students who use the technology effectively, ethically, and safely – well, we could transform education for the better.
Image credit: Photo by Stable Diffusion 2.1