An Australian university is fighting the risks of AI in exams by reviving an ancient tradition

The viva voce (Latin for "with the living voice") originally emerged as a teaching method in medieval European universities. In modern educational contexts, however, it has transformed into an examination process, especially in graduate studies, where it has become a highly regarded way to confirm that doctoral students have produced new knowledge. Yet viva voce examinations are typically reserved for occasions such as doctoral defenses precisely because of their limitations: oral examinations are not very scalable and risk placing introverted and international students at a disadvantage.

However, in a world where virtual examinations and the widespread availability of generative AI make it next to impossible for written assessments to confirm whether students are learning anything at all, some universities are reconsidering the place of oral assessments in their educational toolkit. One of them is the University of South Australia (UniSA), which has been experimenting with replacing written final exams with oral assessments in some of its science degrees.

Dr Chris Della Vedova, a senior lecturer at UniSA, promoted this initiative after noticing that essays and multiple-choice exams had become ineffective means of evaluation during the pandemic. As an alternative, his team developed a 20-minute conversational exam format in which examiners draw questions from a pool covering the course material. Once students answer, examiners can follow up with further questions designed to make them expand on their answers or place them in the context of the entire course. Since implementing the viva voce system in 2022, UniSA claims there have been no academic integrity breaches in its final examinations.

Not everyone agrees with UniSA's strategy. Toby Walsh, professor of artificial intelligence at the University of New South Wales, has pointed out that scaling oral examinations is impractical, especially in courses with hundreds of enrolled students and too few staff to conduct the exams. Oral examinations also make it difficult to compare students objectively, something he claims a written assessment allows. For now, Walsh is turning to the open-book written examination, another evaluation tool that has proven its worth in an era in which educational institutions at all levels scramble to catch up with, and if possible get ahead of, the breakneck pace of generative AI.