Similarity scoring should be a secondary consideration for online assessment. Much more important factors, from my point of view, are ease of marking for academics; access to quality feedback for students; and innovative authentic assessment workflows.
Turnitin is close to monopolising the market for similarity scoring of student papers, but many assessment platforms already use Turnitin and Urkund as plugin services to provide similarity checking.
Where should we be focusing our effort at UoP?
As an institution, one of our strengths lies in quiz/question-based assessments. This is particularly the case in the Science and Technology faculties. We have a mature, sophisticated platform in Moodle to deliver these types of assessments and a deep level of staff expertise across the organisation, which has developed further throughout the pandemic.
The risk factors for UoP include the need to increase capacity for online exams (or to diversify some of our assessment types onto an external platform at peak periods) and the ability to innovate in terms of essay/file-based assessments.
From what I can see, Turnitin has stagnated in terms of assessment innovation in recent years and has not yet improved service reliability at key assessment periods by migrating its platforms to a service like AWS. This has been promised repeatedly but not yet delivered.
This may be one reason why we saw growth in Moodle assignment and quiz usage during the pandemic rather than a big increase in Turnitin usage: staff trusted the reliability of the Moodle service and the flexibility of its functionality.
So where could we focus our effort to improve the assessment tools for educators and students to gain the most benefits?
Innovative assessment workflows
Posing a long-form question to a student and easily marking the finished product should be a simple process – and it is on platforms such as Turnitin. However, we are increasingly adapting our assessments to be more authentic: assessments that more closely match how students will operate in the workplace. This often requires more sophisticated workflows and mechanisms, which should still be straightforward for academics to engage with and make sense of if they are to be successful.
Traditional paper-based exams (potentially bring your own device)
During the pandemic staff were forced to transition away from paper-based exams. Many exams were instead delivered as coursework or window assignments (e.g. a 2hr assignment within a 24hr window) or as question-based quiz exams. When exam halls are available again, staff may revert to previous paper-based solutions. After all, we know how these work, and paper doesn’t need charging or a stable wifi connection. However, we can harness this forward momentum with a platform dedicated to supporting timed essay assignments on students’ own devices or University machines. Several platforms offer functionality for students to download assignments at the start of an exam, with no need for an internet connection until it’s time to submit at the end. This could represent a robust, safe exam experience that more closely matches how students study today. Who handwrites for three hours any more? I’d be willing to bet most students don’t.
There are challenges with BYOD (bring your own device), particularly around charging and ensuring student machines are reliable. Many of these can be solved with a small stock of fully charged devices that can be swapped out to students when needed. Chromebooks are ideal online exam devices for exactly this reason: long battery life and simple configuration.
Access to quality feedback
Workflows such as “feedback before grades” can help students better engage with their feedback, but better access to feedback for students in a variety of places is also key.
Services that offer a holistic view of assessment feedback, or the ability to extract these comments via API so we can build our own views, are increasingly valuable. This functionality will enable key staff such as personal tutors or learning support tutors to view student feedback as a whole (rather than in silos) to spot key areas to help students improve their academic work.
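As a rough illustration of the kind of holistic view this could enable, the sketch below groups feedback comments by student across modules. It assumes records have already been fetched from a platform's export API; the field names (`student_id`, `module`, `comment`) are entirely hypothetical and do not correspond to any real platform's schema.

```python
from collections import defaultdict

def aggregate_feedback(records):
    """Group feedback comments by student across modules.

    `records` is a list of dicts in the shape a feedback-export API
    might return. The field names are illustrative assumptions, not
    any real platform's schema.
    """
    by_student = defaultdict(list)
    for rec in records:
        by_student[rec["student_id"]].append(
            {"module": rec["module"], "comment": rec["comment"]}
        )
    return dict(by_student)

# Hypothetical sample data standing in for an API response.
sample = [
    {"student_id": "up123", "module": "M101",
     "comment": "Cite more primary sources."},
    {"student_id": "up123", "module": "M205",
     "comment": "Referencing style is inconsistent."},
    {"student_id": "up456", "module": "M101",
     "comment": "Strong structure and argument."},
]

view = aggregate_feedback(sample)
# view["up123"] now holds feedback from both modules in one place,
# the cross-module view a personal tutor would want.
```

A personal tutor looking at `view["up123"]` would see referencing flagged in two different modules, exactly the kind of recurring theme that stays hidden when feedback sits in per-assignment silos.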
To return to where I started this post: providing similarity checking is an important part of modern assessment – but it is a problem that has already been solved, multiple times.
If we make assessment more authentic, more flexible and more collaborative, there will be less need for plagiarism detection, because students will be demonstrating more of the attributes we want them to leave University with. I accept this is perhaps an overly idealistic viewpoint, given the number of students we assess each year, but that is all the more reason to explore flexible assessment solutions that can make the lives of academics and students a bit easier.