Adventures in Technology Enhanced Learning @ UoP

Category: Future Technologies

Guest Blogger: Tom Langston – How ChatGPT Revolutionized My Writing Process – The AI version

(Co-writer: ChatGPT)

This version was generated after feeding in my attempt at writing this blog post and asking ChatGPT to ‘shorten this text into a blog post’.

I recently discovered the incredible potential of ChatGPT, and it completely revolutionized the way I approached my writing. As a student pursuing a Professional Doctorate in Education, I often struggled with crafting cohesive and focused assignments. My writing style was scattered and verbose, making it difficult for readers to follow my train of thought. Enter ChatGPT, a powerful AI language model that became my secret weapon in developing a well-structured research proposal.

Initially, I sought human help from my project supervisor, Stephen Webb, who pointed out that my writing relied too heavily on vague terms like “this” and “it.” While I understood the meaning behind my words, Stephen reminded me that readers might not share the same clarity. Armed with his feedback, I set out to improve my proposal.

When I completed the next draft, I thought I was finished. However, the word count was far too high, so I turned to ChatGPT for assistance in condensing my writing. It did the job, but the result lacked my personal touch and voice as a writer. I realized I would have to manually trim down the content to achieve the desired outcome.

During this editing process, I discovered the true power of ChatGPT. I started using it to check sentences that I struggled to articulate clearly. When I asked for grammar assistance, ChatGPT not only provided corrections but also suggested ways to enhance sentence structure and improve readability. This iterative process allowed me to refine my writing, discarding extraneous thoughts and rearranging sentences for better flow. It became evident that if a particular text no longer fit within the new leaner version, despite my attachment to it, it had to be eliminated. This experience epitomized the essence of editing.

As I continued to utilize ChatGPT, I encountered questions of ethics. While it proved immensely useful in reducing my word count, I couldn’t help but feel uneasy about presenting the AI-generated content as my own. The ethical implications of using AI in writing remain a divisive topic, with differing perspectives on its appropriateness and boundaries. To navigate this issue, some suggest referencing AI tools in an appendix, detailing their specific use and purpose in the writing process. However, this raises questions about the extent to which we should acknowledge AI contributions and how that compares to other forms of assistance, such as spell checkers or language tools like Grammarly.

One important consideration is whether AI tools are always correct. While they excel at constructing language, they may strip away the author’s voice and inadvertently introduce errors. In my experience, Grammarly sometimes made corrections to the text generated by ChatGPT, highlighting the need for human judgment in refining the output.

In conclusion, ChatGPT became a game-changer for me, transforming my writing process in ways I had never imagined. I believe it should be embraced as a valuable resource for students to tighten their drafts before seeking extensive academic support. By incorporating AI tools, we can alleviate the burden on educators, who can then focus on reviewing more polished submissions requiring minimal intervention. However, it is crucial to strike a balance and ensure that AI enhances our own voices rather than overshadowing them. AI is undoubtedly here to stay, and by harnessing its power thoughtfully, we can elevate our writing while preserving our unique identities as authors.

Thoughts from the Proofreader:

I was asked to proofread Tom’s blog post here and have a look at ChatGPT’s version too. Comparing the two, I definitely found Tom’s writing much more personable – I could imagine him speaking as I read through it. ChatGPT’s version presented the information efficiently and effectively, and I found nothing to correct. However, in its response to the prompt, it had taken away everything that made Tom sound like Tom!

This experience also really made me realise how many misconceptions I have about AI tools. I’d assumed that ChatGPT would have a very grammatical approach to its interpretation of language – rather like I’ve been told I have! However, when Tom asked it about the difference between ‘that allowed’ and ‘allowing’, ChatGPT talked with authority about implications and potential meanings. This answer was a long way from my interpretation, which attributed the difference to the grammar of relative clauses (X refers to one thing, Y refers to another). As Tom demonstrated with his irony example, it’s worth being cautious with how far we trust its responses. And I think we can be confident that human input will still be needed for a few years (or at least months) yet. 

Credit Image: Photo by Bram Naus on Unsplash

Guest Blogger: Tom Langston – Last Night ChatGPT Saved My Life…

The classic line from Indeep’s 1982 post-disco hit “Last Night a DJ Saved My Life” serves as the basis of my title. ChatGPT did – not literally, but figuratively – save my life.

I am currently studying for the university’s Professional Doctorate in Education and, since February, have been completing my final taught assignment entitled “Research Proposal”.

It takes me a while to write, as I don’t find it easy, and I am not that good at it. My approach to writing is best described as little and often, frequently getting distracted and losing focus on the task at hand. If you have ever seen Dave Gorman’s “Googlewhack Adventure”, you will understand my problem: his book and stage show outlined how he was meant to write a novel and ended up writing and performing about his experiences of Googlewhacks (Googlewhacking?) instead. He got distracted by less important but much more fun endeavours.

The other problem is that I don’t get to the point. I write a verbose mess of unconnected thoughts. So with that in mind, I am going to explain how the rise of the machines (we are not far away from ChatGPT working out it wants to be Skynet) assisted me in writing my “research proposal” assignment. 

First Drafts

To start with, I had human help. Stephen Webb, as my project supervisor, read a draft I had written. He showed that a lot of my writing relied on small words to, as he put it, “do the heavy lifting”. I used ‘this…’ and ‘it…’ frequently because I understood what I was trying to say – he pointed out that the reader might not. The only reason he could even start to understand my work was because he knew me and the context of my subject.

From his extensive feedback, I redrafted, edited and tried to improve my proposal. 

After completing the next draft, I thought I was done. However, I was well over the word count, so I put sections into ChatGPT and told it to re-write them within my word limit. It only bloomin’ did it! The problem was that the result was very sterile; it lost all sense of who I am as a writer. This meant I was going to have to get the word count down manually.

After another version, I asked Stephen for some clarification on some of his earlier feedback. His reply was:

“In the section you sent through, there are still individual sentences that are hard to parse. The initial word of a sentence or clause primes the reader to think in a certain way; if the sentence or clause doesn’t deliver, then confusion arises. You really should pass this through a grammar checker, Tom.”

Not being a confident writer, I already knew that my writing needed work. What this feedback demonstrated to me was that I lacked cohesion and structure. My train of thought constantly jumping around the entire document is a by-product, I guess, of writing in such short unfocussed bursts. 

I wrote a heading titled “Introduction” but did not actually introduce anything. As Stephen had pointed out, I had poorly communicated my narrative. What this meant was that readers were getting lost in my disconnected thoughts. I could now see that certain sections felt shoe-horned in because I liked what I had written. Realistically, those “gems” made no sense to the reader as they were not connected with the text surrounding them. 

I tried to use ChatGPT to see if it could make sense of it, but there was too much there for it to tell me how to rearrange it. The system did not accept the entirety of the text and could not provide clarity to the sections I had written. ChatGPT could not offer clear guidance for how best to rework and rearrange it.

A Quick Aside

At this point, on re-reading the above section, I think it nicely illustrates the problem I was talking about. I have said the same thing here in three different ways. 

  1. I tried to use ChatGPT to see if it could make sense of it, but there was too much there for it to tell me how to rearrange it. 
  2. The system did not accept the entirety of the text and could not provide clarity to the sections I had written. 
  3. ChatGPT could not offer clear guidance for how best to rework and rearrange it.

I only need to use one. In the past, Stephen had raised this issue with me, but it is a hard thing to change. What I am going to discuss next is my recent experiences of using ChatGPT as a tool to develop the parts in my writing I would not have picked up on myself. Heck, as I go back through this blog, I’m sure I’ll find more. I just won’t mention anything I’ve changed, so you’ll never know… 

Editing with ChatGPT

As I worked my way through the editing process, I started to use ChatGPT to check sentences I wanted to keep but was struggling to make coherent. They were either too long, or the idea I was trying to convey was not coming through clearly enough. 

An example of this would be when I asked ChatGPT to “check the grammar”:

My original sentence:

My situation should be made clear to the participants, allowing them to have fully informed consent, but to also highlight that if they perceive issues in the current hybrid environments I am hoping to act on their concerns and not silence their voices.

ChatGPT’s reply:

The grammar of the provided text is correct. However, there are a few suggestions for improving clarity and readability:

“My situation should be made clear to the participants, enabling them to provide fully informed consent. Moreover, it is important to emphasise that if they perceive any issues in the current hybrid environments, I am committed to addressing their concerns and not silencing their voices.”

In this revised version, I made a few adjustments to enhance the sentence structure and clarify the message.

It was this repeated process that helped me (despite, at times, slowing me down), as I kept re-reading the submission, trying to tighten up my writing and finding thoughts that were slightly out of place so I could move them to where they belonged. It became a good rule of thumb that, if the text I wrote did not fit in the new, leaner version, despite how much I wanted to keep it, it was gone. (Well done Tom – you just discovered the true meaning of editing!)

Below is an example of how I started to use ChatGPT to my advantage so I could improve my writing. 

My question to ChatGPT:

Should it be “that allowed” or “allowing” in the following text – 

Before the pandemic only the most confident teaching staff employed interactive tools (Nearpod, Mentimeter, etc.) within a lecture space that allowed students the ability to creatively engage with the material (Tan et al., 2020).

ChatGPT’s reply:

Both “that allowed” and “allowing” are grammatically correct in the given sentence. However, the choice between them depends on the intended meaning and emphasis.

“That allowed”

Using “that allowed” suggests that the interactive tools were specifically designed or configured in a way that granted students the ability to creatively engage with the material. It implies that the tools had a built-in functionality or feature enabling such engagement.

“Allowing”

Using “allowing” suggests that the interactive tools, in a general sense, provided the opportunity or capability for students to creatively engage with the material. It implies that the tools had the potential or capacity to facilitate such engagement.

Both options are valid, but if you want to emphasize that the interactive tools had a specific functionality designed to enable creative engagement, “that allowed” would be more suitable. If you want to convey a more general sense of opportunity or capability, “allowing” would be a better choice.

As you can see, ChatGPT helped to explain how the slight difference in sentence structure changed the meaning of what I was writing. I was able to consider what it was I was trying to say and use the most appropriate version.

Through all my previous doctoral assignments, I had a different tutor who helped me with my drafts. They told me they liked my writing style, which gave me the impression I was good at it. I can’t say what they specifically liked about my writing. I can make an educated guess that, because it was a level 8 assessment, my mistakes were forgiven. The marker was able to interpret the little words “that are doing the heavy lifting” (as Stephen had pointed out) more easily than your ‘average’ reader. Stephen helped me understand that it is one thing to have an interesting voice running through your text but quite another to actually be good at writing.

The ethics of using AI

When I got ChatGPT to reduce my word count, I spoke with Stephen about the ethics of it and (ignoring the sanitisation of the output) it felt like a great way for a writer to cut a draft down. However, it felt wrong to take what ChatGPT had created as my own.

There is going to be an ethical divide between those who see using any form of AI as wrong and those who see it as a tool for improvement. The problem (as always) lies in the grey area and the boundaries of where people choose to deploy it – for example, how far the tool is shaping their work beyond what would have been possible by them alone. Some will use it, while knowing it might be unethical, because of other commitments (work, family, etc.). This is a foreseeable eventuality, much like those who copied other people’s work or paid essay mills in the past. But perhaps AI may feel slightly more ethical? As I am about to discuss, maybe a strong referencing code is what is required. But people (I was going to put students, but felt that unfair, as we all have a personal line we move and adjust depending on the subject) will always push and flex the boundaries of fairness.

Referencing AI

In a recent ALT mailing list post, the question was asked about referencing when AI has been used to support work. The reply pointed to Boston University’s Faculty of Computing and Data Sciences guidance “Using generative AI in coursework”. The post highlighted this text:

“When using AI tools on assignments, add an appendix showing

(a) the entire exchange, highlighting the most relevant sections;

(b) a description of precisely which AI tools were used (e.g. ChatGPT private subscription version or DALL-E free version),

(c) an explanation of how the AI tools were used (e.g. to generate ideas, turns of phrase, elements of text, long stretches of text, lines of argument, pieces of evidence, maps of conceptual territory, illustrations of key concepts, etc.);

(d) an account of why AI tools were used (e.g. to save time, to surmount writer’s block, to stimulate thinking, to handle mounting stress, to clarify prose, to translate text, to experiment for fun, etc.).”

This almost feels as if there are too many barriers to use for a subject (or level) where you may be writing assignments of 6,000+ words, especially if you have to reference every section of text AI helped with. The follow-up question, then, is: do you have to highlight every time spellcheck made a correction? Do you have to disclose that you used a service like Grammarly to make minor amendments? That is, of course, hyperbole. I think that if you get AI to generate starting paragraphs or change substantial amounts of your work, then yes, this should be explained in an appendix. For re-writing a sentence and making grammatical choices, then no. That raises another question: how many sentences constitute a substantive change?

Are AI tools always correct?

The final point relates to whether AI tools are always correct. Now, I know these tools are better than me at the construction of language. I have always been better at face-to-face conversations as a form of communication. During a conversation I can see when people don’t understand me, so I adapt how and what I say accordingly. Writing does not afford the writer or reader the same luxury. So why do I ask if they are correct? As I have already mentioned, ChatGPT sterilised my voice. When asked to reduce the word count, it did what it was supposed to do, but it stripped out the humanity of the author (unsurprising, given that it is a computer).

Another example (inside a paragraph that starts “The final point”, I do get the irony) is that Grammarly made corrections to the generated text from ChatGPT. 

[Screenshot: on the left, ChatGPT’s reply stating ‘Both “that allowed” and “allowing” are grammatically correct in the given sentence’; on the right, a Grammarly suggestion box headed ‘Formality’, flagging ‘to engage with the material creatively’ because the to-infinitive ‘to engage’ has been split by the modifier ‘creatively’, and noting that avoiding split infinitives can help your writing sound more formal.]

Another “another example” of a final point is: I put the above paragraph into ChatGPT to ask if I had used irony correctly. This is what it said:

Yes, the use of the word “irony” in this context is correct. Irony is a literary device that involves a contrast between expectations and reality. In your statement, you are highlighting the irony in two situations:

The first example of irony is that despite recognizing that language generation tools like ChatGPT are better than you at constructing language, you also acknowledge that you excel at face-to-face conversations where you can adapt and clarify your communication based on people’s reactions. The irony lies in the contrast between the effectiveness of these tools in writing and your personal strength in face-to-face communication.

The second example of irony is that Grammarly, a tool designed to enhance writing and make corrections, ends up making corrections to the text generated by ChatGPT. Here, the irony arises from the fact that a tool meant to improve the quality of the text is itself making changes to the output of another tool that is supposed to assist in generating text.

Overall, both examples demonstrate a contrast between expectations and reality, highlighting the ironic situations in your statement.

The system had not understood how I had used irony in the sentence but said that the overarching point I was making was ironic.

[Meme image: a person looking down towards someone off-camera. Top caption: “I am still none the wiser if I used irony correctly in the parenthesis when I keep adding examples to a final point.” Bottom caption: “And at this point I’m too afraid to ask.”]

Conclusion

In conclusion, ChatGPT ‘saved my life’ by allowing me to interrogate my writing in a way I have never been able to before. We should promote ChatGPT to students as a resource that can help tighten up their drafts before needing academic support. It should be utilised to alleviate the burden on academics, who are often asked to critique early drafts of work. Academics should hopefully then only see much more polished versions of submissions that require less input from them. 

As a final example, I didn’t like my last sentence. ChatGPT gave me another version:

It should be used to relieve the burden on academics who are frequently asked to critique early drafts. Ideally, academics would primarily review more polished submissions that require minimal input.

I didn’t like that version either. Maybe then I should just delete it? That is where the human side wins out over AI, your sense of self and how you want to sound when writing.

AI is here to stay (and take over the world, if the Terminator documentaries are anything to go by), but actually, that is alright (AI being here as a tool, not taking over the world). At levels 4 and 5, we need to find creative ways to incorporate AI tools into the assessment process (which may mean fewer essays). At levels 6, 7 and 8, it’s about using AI to help improve our own voice without losing it.

The ChatGPT re-write: How ChatGPT Revolutionized My Writing Process – The AI version.

Credit Image: Photo by rupixen.com on Unsplash

Image in the text created by T. Langston using imgflip.com

WiseFlow ePortfolio – Unlocking the power of WiseFlow: Transforming ePortfolio assessments

In the digital age, traditional paper-based portfolios have given way to ePortfolios, which offer a powerful way to showcase a student’s work and demonstrate their learning, progress, reflections and achievements over a period of time. ePortfolios are becoming increasingly popular in education as they offer several benefits to both students and academics.

For students, ePortfolios provide an adaptable platform to showcase their learning journey, including their best work and reflections on when it didn’t go quite to plan, and draw on evidence from a range of sources whether that be PDFs, images, videos, audio snippets or written text. This process helps students develop their metacognitive skills and self-awareness as learners over a period of time.  Academics, on the other hand, can use ePortfolios to assess students’ learning outcomes in a more comprehensive and authentic manner. In turn, this allows academics to gain insights into students’ thought processes, identify their strengths and weaknesses, and provide targeted feedback. Additionally, ePortfolios allow academics to track students’ progress and provide evidence of their achievements.

Using ePortfolios also builds several skills, including digital literacy, communication and critical thinking – all of which are vital in the modern workplace. Students have to select, curate, and present their work in a clear and engaging manner. They also have to reflect on their learning process and map this to learning outcomes. These skills are crucial for success in the modern workplace, where digital communication and collaboration are essential. With a background in teaching vocational courses for 12 years at Further Education level, I’ve seen first-hand the impact and outcomes of effective ePortfolio use for both students and academics. 

At Portsmouth University, we have struggled to find a solid ePortfolio solution. We currently use a popular open-source ePortfolio platform that allows students to create and share their digital portfolios. While the platform has several benefits, including flexibility, customizability, and integration with other systems, it also faces some challenges. One major issue is its user interface, which can be overwhelming and confusing for some users – particularly in the setup stage of having to import the portfolio into your own profile. This process often leads to a lot of technical issues and puts up an immediate barrier to entry for those not tech-savvy. Additionally, the learning curve for using the platform can be steep, and it may take some time for users to become familiar with all the features and functionalities. However, despite these challenges, academics and students value the use of the ePortfolio system on offer and the benefits this provides.  

We are currently coming towards the end of the first stage of a pilot with a new system: WiseFlow. This is a cloud-based digital end-to-end exam and assessment platform that supports the assessment and feedback lifecycle for students, assessors and administrators. It’s fair to say that staff feedback about the WiseFlow pilot has been overwhelmingly positive. As a core project team, we’ve had the pleasure of working with academic teams to support students with innovative assessments in WiseFlow, across a range of disciplines. This all links to our Digital Success Plan to (re)design robust assessments that meet the needs of the diverse student population within a blended and connected setting, and to incorporate a robust specialist end-to-end assessment platform. Our aims in the project were to make it easier for academics to design assessments, easier for students to find their assessments and feedback, and to reduce the manual work around assessments for academics and support staff. All of which WiseFlow seems to have been able to deliver.

Within the pilot, we wanted to really push the boundaries of WiseFlow – utilising a wide range of assessment types to test whether WiseFlow can become the go-to platform for assessments at Portsmouth University. One of the big challenges for us was to find an ePortfolio solution that is user-friendly and adaptable across a range of disciplines, as well as providing a versatile feedback loop where students could receive formative feedback on their work from assessors and develop their ideas prior to final submission. After we put this challenge to the team at WiseFlow, they came back with a solution.

[Image: block arrows showing the timeline of a flow ePortfolio for Online Course Developers, the eLearn Team, Students and Academics.]

Traditionally, a FlowMulti (just one of the many ‘flow types’ WiseFlow offers for assessment) would be used for open/closed book multiple-choice exams, where the participants fill out a provided multiple-choice test. However, the team at WiseFlow suggested we could repurpose this functionality as a bespoke ePortfolio solution.

Using a FlowMulti allowed us to replicate the layout and design of current ePortfolios, as well as adapt the setup to truly take ePortfolios to the next level. To create the feedback loop, we gave assessors early access to the work, enabled early release of feedback to students, and allowed students to submit an unlimited number of times before the deadline. The portfolios could be easily updated year-on-year, were inviting for students to engage with, and could be authored by multiple academics at the same time. This seemed like the perfect solution.

After testing, adapting and re-testing, we felt this solution offered a totally new level of ePortfolio compared to our current offering. The ability to re-purpose traditional multiple-choice questions allowed us to push the boundaries of assessment like never before; the only limitation is our own creativity in adapting and repurposing these. We put together a showcase of a PGCert portfolio to show our academics what we had found; they immediately fell in love with the platform, and we started working together to develop a portfolio to run within the pilot.

“As a course team, we are incredibly excited about the flexibility that the Wiseflow ePortfolio has to offer. Working with the project team we have been able to design a summative assessment vehicle which is both intuitive for learners and versatile enough to encompass a broad range of tools which enable the course Learning Outcomes to be demonstrated in an engaging and meaningful way.”  Dr Joanne Brindley, Academic Practice Lead & Senior Lecturer in Higher Education.

[Screenshots: three pages as the participant sees them – a page asking them to reflect on their skills, a page with an empty drop box for uploading their activity, and a page asking them to type up a reflective statement on using technology to support learning.]

We are now in the “participation” phase of two ePortfolios – one for the Research Informed Teaching module and one for the new Level 7 Teach Well: Principles to Practice professional development module. We have had great experiences re-designing pre-existing portfolios to really push the boundaries of what is possible in WiseFlow. We’ve added interactive elements by turning traditional questions and approaches on their head – such as using a histogram for reflection, allowing students to visually reflect on skillsets pre- and post-observation. We’ve given students freedom of choice with assessment by integrating a voice recorder into the portfolio and by integrating existing platforms into the WiseFlow portfolio. Really, the only limitation is our own imagination.

“We teach the PG Cert Higher Education so our students are staff. The platform is incredibly user-friendly for both staff and students. We used it for ePortfolio as the last platform created lots of complaints, whereas this platform has led to lots of compliments.  The staff members spoke highly of the platform and I believe, many have asked to be part of the pilot next year due to their positive experience.”  Tom Lowe, Senior Lecturer in Higher Education

There has been overwhelmingly positive feedback from academics and students regarding the usability and functionality of WiseFlow as an ePortfolio solution. Through word of mouth and first-hand experiences from early career academics, particularly those studying on the Research Informed Teaching module, the platform’s potential for enhancing their own teaching has become widely recognised. I remember being invited to one of Tom’s lectures to showcase the platform to his students who would be using it, and the response was overwhelming. Staff were excited to use it as students and saw the immediate potential for their own teaching. It is always a good sign of a new innovation when there is an immediate benefit to both staff and students that can be applied instantly in the classroom. Essentially, we now have a waiting list of academics wanting to work with us to develop ePortfolios in WiseFlow – with no advertising at all, purely from those who have used it as students. We believe that when this is advertised, we will see a huge influx of academics wanting to use it. We have also spoken to other universities in WiseFlow user groups, who are actively keen to explore this and want to learn about our innovative approach. The potential of this solution is game-changing, not just for us, but for other Higher Education institutions.

However, using an innovative approach and essentially turning a quiz assignment on its head does not come without some drawbacks that need to be considered before academics embark on an ePortfolio solution within WiseFlow. There is currently a 12-file limit, set at 10MB per file, when students upload files into the portfolio. Although it is great that students can do this, it does not lend itself to modern file sizes or to some of our subject areas (for example, our Creative and Cultural Industries Faculty, where students would regularly upload large .psd files, CAD files, HD video, and high-quality audio). In our initial pilot we haven’t encountered this issue, but it’s worth considering whether this is the correct way to proceed with an assessment. The limit on the number of files is also a concern; for example, some students in our pilot have reached the 12-file upload limit. While there are workarounds, such as storing files in a Google Drive folder and sharing the link, or combining multiple files into one, these defeat the purpose of an ePortfolio as an all-encompassing system. Perhaps a better approach would be an overall upload limit with a defined combined file size. The final consideration is that once the ePortfolio is live, we cannot make changes. We’ve worked extensively with academics and our support teams to iron out any issues prior to release, but again, this is important for academics to understand. Careful planning and consideration must take place in the authoring phase of an assignment, which will then be rigorously checked prior to release – in the same way an exam would be. Despite these setbacks, we’re actively in discussions with WiseFlow about developing these areas and hope to make progress in the near future.

The future of ePortfolios in WiseFlow is exciting, and we can’t wait to see how they will continue to be developed across the University. The ability to adapt and transform ePortfolios will open up new doors for our students and academics to really develop the ways in which students can showcase their knowledge and understanding. We’re hoping for a successful run of ePortfolio use within our pilot and looking forward to developing new ideas as we move into the future.  

Until next time. Watch this space.

Chris.

The AI Podcast – How we did it!

(Co-writer: ChatGPT)

There’s no doubt Artificial Intelligence (AI) will have a huge impact on the future of education and assessment. I can’t think of a day over the past few months when some form of AI chat hasn’t started within the office, often causing a healthy level of debate. From the outrage of academics that students are using AI to generate degree-level work, to the supposed “arms race” among technology companies to be the first to detect AI, there’s no end to topics of conversation on the moral, technical, ethical and legal issues surrounding AI.

With the Tel Tales Podcasts due to re-launch in the coming months, we thought it would be good to put AI to the test.  Could we create an entire podcast episode from AI?  Would it matter that it was AI generated?  Is this even possible?

To start the podcast off, we gave ChatGPT an open-ended initial prompt as follows…

“Write 6 questions to ask ChatGPT in a podcast about the impact ChatGPT will have on education”

Within a matter of seconds, ChatGPT was processing through its algorithms and writing a response to our initial prompt.  In less than a minute, ChatGPT responded with the following questions…

  1. How can ChatGPT be integrated into educational institutions to enhance the learning experience for students?
  2. What are the potential benefits and drawbacks of using ChatGPT in education, and how can these be mitigated?
  3. In what ways can ChatGPT be used to personalize learning for students with different learning styles and abilities?
  4. How can ChatGPT be used to facilitate communication and collaboration among students and teachers in online learning environments?
  5. How might the use of ChatGPT impact the role of teachers in education, and what skills might educators need to develop to effectively work with this technology?
  6. Could you conclude and summarise the impact that ChatGPT will have on education?

Looking at the response that was given, the questions that ChatGPT generated are certainly a good starting point, although very surface-level.  One of the biggest critiques of ChatGPT is the lack of “nuance, critical-thinking skills or ethical decision-making ability that are essential for successful journalism” (Lock, 2022).  Arguably, the questions generated cover a range of topics related to the use of ChatGPT in education that could lead to interesting and informative discussions.  ChatGPT is impressive on a surface level, but the reliance on using a machine for interaction or to generate conversations could lead to a loss of genuine human connection. The ability to connect with others through meaningful conversation and interactions is a fundamental aspect of being human.  The concept of outsourcing this to a machine could have detrimental side effects on our well-being and social skillset.  We learnt the true value of human interaction during the COVID pandemic and the severe impact the lack of meaningful human interaction had on mental health (Okabe-Miyamoto, 2021).  Without delving too deep into the content and ethical issues around this, the process took seconds to complete, with a total lack of human interaction apart from a very basic prompt.  Quite a scary thought!

These responses were then fed back into ChatGPT one at a time with the prefix “Write a 200-word response for a podcast to the question”; the results can be found at the bottom of this blog post. Again, ChatGPT provided plausible-sounding answers to its own questions. However, it’s nothing we haven’t spoken about before, and nothing game-changing. Essentially, it’s just a series of ideas strung together from datasets. We see themes that come up in nearly every answer around the “personalisation of learning” for students and how we can use AI to “engage” students. The overemphasis on ‘buzzwords’ in the responses really highlights the lack of real input from a human – someone who is living and breathing this: an academic, a student, a head of school, a support team, a principal and so on. Can we blame ChatGPT for its response? Not really. It’s done a pretty impressive job of answering its own questions and has generated some plausible responses, some of which will be popular with teachers – such as the reduction of marking load to allow teachers to focus on 1:1 learning with their students. A glimmer of hope, perhaps, that ChatGPT acknowledges that a partnership between AI and teachers needs to be developed to unlock its full potential. Maybe the machines are not taking over, just yet!

“With the right approach and training, ChatGPT can help usher in a new era of personalized, inclusive, and effective education.”  
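We ran all of this by hand in the ChatGPT web interface, but the same two-step process – generate the questions, then feed each one back with the “Write a 200-word response…” prefix – could just as easily be scripted. Below is a minimal sketch using the OpenAI Python library; the model name, the ask() helper and the line-by-line parsing of the questions are our own illustrative assumptions, not what we actually did.

```python
# Minimal sketch of the two-step prompting we did by hand in the ChatGPT web UI.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set; the model
# name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single prompt and return the text of the reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: generate the interview questions.
questions_text = ask(
    "Write 6 questions to ask ChatGPT in a podcast about the impact "
    "ChatGPT will have on education"
)
# Rough parsing: keep each non-empty line of the numbered list as one question.
questions = [line.strip() for line in questions_text.split("\n") if line.strip()]

# Step 2: feed each question back, one at a time, with the same prefix we used.
answers = [
    ask(f'Write a 200-word response for a podcast to the question "{q}"')
    for q in questions
]

for question, answer in zip(questions, answers):
    print(question, answer, sep="\n", end="\n\n")
```

Scripted this way, a whole episode could be regenerated from a single starting prompt – which rather underlines the point about how little human input is involved.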

After gaining our content, we used Speechify to generate the audio. Speechify has a range of voice actors – everything from Snoop Dogg to Gwyneth Paltrow – and uses AI to produce a natural voice, sensing the tone and intonation of the text. Although the prospect of having Snoop Dogg (albeit a slightly static and stiff version of the OG himself) hosting our podcast would be amazing, we opted for British voices: ‘Michael’ to host the podcast and ‘Stephanie’ to be the guest. In recent news, David Guetta has spoken out about the use of AI in music, saying that “the future of music is within AI”. There’s no doubt that this process of bringing AI-generated text ‘to life’ takes the moral and ethical issues of AI writing and voice generation to a new level. Could it be the next big “deep fake”, where AI voices are used instead of the human equivalent? Or could AI voices become so indistinguishable from the real human nuance of speech that they blur the lines between reality and the matrix? Either way, the results were good, and we could now start to hear our podcast taking shape.
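We drove Speechify through its own interface, so the sketch below is not what we ran; it simply shows what scripting a rough equivalent of the text-to-speech step might look like using the open-source pyttsx3 library instead (the rate setting, file names and placeholder answers are assumptions for illustration – and it certainly won’t give you Snoop Dogg).

```python
# Hypothetical stand-in for the Speechify step: render each answer to a WAV file
# using pyttsx3 (offline text-to-speech). File names and settings are illustrative.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 170)  # slow the default speaking rate a little for podcast pacing

answers = ["Answer one…", "Answer two…"]  # in practice, the ChatGPT responses

for i, text in enumerate(answers, start=1):
    engine.save_to_file(text, f"segment_{i:02d}.wav")
    engine.runAndWait()  # process the queued job before moving on to the next segment
```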

After we had gathered our audio, we placed the audio files into Logic Pro X and used iZotope’s Ozone plugin for mixing and balancing. The software uses AI to intelligently listen to the audio and suggest recommended settings. We left everything with the settings that Ozone recommended for dynamics and equalisation, based on the audio files generated from Speechify. It seems the days of building relationships with artists in a recording studio to refine and craft a sound, as George Martin did, are gone, when all that is needed is for an AI to “listen” and suggest settings built on the foundations of those masters who came before. The final piece of the puzzle was to get some advertising for the podcast. We asked ChatGPT to “write a tweet for @telportsmouth to advertise a totally AI-generated podcast for TelTales including hashtags”, which has been used to advertise the podcast on our social media platforms. We also asked DALL-E 2 to generate some AI artwork, with the prompt “AI-generated podcast from Tel Tales at Portsmouth University”, which you’ve probably seen before getting to this blog post. Did you even notice it was AI-generated? Had we not told you, would you have known?

We loved creating this podcast.  In fact, it’s brought many more conversations into the office around the use of AI and how far we can (or should) push it.  Ultimately, ChatGPT is shaking the education sector to the core and making us re-evaluate our assessment methods – which is only ever a good thing!  ChatGPT is great at stringing a few sentences together to come up with a response.  However, is it not just generating content for content’s sake, with no real meaning?  Would you have even noticed this podcast was completely AI-generated, had we not told you?  Could we have made it more believable if we had “real-life” voice actors to narrate the script?  The answers ChatGPT gave for our podcast were plausible and spoke about relevant ideas and topics of discussion – but nothing more than that.  

With GPT-4 on the horizon, will we see a ‘bigger and better’ version of this AI wizardry? GPT-4 promises to be able to respond to both text and images. In theory, this multi-modal model should help the AI to understand the world we live in better and theorise more logical connections between datasets. Perhaps this is only the beginning of our AI journey in education, and it further supports our need as educators to re-evaluate our assessment methods. As we’ve said before, this can only be a good thing!

Have we just done ourselves out of a job?  Probably not.  It is worth remembering that none of this was possible without that initial prompt and a “human” in the driving seat.  Does the future belong to AI? Not just yet, but it’s certainly one interesting journey we’re on!

Listen to the AI podcast 

Chris Wood

Follow me on Twitter @ChrisWoodTeach

References

Lock, S. (2022). What is AI chatbot phenomenon ChatGPT and could it replace humans? The Guardian, 5 December 2022. [Online]. Available at: https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans [Accessed 14 March 2023].

Okabe-Miyamoto, K. and Lyubomirsky, S. (2021). Social Connection and Well-Being during COVID-19. World Happiness Report 2021, Chapter 6. [Online]. Available at: https://worldhappiness.report/ed/2021/social-connection-and-well-being-during-covid-19/ [Accessed 14 March 2023].

Artwork: AI-generated through Dall-e 2

Content: AI-generated through ChatGPT

Voice: AI-generated through Speechify

Mixing: AI-generated through iZotope Ozone

ChatGPT Responses

Question 1 Prompt:

Write a 200-word response for a podcast to the question “How can ChatGPT be integrated into educational institutions to enhance the learning experience for students?”

Question 1 Answer: 

ChatGPT, as a language model, has the potential to revolutionize the way students learn by providing personalized and interactive educational experiences. One way to integrate ChatGPT into educational institutions is through the use of chatbots. Chatbots powered by ChatGPT can provide students with instant feedback on their assignments, answer their questions, and provide additional resources that are tailored to their individual needs.

Moreover, ChatGPT can be integrated into learning management systems to create personalized learning paths for students. By analyzing a student’s learning history and performance data, ChatGPT can suggest customized learning resources and activities that are most relevant to the student’s interests and abilities. This can help students stay engaged and motivated as they learn.

Another way to integrate ChatGPT into educational institutions is by using it to create virtual learning assistants. These assistants can help students navigate complex topics, provide additional explanations or examples, and even engage in an interactive dialogue with students to promote deeper understanding and critical thinking.

Overall, the integration of ChatGPT into educational institutions has the potential to enhance the learning experience for students by providing personalized and interactive learning experiences that meet their individual needs and interests. By leveraging the power of AI, educational institutions can provide a more effective and engaging learning experience for students, preparing them for success in the modern world.

Question 2 Prompt:

Write a 200-word response for a podcast to the question “What are the potential benefits and drawbacks of using ChatGPT in education, and how can these be mitigated?”

Question 2 Answer: 

The potential benefits of using ChatGPT in education are numerous. ChatGPT can help personalize the learning experience for students, offer instant feedback and support, and assist with knowledge acquisition and retention. Furthermore, ChatGPT can assist in the creation of more engaging and interactive learning materials, which can improve student motivation and learning outcomes. However, there are also potential drawbacks to using ChatGPT in education. One concern is that the technology could lead to overreliance on AI, causing students to disengage from the learning process and rely too heavily on ChatGPT to provide answers instead of engaging in critical thinking and problem-solving. Additionally, there is the possibility of biased or inaccurate responses from ChatGPT, which could negatively impact learning outcomes. To mitigate these potential drawbacks, it is important to ensure that ChatGPT is used in conjunction with traditional teaching methods to encourage critical thinking and promote deeper learning. It is also important to ensure that the technology is thoroughly tested and evaluated for accuracy and fairness. Finally, it is important to provide appropriate training to both educators and students to ensure that they understand the role of ChatGPT in the learning process and can use it effectively. With these measures in place, the potential benefits of ChatGPT in education can be realized while mitigating potential drawbacks.

Question 3 Prompt:

Write a 200-word response for a podcast to the question “In what ways can ChatGPT be used to personalize learning for students with different learning styles and abilities?”

Question 3 Answer: 

One of the most promising applications of ChatGPT in education is its ability to personalize the learning experience for students with different learning styles and abilities. ChatGPT can analyze a student’s learning history and performance data to provide tailored resources and feedback that meet their individual needs.

For students who are visual learners, ChatGPT can generate images, videos, and other multimedia resources to enhance their learning experience. For students who are auditory learners, ChatGPT can generate audio recordings or podcasts. For students who are kinesthetic learners, ChatGPT can suggest interactive activities or simulations that allow them to engage with the material in a more hands-on way.

In addition, ChatGPT can provide targeted feedback and support that is customized to a student’s strengths and weaknesses. For example, if a student is struggling with a particular concept, ChatGPT can generate additional practice problems or explanations to help them master the material.

Overall, the ability of ChatGPT to personalize learning for students with different learning styles and abilities has the potential to significantly improve learning outcomes and engagement. By leveraging the power of AI, educational institutions can create a more inclusive and effective learning environment that meets the needs of all students, regardless of their learning style or ability.

Question 4 Prompt:

Write a 200-word response for a podcast to the question “How can ChatGPT be used to facilitate communication and collaboration among students and teachers in online learning environments?”

Question 4 Answer: 

ChatGPT can play a vital role in facilitating communication and collaboration among students and teachers in online learning environments. One way ChatGPT can achieve this is by creating virtual assistants that can provide real-time feedback and support to students. These virtual assistants can also help teachers by answering common questions, grading assignments, and providing additional resources to students who are struggling.

Moreover, ChatGPT can be integrated into online discussion forums and chat rooms to promote collaboration and engagement among students. ChatGPT can help students connect with one another and facilitate group discussions and peer-to-peer learning.

ChatGPT can also be used to create virtual study groups that can help students collaborate and learn from one another. By analyzing students’ learning histories and performance data, ChatGPT can suggest study partners or groups that are most likely to benefit from working together.

Finally, ChatGPT can be used to facilitate communication and collaboration between teachers and students. ChatGPT can help teachers provide real-time feedback to students, answer questions, and provide additional resources as needed. This can help create a more supportive and engaging learning environment that promotes learning and academic success.

Overall, the use of ChatGPT in online learning environments has the potential to significantly enhance communication and collaboration among students and teachers, creating a more engaging and effective learning experience for all involved.

Question 5 Prompt:

Write a 200-word response for a podcast to the question “How might the use of ChatGPT impact the role of teachers in education, and what skills might educators need to develop to effectively work with this technology?”

Question 5 Answer: 

The use of ChatGPT in education has the potential to significantly impact the role of teachers. With the introduction of AI-powered virtual assistants, teachers can offload certain tasks like grading, answering common questions, and providing additional resources. This can free up more time for teachers to focus on providing personalized and individualized support to students.

However, with the introduction of AI-powered technology like ChatGPT, educators will need to develop new skills to effectively work with this technology. Educators will need to become proficient in using and managing AI tools to create effective learning materials and assessments. They will need to understand how to use ChatGPT to personalize the learning experience for students, as well as how to integrate it into their teaching practices to facilitate communication and collaboration among students.

Moreover, educators will need to develop skills in data analysis to better understand student performance and adjust their teaching accordingly. They will need to be able to interpret the data generated by ChatGPT and use it to provide more effective feedback and support to students.

In summary, while the introduction of ChatGPT has the potential to significantly impact the role of teachers in education, it also presents an opportunity for educators to develop new skills that can help them create a more engaging and effective learning environment for their students.

Question 6 prompt:

Write a 200-word response for a podcast to the question “Could you conclude and summarise the impact that ChatGPT will have on education?”

Question 6 answer:

In conclusion, ChatGPT has the potential to revolutionize education by providing a personalized and engaging learning experience for students. It can help educators offload certain tasks and enable them to focus more on individualized support to students. It can also facilitate communication and collaboration among students, promoting a more inclusive and supportive learning environment.

However, the use of ChatGPT in education also presents some potential drawbacks, such as the risk of over-reliance on technology and the potential for bias in the data used to personalize the learning experience.

To mitigate these drawbacks, educators must be trained in how to effectively integrate ChatGPT into their teaching practices and be able to interpret and analyze the data generated by the technology. It’s also essential to continue to monitor and address any potential issues related to bias and ethical concerns.

Overall, ChatGPT has the potential to significantly enhance the learning experience for students and support educators in creating more effective and engaging teaching practices. While there are potential drawbacks that must be considered, the benefits of using ChatGPT in education are too significant to ignore. With the right approach and training, ChatGPT can help usher in a new era of personalized, inclusive, and effective education.

As a sidenote, I used ChatGPT to create the tweet to accompany my blog post; please see below. The possibilities with ChatGPT are endless!

ChatGPT AI-Generated Tweet

🎙️ Listen to the new Tel Tales podcast generated entirely by AI 🤖🎧 Join us for fascinating stories and insights about AI and beyond. #AIGenerated #TelTales #PortsmouthPodcast #TelTalesPodcasts 🌊🎉

AI and Higher Education: Is it time to rethink teaching and assessment?

On 22 February I took part in a roundtable debate on the topic “AI and Higher Education: Is it time to rethink teaching and assessment?”, the event being organised and facilitated by Graide, a UK-based Ed Tech company that uses AI to provide improved feedback in STEM subjects. (I dislike the term ‘artificial intelligence’ in this context, but I think I am fighting a losing battle here. In the interests of clarity, I’ll use the term AI in this blog post.) 

Given the recent furore around generative AI, and its ability to create human-like outputs, Graide thought it would be timely to bring together a variety of voices – senior managers, academics, developers, students – to discuss the potential impact of this new technology on higher education. I was joined on the panel by Bradley Cable (student at Birmingham University); Alison Davenport (Professor of Corrosion Science at Birmingham University); Ian Dunn (Provost of Coventry University); Manjinder Kainth (CEO of Graide); Tom Moule (Senior AI Specialist at Jisc); and Luis Ponce Cuspinera (Director of Teaching and Learning at Sussex University).     

It was fascinating to hear the range of opinions held by the panel members and by the 400+ people who attended the event (and who could interact via polls and via chat). If you are interested in my opinion of the technology then you might want to watch a recording of the debate; alternatively, in the paragraphs below, I’ll attempt to summarise my feelings about Bing, ChatGPT, and similar programs.

* * *

It is easy to see why there should be fears about this technology, particularly around assessment: students might pass off AI-generated content as their own. Critics of the technology have numerous other, entirely valid, concerns: the models might produce biased outputs (after all, they have been trained on the internet!); companies will presumably start to charge for access to AI, which raises questions of equity and digital poverty; the output of these models is often factually incorrect; and so on and so on.

But this technology also possesses the clear potential to help students learn more deeply and lecturers teach more effectively. 

I believe that if we embrace this technology, understand it, and use it wisely we might be able to provide personalised learning for students; design learning experiences that suit a student’s capabilities and preferences; and provide continuous assessment and feedback to enable students themselves to identify areas where they need to improve. The potential is there to provide at scale the sort of education that was once reserved for the elite. 

Note the emboldened if in the paragraph above. To obtain the outcome we desire we need to embrace and explore this technology. We need to understand that the output of large language models relies on statistical relationships between tokens; it does not produce meaning – only humans generate meaning. And we need to use this technology wisely and ethically. It is not clear at this point whether these conditions will be met. Instead, some people seem to want to shut down the technology or at least pretend that it will have no impact on them.

I have heard numerous academics respond to this technology by demanding a return to in-person, handwritten exams. (Would it not be better to rethink and redesign assessment, with this new technology in mind?) I have even heard some lecturers call for a complete ban on this technology in education. (Is that possible? Even if it were, would it be fair to shield students from tools they will have to use when they enter the workforce?) 

* * *

Fear of new technology dates back millennia. Plato, in the Phaedrus, a work composed about 370 BCE, has Socrates argue against the use of writing: 

“It will implant forgetfulness in their [the readers’] souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”

Ironically, we only know about Plato’s argument against writing because it was written down.

More recently, some critics argued that the introduction of calculators would impair students’ mathematical ability. (The research is clear: children’s maths skills are not harmed by using calculators – so long as the devices are introduced into the curriculum in an integrated way.)  Even more recently, some people argued that spellcheckers would impair students’ ability to spell correctly. (It seems the reverse might be the case: students are getting immediate feedback on spelling errors and this is improving their spelling.)

Perhaps it is a natural human response to fear any new technology. And in the case of generative AI there are legitimate reasons for us to be fearful – or at least to be wary of adopting the technology.

But the technology is not going to go away. Indeed, it will almost certainly improve and become more powerful. I believe that if we are thoughtful in how we introduce AI into the curriculum; if we focus on how AI can support people to achieve their goals rather than replace people; if we produce a generation of students that use the technology effectively, ethically, and safely – well, we could transform education for the better.  

Credit Image: Photo by Stable Diffusion 2.1

TEL in ’22 – and looking forward to ’23

(Co-writer: ChatGPT)

In 2022 the TEL team said “goodbye” to some valued colleagues, who moved to take up different roles within the University, and we said “hello” to new colleagues who joined us. Chris, Jo, and Mike have already introduced themselves on TEL Tales, so I would like to use this end-of-year post to discuss a couple of work-related highlights: our implementation of Moodle 4.0 and, regarding the key area of assessment and feedback, our pilot of the WiseFlow end-to-end assessment platform.

Moodle 4.0

Moodle 4.0 is the latest version of the Moodle learning management system, and it includes many new features and improvements that aim to enhance the user experience and support better learning outcomes. Some of the key improvements in Moodle 4.0 include:

  • A new and improved user interface: Moodle 4.0 features a redesigned and modern user interface that is more intuitive and user-friendly, and that provides easy access to the most important features and functions.
  • Enhanced learning analytics and reporting: Moodle 4.0 includes improved learning analytics and reporting tools that provide teachers with more detailed and actionable insights on students’ learning, allowing them to track their progress and identify areas for improvement.
  • Improved accessibility and support for mobile devices: Moodle 4.0 has been designed to be more accessible and user-friendly for users with disabilities, and it includes support for mobile devices, allowing students to access their learning materials and activities on the go.
  • More options for personalization and customization: Moodle 4.0 provides teachers and administrators with more options for personalization and customization, allowing them to tailor the learning environment to the specific needs and preferences of their learners.

Overall, Moodle 4.0 is a significant improvement over previous versions of the learning management system, and it offers many new features and enhancements that can support better learning outcomes and a more engaging and effective learning experience.

At this point I would like to ask the reader: did you notice anything unusual about my discussion of Moodle 4.0?

Moving on, another major project for the TEL team has been to support a pilot implementation of the WiseFlow end-to-end assessment platform. Our hope is that a dedicated platform will allow us to improve our practices around assessment and feedback. Let’s explore that idea below in a little more detail.

Assessment and feedback

There are many different ways to assess students, and the best approach will depend on the specific learning goals and objectives, as well as the context and needs of the learners. Some key principles and strategies that can help to ensure effective assessment of students include:

  • Align assessment with learning goals: The assessment of students should be closely aligned with the learning goals and objectives of the course or programme. This will help to ensure that the assessment is focused on the most important and relevant learning outcomes and that it provides valid and reliable information on students’ progress and achievement.
  • Use a variety of assessment methods: Different assessment methods can provide different types of information and insights into students’ learning, and it is important to use a range of methods in order to get a comprehensive picture of their progress and achievement. Some common assessment methods include tests, quizzes, projects, presentations, portfolios, and observations.
  • Provide timely and meaningful feedback: Feedback is an essential component of assessment, and it is important to provide students with timely and meaningful feedback on their progress and performance. This feedback should be clear, specific, and actionable, and it should help students to understand their strengths and weaknesses, and identify areas for improvement.
  • Engage students in the assessment process: Students should be actively involved in the assessment process, and they should be given opportunities to reflect on their own learning, evaluate their progress, and set goals for improvement. This can help to foster a growth mindset and a sense of ownership and responsibility for their own learning.

Overall, effective assessment of students requires careful planning, the use of a variety of assessment methods, timely and meaningful feedback, and student engagement in the assessment process.

Again, at this point I would like to ask the reader: did you notice anything unusual about my discussion of assessment and feedback?

I inserted those italicised questions above because I (Stephen Webb) did not write any of the text in the two subsections. My “co-author” (ChatGPT) wrote the text.

In 2022, GPT-3 (Generative Pre-trained Transformer 3) became a mainstream technology. At least a dozen apps are available that can provide clear, comprehensible text-based responses to prompts provided by a user. An app such as ChatGPT can write essays (and poems, and computer code, and much else besides). And many of our students will know this.

Similar technology can generate artwork. The artwork accompanying this blog post is an original by DALL-E, from a prompt I provided. DALL-E is a much better artist than I can ever hope to be.

I hesitate to call this technology Artificial Intelligence because it is neither artificial nor intelligent; I prefer the term Assistive Computation. Whatever we call the technology, though, I believe that in 2023 we need, as educators, to start grappling with its implications. Can we reasonably assess students by setting them essays to write, when apps can spit out the essays for them? What does the technology mean for notions of authorship and plagiarism? (You will notice that I have credited ChatGPT as being the co-author of this blog post; is that reasonable or meaningful?) What are the best strategies for learning with this technology?

Of course, teachers as well as students can use Assistive Computation. Recently, as an experiment, an academic asked ChatGPT to generate a question in a particular knowledge domain that could be set for a first-year undergraduate essay. (The question it generated was entirely suitable.) The academic then asked ChatGPT to generate a rubric for the question. (The rubric was acceptable.) Then the academic asked ChatGPT to write an essay based on the question it had set. (The essay showed no insight and made no new points, but it would have received a passing grade.) Finally, the academic asked ChatGPT to mark the essay according to the rubric. (The feedback, if nothing else, was at least provided in a more timely fashion than human teachers can provide.)
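For anyone who would like to try a similar experiment, the sketch below shows roughly how those four steps could be chained together using the OpenAI Python client. It is an assumption-laden illustration rather than a record of what the academic actually did: the subject area, the prompts, the model name and the ask() helper are all mine.

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

def ask(prompt: str) -> str:
    """Hypothetical helper: send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The four steps of the experiment, chained together.
question = ask("Write one essay question on cell biology suitable for a first-year undergraduate.")
rubric   = ask(f"Write a marking rubric for this essay question:\n{question}")
essay    = ask(f"Write a first-year undergraduate essay answering this question:\n{question}")
feedback = ask(f"Mark this essay against this rubric and give feedback.\n\nRubric:\n{rubric}\n\nEssay:\n{essay}")

print(feedback)
```

Whether we would want to run such a loop unsupervised is, of course, exactly the question raised in the next paragraph.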

This experiment suggests to me that either (i) we cut out the middlemen (students and teachers) and let AI do whatever it wants; or (ii) we work together – both students and teachers – to understand how to use Assistive Computation to improve teaching and learning. 

There are many questions here. And I believe 2023 is the year we all need to start grappling with them. Another question: should educators be concerned about this technology? I’ll let my co-author answer:

It’s natural for educators to be concerned about the potential impact of technology on their profession. However, it’s important to note that systems like ChatGPT are not capable of replacing the role of a human educator. While they may be able to generate text, they do not have the ability to understand or evaluate the content of that text in the same way that a human can. It’s also worth considering that the use of such systems may actually highlight the importance of critical thinking and analysis skills, which are vital for success in education and beyond. In the end, the role of educators will continue to be important in helping students develop these skills and prepare for the future.

I (the human, not the AI) would like to wish you an enjoyable festive period and a happy New Year!

Credit Image: An original by DALL-E

TEL in 2021

Twelve months ago I reviewed how TEL had navigated 2020, the strangest year I guess any of us have experienced. The TEL team, by implementing several new technologies and enhancing existing technologies, helped support the University’s pivot to what the literature now refers to as “emergency remote teaching” (ERT). Now, at the start of 2022, it is worth reflecting on what we learned during 2021 – a year in which Covid carried on posing problems.

The first point to make is that technology continued to be used heavily. As the University’s “blended and connected” approach to teaching and learning bedded in, and we experienced the welcome sight of students once again milling around on campus, I expected Moodle use to drop compared to last year. September 2021 did indeed see a drop in monthly users compared to September 2020. But almost the same number of users accessed Moodle in October 2021 as in October 2020. And 10% more users accessed Moodle in November 2021 than in November 2020. In part this use pattern will have mirrored the waves of the pandemic, with online offering a safe environment for teaching and learning. But in part it shows, I believe, that technology has become embedded in teaching and learning in a way that was not the case just two years ago.

The increasing use of Panopto provides another example. The last time I looked (which was six weeks ago; these figures will already be outdated!) staff had created 87,410 videos and recorded 35,442 hours of content. Students had racked up 2.23 million views and downloads. These are large numbers, and again they demonstrate that staff and students are engaging with technology in a way we could not have predicted two years ago.

Nevertheless, we need to ask: in 2021 did we fully embrace the opportunities offered by a blended and connected approach to teaching and learning?

I suspect the answer is “no”: to a large extent we were all still operating in ERT mode.

The reasons for this are understandable. It takes time to redesign a course or module so that students can get the most out of a blended and connected environment. Effective redesign takes the skills and experience of a mix of people. And the process requires support from professional services. That broad, team-based approach to the redesign of courses and modules has not been part of the culture at Portsmouth – so although it is possible to point to numerous individual examples of good, innovative practice, I believe the University as a whole has been unable to take full advantage of a blended and connected approach.

One of my hopes for 2022 is that we will see a much more considered use of technology in teaching and learning. In some cases that will mean more technology, in some cases different technology, and in some cases less technology. The key is to identify the best blend of activities to ensure students can learn and can demonstrate mastery of that learning. In other words, I hope in 2022 we will see much more emphasis on learning design.

In order to further this ambition, TEL, AcDev, and Faculty colleagues, working under the leadership of Professor Ale Armellini, are developing enABLe – a framework based on well-established and well-researched principles, but one that is new to Portsmouth. The intention is to offer structured and collaborative workshops, at the course or module level, around learning design (and learning re-design). These collegiate, student-focused, needs-driven workshops are flexible: they can be used for new programme development, for programmes needing attention around learning and teaching as flagged in the EQUIP process, and for programmes simply requiring a refresh in a specific area such as feedback. In each case, the workshops are founded on the key principles of Active Blended Learning. If you would like to learn more, please contact Sarah Eaton.

At some point the pandemic will become endemic and, as politicians tell us, we will “learn to live with the virus”. But when that happens we should take care not to forget the lessons – both positive and negative – of 2020 and 2021. It would be foolish for us to try to return to our teaching practices of 2019. Amanda Gorman, the poet who read at President Biden’s inauguration, ends her latest poem, New Day’s Lyric, with the following lines:

“So let us not return to what was normal,

But reach toward what is next…”

I think that is a perfect sentiment for education in 2022.

 

Zoom vs Webex vs Google Meet: Important Features to Compare and Consider

There is a growing need to deliver lectures online and to capture online live lectures under the new Blended and Connected learning approach, which was recently introduced by our University to help our students in this challenging time. The Blended and Connected learning approach is driven by our Content Capture policy, which emphasises that “…all students should have access to learning from all formal teaching sessions regardless of whether they are able to physically attend.” Furthermore, with the regulatory requirement that our content needs to meet the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018, it is now more important than ever to ensure that our online taught content is captured and captioned.

The University has a few online conferencing tools available to meet this need. They are all easy to use, with some differences between them. This post compares the important teaching and learning features of each tool, to help you find the best one for your needs.

On top of that, the University has acquired Panopto, which can be used to pre-record lectures and has a built-in auto-caption facility. It is advisable that all formal teaching sessions, including online live lectures, are captured, uploaded and stored on the Panopto server for captioning, security and storage purposes.

But of course, it is almost impossible for captions generated by automated speech recognition tools to achieve 100 per cent accuracy. So please do check and edit your captions to ensure accuracy, and add a disclaimer to the video or module site.

The following text has been officially signed off by Harriet Dunbar-Morris, Dean of Teaching and Learning.

Currently our video content, delivered via Panopto, uses closed captions generated using automated speech recognition. This is about 70–80% accurate depending on the topic, audio quality and lecturer. We advise you not to rely solely on the captions when reviewing your video content. Make sure you check the captions against the PowerPoint and readings to ensure you have a good understanding of the topic and the terminology used. If you are reliant on these captions due to a disability or additional learning need then please contact ASDAC for advice on additional support for your study.

Below is the list of online conferencing tools available in the University and their important learning and teaching features to compare and consider.

Zoom

Zoom is the new online conferencing tool that the University has acquired to support our Blended and Connected learning environment.

Screen or application sharing

  • Zoom allows hosts to easily share their device screen with meeting participants. There’s an option to choose between sharing the complete screen or sharing a specific app or document such as PowerPoint or MS Word. 
  • Hosts can enable or disable the participant’s ability to share audio, video, and the screen in the meeting.

Mute/Unmute

  • Hosts have a distinct level of control during the meeting. As the host or co-host in a meeting, you can manage your participants, including muting and unmuting participants, to manage background noise and distractions.

Breakout rooms

Remote control

Storage and Caption

  • All Zoom sessions recorded with the ‘cloud recording’ option are automatically uploaded to Panopto and captioned. Please check your captions to ensure accuracy and add a disclaimer to the video or module site. 

Webex

Webex is a web conferencing and video conferencing application by Cisco Systems.

Screen or application sharing

  • Hosts can easily share their device screen with meeting participants. There’s an option to choose between sharing the complete screen or sharing a specific app or document such as PowerPoint or MS Word. 

Webex Meeting Center

    • Participants can share content in the meeting by simply selecting ‘Share’ on the Quick Start tab, Menu Bar or Floating Icon Tray. This feature is enabled by default to make it easier to change presenters in a meeting and to improve the overall collaboration experience. If hosts want more control over sharing, they can disable this feature while the meeting is in progress by selecting ‘Participant’ and unchecking ‘Anyone Can Share’.

[Screenshot: the ‘Anyone Can Share’ setting]

Webex Training Center

    • By default, the host has to designate a participant as the presenter before they can share content or their screen. Hosts can reclaim the presenter role or change the presenter at any time by right-clicking a name (including their own) on the Participants panel, then selecting Change Role To -> Presenter.

[Screenshot: changing a participant’s role to presenter]

Mute/Unmute

Webex Meeting Center

    • For Webex Meetings, you can choose to prevent attendees from unmuting themselves until you grant them permission to do so. 

Webex Training Center

    • When you mute participants in Webex Training, only you can unmute them; they can’t unmute themselves.

Breakout rooms

Remote control

  • To remotely control another participant’s screen in a meeting, the host needs to assign them the presenter role and then ask them to grant remote control to the host. (To assign this privilege while in the session, select ‘Participant’ -> ‘Assign Privileges’ -> ‘Control shared applications, Web browser, or desktop remotely’.)

[Screenshot: where to assign privileges]

Storage and Caption

  • All recorded Webex sessions should be downloaded and uploaded to Panopto for storage and captioning. Please check your captions to ensure accuracy and add a disclaimer to the video or module site. 

Webex Meeting Center 

Webex Meeting Center sessions are recorded in mp4 format, which can be uploaded to Panopto directly.

Webex Training Center

Webex Training Center sessions are recorded in Cisco’s proprietary .arf format, which needs to be converted to mp4 before uploading to Panopto.

Google Meet

Google Meet, previously known as Hangouts, is Google’s video conferencing solution.

Screen or application sharing

  • By default, all participants can share their whole screen, a specific window, or a Chrome tab during the meeting. Through screen sharing, you can share documents, spreadsheets, presentations, and more. When you share a Chrome tab, it shares the tab’s audio by default, so you can play an online video as well.
  • Hosts can prevent participants from sharing their screens, but they need to update this setting for every new call, even if it is a recurring event. 

To prevent sharing:

  1. Join a Meet video call.
  2. At the bottom, click ‘Host controls’.
  3. Turn off ‘Share their screen’.

Mute/Unmute

  • Anyone in a Google Meet conference can mute anyone else.
  • If you mute someone, no one in the conference will be able to hear them.
  • When you mute someone, everyone will be notified that you have muted the person.
  • You cannot unmute someone once they have been muted. They have to unmute themselves. Google says this is for privacy reasons.

Breakout rooms

Remote control

  • Google Meet does not have a built-in option to give remote control to another person. However, Google provides a tool that can be used alongside Google Meet to accomplish this: Chrome Remote Desktop.

Storage and Caption

  • All recorded Google Meet sessions should be downloaded from Google Drive and uploaded to Panopto for storage and captioning. Please check your captions to ensure accuracy and add a disclaimer to the video or module site. 

 

Credit Image: Image by Hatice EROL from Pixabay

Zoom Vs Meet Vs WebEx

With the new term fast approaching, it can feel pretty daunting with all this new terminology and jargon to learn. Within the TEL team, we have received quite a few queries from staff about the ‘best’ technology to use for online video conferencing and meetings, particularly when it comes to Zoom, Meet and WebEx. It’s a good question! The short answer is all of them, which doesn’t help, I know. So, firstly, we advise thinking about what you want to achieve: what’s the purpose of using the technology? Is it to create an online classroom within Moodle? Is it to create an event with conference-style presentations? Or is it just for a one-to-one chat? All have pros and cons, which we have tried to outline in the post below. We hope you find it useful!

Let’s get started!

There are many articles that cover what each system does and the benefits of each one, some of which are featured below: 

From the University’s perspective, we have licences for all three. So where articles talk about the limitations of a free licence, those limitations do not apply to us for Zoom, WebEx or Meet. 

My quick summary of each tool from a “Virtual Classroom” and learning and teaching perspective is this:

Zoom
  • Can add a classroom directly in Moodle
  • Has video-enabled breakout rooms
  • There is no link between Moodle and the creation of the breakout sessions. 
  • 300 users with no time limit
  • Can be password protected
  • Has a waiting room
  • Can be set to allow access before the host
WebEx
  • The biggest issue with WebEx is that it is actually four products, with each version offering something different: 
    • WebEx Meeting – For Meetings, has no breakouts
    • WebEx Training Centre – For teaching, has breakouts but poor video sharing (most geared towards a virtual classroom, with permissions etc.)
    • WebEx Events – For conference-style presentations
    • WebEx Support – For providing remote support sessions.
  • Can add a classroom directly in Moodle
  • Has audio-only breakout rooms using WebEx Training Centre.
  • 1000 users with no time limit
  • No waiting room
Google Meet
  • Not designed as a classroom
  • No admin features
  • No breakout sessions
  • No Moodle integration
  • Loads of plugin features, such as Tactiq for live transcription, which saves to your Google Drive.

What I would like to offer, however, is the ability for you to share your experiences of each system, both positive and negative, so that a more realistic and natural picture can be shared between academic staff.

[Padlet embed: share your experiences here]

Remember we, the TEL team, are here to support you in getting to grips with these new technologies, so please feel free to contact us at elearn@port.ac.uk if you have a query. We also offer TEL Me How Surgeries twice weekly, where we can go over any questions or queries you may have. The 2-hour sessions focus on supporting staff with the transition to online delivery and offering guidance around alternative forms of online assessment; please book your place here.

Credit Image: Photo by NordWood Themes on Unsplash

Into the unknown – part 2

Digifest (#digifest20) is an awe-inspiring conference; Jisc really know how to create that wow factor on entering the central auditorium. The space was divided between trade stands, a village green and a futuristic stage. Next to these stood a massive screen that projected holographic messages signposting exhibits such as AR, as well as when sessions were due to start. 

The first two sessions of Digifest were thought-provoking and relevant to what we are all facing: a greater need to work online and to provide digital solutions to our traditional working practices. Unsurprisingly, this is even more relevant now! Since I wrote part 1, we have gone through a seismic shift in learning and teaching, and have had to adapt at a rapid pace to new ways of working. 

This ties nicely to the third session I attended, called Digital Imposter Syndrome in Pracademia. We are all now facing a new way of interacting with colleagues, students and our families. In the not-so-distant past, people would shy away from attempting new ways of using technology, yet they are now being forced to change and adapt. This session had the perfect message for our current working environments. 

Just give it a go! It might fail. If it does … so what? 

We are a diverse community of practitioners and academics that are rallying, more than ever, to provide support and resources for each other and our students. 

The worry I would have raised, had I written this when Digifest was fresh in my mind, was that our students know more than us. That might well be true in certain technological areas, but this is also a challenging time for them. We are in a prime position for students to give us their feedback, which can only benefit us and them as we collaborate going forward. This idea of digital support and digital co-creation is something the TEL team are happy to discuss, so please let us know if this is something you are doing or want to know more about (you can start with me, tom.langston@port.ac.uk, or the general help email, elearn@port.ac.uk).


At the time I made this tweet, it summed up nicely where we stand today. At the moment, quick wins are the name of the game: being adaptable and using new tools to try something new. 

Back to the wider Digifest angle: each session I attended offered new and creative ideas for teaching. Harlow College provides its students with an iPad for their studies, and with it they are creating digital scrapbooks to help dementia patients in the community, and writing and directing drama performances for the community around evocative subjects like cyber-bullying. It lets the students’ creativity flow through all their studies and is not fixed to traditional technical subjects. This is key when thinking that ‘they’ know more than we do. We, as academics, understand that the generations around us have different skills to offer, and to ignore that will only slow innovation. If we are clear about why we want to use the technology and think about the pedagogic rationale, maybe the students can run with the theme and ideas and inspire us in how we work, assess and challenge our previous norms. 

Digifest was an amazing space to share ideas and hear about innovations in teaching that are surprisingly easy to implement. So far I have only written about day 1; day 2 was equally fruitful. The final blog post in the series will look at the highlights of day 2 and what we can do going forward with the enforced digital revolution that we are all now part of.
