Adventures in Technology Enhanced Learning @ UoP


Guest Blogger: Tom Langston – How ChatGPT Revolutionized My Writing Process – The AI version

(Co-writer: ChatGPT)

This version was generated after feeding in my attempt at writing this blog post and asking ChatGPT to ‘shorten this text into a blog post’.

I recently discovered the incredible potential of ChatGPT, and it completely revolutionized the way I approached my writing. As a student pursuing a Professional Doctorate in Education, I often struggled with crafting cohesive and focused assignments. My writing style was scattered and verbose, making it difficult for readers to follow my train of thought. Enter ChatGPT, a powerful AI language model that became my secret weapon in developing a well-structured research proposal.

Initially, I sought human help from my project supervisor, Stephen Webb, who pointed out that my writing relied too heavily on vague terms like “this” and “it.” While I understood the meaning behind my words, Stephen reminded me that readers might not share the same clarity. Armed with his feedback, I set out to improve my proposal.

When I completed the next draft, I thought I was finished. However, the word count was far too high, so I turned to ChatGPT for assistance in condensing my writing. It did the job, but the result lacked my personal touch and voice as a writer. I realized I would have to manually trim down the content to achieve the desired outcome.

During this editing process, I discovered the true power of ChatGPT. I started using it to check sentences that I struggled to articulate clearly. When I asked for grammar assistance, ChatGPT not only provided corrections but also suggested ways to enhance sentence structure and improve readability. This iterative process allowed me to refine my writing, discarding extraneous thoughts and rearranging sentences for better flow. It became evident that if a particular text no longer fit within the new leaner version, despite my attachment to it, it had to be eliminated. This experience epitomized the essence of editing.

As I continued to utilize ChatGPT, I encountered questions of ethics. While it proved immensely useful in reducing my word count, I couldn’t help but feel uneasy about presenting the AI-generated content as my own. The ethical implications of using AI in writing remain a divisive topic, with differing perspectives on its appropriateness and boundaries. To navigate this issue, some suggest referencing AI tools in an appendix, detailing their specific use and purpose in the writing process. However, this raises questions about the extent to which we should acknowledge AI contributions and how that compares to other forms of assistance, such as spell checkers or language tools like Grammarly.

One important consideration is whether AI tools are always correct. While they excel at constructing language, they may strip away the author’s voice and inadvertently introduce errors. In my experience, Grammarly sometimes made corrections to the text generated by ChatGPT, highlighting the need for human judgment in refining the output.

In conclusion, ChatGPT became a game-changer for me, transforming my writing process in ways I had never imagined. I believe it should be embraced as a valuable resource for students to tighten their drafts before seeking extensive academic support. By incorporating AI tools, we can alleviate the burden on educators, who can then focus on reviewing more polished submissions requiring minimal intervention. However, it is crucial to strike a balance and ensure that AI enhances our own voices rather than overshadowing them. AI is undoubtedly here to stay, and by harnessing its power thoughtfully, we can elevate our writing while preserving our unique identities as authors.

Thoughts from the Proofreader:

I was asked to proofread Tom’s blog post here and have a look at ChatGPT’s version too. Comparing the two, I definitely found Tom’s writing much more personable – I could imagine him speaking as I read through it. ChatGPT’s version presented the information efficiently and effectively, and I found nothing to correct. However, in its response to the prompt, it had taken away everything that made Tom sound like Tom!

This experience also really made me realise how many misconceptions I have about AI tools. I’d assumed that ChatGPT would have a very grammatical approach to its interpretation of language – rather like I’ve been told I have! However, when Tom asked it about the difference between ‘that allowed’ and ‘allowing’, ChatGPT talked with authority about implications and potential meanings. This answer was a long way from my interpretation, which attributed the difference to the grammar of relative clauses (X refers to one thing, Y refers to another). As Tom demonstrated with his irony example, it’s worth being cautious with how far we trust its responses. And I think we can be confident that human input will still be needed for a few years (or at least months) yet. 

Credit Image: Photo by Bram Naus on Unsplash

Guest Blogger: Tom Langston – Last Night ChatGPT Saved My Life…

The classic line from Indeep’s 1982 post-disco hit “Last Night a DJ Saved My Life” serves as the basis of my title, because ChatGPT did – not literally, but figuratively – save my life.

I am currently studying for the university’s Professional Doctorate in Education and, since February, have been completing my final taught assignment entitled “Research Proposal”.

It takes me a while to write, as I don’t find it easy, and I am not that good at it. My approach to writing is best described as little and often, frequently getting distracted and losing focus on the task at hand. If you have ever seen Dave Gorman’s “Googlewhack Adventure”, you will understand my problem: his book and stage show outlined how he was meant to write a fictional novel and ended up writing and performing about his experiences of Googlewhacks (Googlewhacking?) instead. He got distracted by less important but much more fun endeavours. 

The other problem is that I don’t get to the point. I write a verbose mess of unconnected thoughts. So with that in mind, I am going to explain how the rise of the machines (we are not far away from ChatGPT working out it wants to be Skynet) assisted me in writing my “research proposal” assignment. 

First Drafts

To start with, I had human help. Stephen Webb, as my project supervisor, read a draft I had written. He demonstrated that a lot of my writing relied on small words to, as he put it, “do the heavy lifting”. I used ‘this…’ and ‘it…’ frequently because I understood what I was trying to say – he pointed out that the reader might not. The only reason he could even start to understand my work was because he knew me and the context of my subject. 

From his extensive feedback, I redrafted, edited and tried to improve my proposal. 

After completing the next draft, I thought I was done. However, I was well over the word count, so I put sections into ChatGPT and told it to re-write them within my word limit. It only bloomin’ did it! The problem was that the result was very sterile; it lost all sense of who I am as a writer. This meant I was going to have to get the word count down manually. 

After another version, I asked Stephen for some clarification on some of his earlier feedback. His reply was:

“In the section you sent through, there are still individual sentences that are hard to parse. The initial word of a sentence or clause primes the reader to think in a certain way; if the sentence or clause doesn’t deliver, then confusion arises. You really should pass this through a grammar checker, Tom.”

Not being a confident writer, I already knew that my writing needed work. What this feedback demonstrated to me was that I lacked cohesion and structure. My train of thought constantly jumping around the entire document is a by-product, I guess, of writing in such short unfocussed bursts. 

I wrote a heading titled “Introduction” but did not actually introduce anything. As Stephen had pointed out, I had poorly communicated my narrative. What this meant was that readers were getting lost in my disconnected thoughts. I could now see that certain sections felt shoe-horned in because I liked what I had written. Realistically, those “gems” made no sense to the reader as they were not connected with the text surrounding them. 

I tried to use ChatGPT to see if it could make sense of it, but there was too much there for it to tell me how to rearrange it. The system did not accept the entirety of the text and could not provide clarity to the sections I had written. ChatGPT could not offer clear guidance for how best to rework and rearrange it.

A Quick Aside

At this point, on re-reading the above section, I think it nicely illustrates the problem I was talking about. I have said the same thing here in three different ways. 

  1. I tried to use ChatGPT to see if it could make sense of it, but there was too much there for it to tell me how to rearrange it. 
  2. The system did not accept the entirety of the text and could not provide clarity to the sections I had written. 
  3. ChatGPT could not offer clear guidance for how best to rework and rearrange it.

I only need to use one. In the past, Stephen had raised this issue with me, but it is a hard thing to change. What I am going to discuss next is my recent experiences of using ChatGPT as a tool to develop the parts in my writing I would not have picked up on myself. Heck, as I go back through this blog, I’m sure I’ll find more. I just won’t mention anything I’ve changed, so you’ll never know… 

Editing with ChatGPT

As I worked my way through the editing process, I started to use ChatGPT to check sentences I wanted to keep but was struggling to make coherent. They were either too long, or the idea I was trying to convey was not coming through clearly enough. 

An example of this would be when I asked ChatGPT to “check the grammar”:

My original question:

My situation should be made clear to the participants, allowing them to have fully informed consent, but to also highlight that if they perceive issues in the current hybrid environments I am hoping to act on their concerns and not silence their voices.

ChatGPT’s reply:

The grammar of the provided text is correct. However, there are a few suggestions for improving clarity and readability:

“My situation should be made clear to the participants, enabling them to provide fully informed consent. Moreover, it is important to emphasise that if they perceive any issues in the current hybrid environments, I am committed to addressing their concerns and not silencing their voices.”

In this revised version, I made a few adjustments to enhance the sentence structure and clarify the message.

It was this repeated process that helped me (despite, at times, slowing me down), as I kept re-reading the submission each time, trying to tighten up my writing, finding thoughts that were slightly out of place and moving them to where they belonged. It became a good rule of thumb that, if the text I wrote did not fit in the new, leaner version, despite how much I wanted to keep it, it was gone. (Well done Tom – you just discovered the true meaning of editing!)

Below is an example of how I started to use ChatGPT to my advantage so I could improve my writing. 

My question to ChatGPT:

Should it be “that allowed” or “allowing” in the following text – 

Before the pandemic only the most confident teaching staff employed interactive tools (Nearpod, Mentimeter, etc.) within a lecture space that allowed students the ability to creatively engage with the material (Tan et al., 2020).

ChatGPT’s reply:

Both “that allowed” and “allowing” are grammatically correct in the given sentence. However, the choice between them depends on the intended meaning and emphasis.

“That allowed”

Using “that allowed” suggests that the interactive tools were specifically designed or configured in a way that granted students the ability to creatively engage with the material. It implies that the tools had a built-in functionality or feature enabling such engagement.

“Allowing”

Using “allowing” suggests that the interactive tools, in a general sense, provided the opportunity or capability for students to creatively engage with the material. It implies that the tools had the potential or capacity to facilitate such engagement.

Both options are valid, but if you want to emphasize that the interactive tools had a specific functionality designed to enable creative engagement, “that allowed” would be more suitable. If you want to convey a more general sense of opportunity or capability, “allowing” would be a better choice.

As you can see, ChatGPT helped to explain how the slight difference in the sentence structure changed the meaning of what I was writing. I was able to consider what it was I was trying to say and use the most appropriate version. 

Through all my previous doctoral assignments, I had a different tutor who helped me with my drafts. They told me they liked my writing style, which gave me the impression I was good at it. I can’t say what they specifically liked about my writing. I can make an educated guess that, because it was a level 8 assessment, my mistakes were forgiven. The marker was able to interpret the little words “that are doing the heavy lifting” (as Stephen had pointed out) more easily than your ‘average’ reader. Stephen helped me understand that it is one thing to have an interesting voice running through your text, but it is quite something else to actually be good at writing. 

The ethics of using AI

When I got ChatGPT to reduce my word count, I spoke with Stephen about the ethics of it and (ignoring the sanitisation of the output) it felt like a great way for a writer to do this. However, it felt wrong to take what ChatGPT had created as my own.

There is going to be an ethical divide between those who see using any form of AI as wrong and those who see it as a tool for improvement. The problem (as always) lies in the grey area and the boundaries of where people choose to deploy it – for example, how far the tool shapes their work beyond what they could have produced alone. While knowing it might be unethical, some will use it because of other commitments (work, family, etc.). This scenario is a foreseeable eventuality, much like those who copied others’ work or paid essay mills. But perhaps AI may feel slightly more ethical? As I am about to discuss, maybe a strong referencing code is what is required. But people (I was going to put students, but felt that unfair, as we all have a personal line we move and adjust depending on the subject) will always push and flex the boundaries of fairness.

Referencing AI

In a recent ALT mailing list post, the question was asked about referencing when AI was used to support work. The reply pointed to Boston University’s Faculty of Computing and Data Sciences guidance “Using generative AI in coursework”. The post highlighted this text:

“When using AI tools on assignments, add an appendix showing

(a) the entire exchange, highlighting the most relevant sections;

(b) a description of precisely which AI tools were used (e.g. ChatGPT private subscription version or DALL-E free version),

(c) an explanation of how the AI tools were used (e.g. to generate ideas, turns of phrase, elements of text, long stretches of text, lines of argument, pieces of evidence, maps of conceptual territory, illustrations of key concepts, etc.);

(d) an account of why AI tools were used (e.g. to save time, to surmount writer’s block, to stimulate thinking, to handle mounting stress, to clarify prose, to translate text, to experiment for fun, etc.).”

This almost feels as if there are too many barriers to use for a subject (or level) where you may be writing assignments of 6000+ words, especially if you have to reference every section of text AI helped with. The follow-up question is then: do you have to highlight every time spellcheck made a correction? Do you have to disclose that you used a service like Grammarly to make minor amendments? That is, of course, hyperbole. I think that if you get AI to generate starting paragraphs or change substantial amounts of your work, then yes, this should be explained in an appendix. For re-writing a sentence and making grammatical choices, no. That raises another question: how many sentences constitute a substantive change? 

Are AI tools always correct?

The final point relates to whether these tools are always correct. Now, I know they are better than me at the construction of language. I have always been better at face-to-face conversations as a form of communication: during a conversation I can see when people don’t understand me, so I adapt how and what I say accordingly. Writing does not afford the writer or reader the same luxury. So why do I ask if they are correct? As I have already mentioned, ChatGPT sterilised my voice. When asked to reduce the word count, it did what it was supposed to do, but it stripped out the humanity of the author (unsurprising, given that it is a computer).

Another example (inside a paragraph that starts “The final point”, I do get the irony) is that Grammarly made corrections to the generated text from ChatGPT. 

This is a screenshot of a reply from ChatGPT, shown in a box on the left-hand side of the screen, which says ‘Both “that allowed” and “allowing” are grammatically correct in the given sentence’. On the right-hand side is a box created by Grammarly. Under the heading ‘Formality’, Grammarly suggests ‘to engage with the material creatively’, explaining that the to-infinitive ‘to engage’ has been split by the modifier ‘creatively’, and that avoiding split infinitives can help your writing sound more formal.
Another “another example” of a final point is: I put the above paragraph into ChatGPT to ask if I had used irony correctly. This is what it said:

Yes, the use of the word “irony” in this context is correct. Irony is a literary device that involves a contrast between expectations and reality. In your statement, you are highlighting the irony in two situations:

The first example of irony is that despite recognizing that language generation tools like ChatGPT are better than you at constructing language, you also acknowledge that you excel at face-to-face conversations where you can adapt and clarify your communication based on people’s reactions. The irony lies in the contrast between the effectiveness of these tools in writing and your personal strength in face-to-face communication.

The second example of irony is that Grammarly, a tool designed to enhance writing and make corrections, ends up making corrections to the text generated by ChatGPT. Here, the irony arises from the fact that a tool meant to improve the quality of the text is itself making changes to the output of another tool that is supposed to assist in generating text.

Overall, both examples demonstrate a contrast between expectations and reality, highlighting the ironic situations in your statement.

The system had not understood how I had used irony in the sentence but said that the overarching point I was making was ironic.

This image is of a person standing, with their eyes focusing as if they're looking at someone else who might be sitting down, because their chin is tilted down a little. The wording across the top of the image reads: I am still none the wiser if I used irony correctly in the parenthesis when I keep adding examples to a final point. The wording along the bottom of the image reads: And at this point I'm too afraid to ask.

Conclusion

In conclusion, ChatGPT ‘saved my life’ by allowing me to interrogate my writing in a way I have never been able to before. We should promote ChatGPT to students as a resource that can help tighten up their drafts before needing academic support. It should be utilised to alleviate the burden on academics, who are often asked to critique early drafts of work. Academics should hopefully then only see much more polished versions of submissions that require less input from them. 

As a final example, I didn’t like my last sentence. ChatGPT gave me another version:

It should be used to relieve the burden on academics who are frequently asked to critique early drafts. Ideally, academics would primarily review more polished submissions that require minimal input.

I didn’t like that version either. Maybe then I should just delete it? That is where the human side wins out over AI: your sense of self and how you want to sound when writing.

AI is here to stay (and take over the world if the Terminator documentaries are anything to go by), but actually, that is alright (AI being here as a tool, not taking over the world). At levels 4 and 5, we need to find creative ways to incorporate AI tools into the assessment process (which may mean fewer essays). At levels 6, 7 and 8, it’s about using AI to help improve our own voice but not lose it. 

The ChatGPT re-write: How ChatGPT Revolutionized My Writing Process – The AI version.

Credit Image: Photo by rupixen.com on Unsplash

Image in the text created by T. Langston using imgflip.com

S01E07 – Dr Lynn Gribble – Artificial Intelligence


In this special AI episode of the Tel Tales podcast, Associate Professor Lynn Gribble discusses the impact of artificial intelligence in higher education, and how assessment can be adapted to become more authentic for our students.

Associate Professor Lynn Gribble is an Education Focused academic in the School of Management and Governance at The University of New South Wales Sydney. Awarded an AAUT citation for her leadership and impact as a digital innovator, she has taught management to large classes of Master of Business Administration and Master of Commerce students for 15+ years and has pioneered the use of voice recordings, audience response platforms and learning analytics to personalise every interaction with her students, increasing both their engagement and learning outcomes. Lynn co-leads Communities of Practice in Online Learning and Innovation, and the 4Cs (A Strategic Approach to Impact) and is a Senior Fellow of the Advance HE UK.

You can read two recently published blog posts that Lynn has written on the impact of AI in higher education here…

Overconfident with ChatGPT and Generative AI – Time for our students to think again

https://www.education.unsw.edu.au/news-events/news/overconfident-chatgpt-and-generative-ai

Surviving the start of 2023 in the face of generative AI

https://www.education.unsw.edu.au/news-events/news/surviving-start-2023-generative-ai

You can subscribe to the Tel Tales podcasts on Spotify, Apple Podcasts or listen directly through the Tel Tales website.

Original audio created by Chris Wood for use with the Tel Tales podcast.

RIDE 2023 – Sustaining Innovation: Research and Practice

The Centre for Online and Distance Education (CODE) is a University of London initiative focusing on research, training, capacity building, and strategy and policy development to support innovation in online and distance education. On 28 and 29 March 2023, CODE held its 17th annual conference – a hybrid in-person and online event – on Research in Distance Education (RIDE). The theme of RIDE 2023 was sustaining innovation and sustainable practices.

Here are a half-dozen of my personal highlights and takeaways from the conference:

Photo of the outside of Senate House in London. A grey imposing Art Deco building.

Senate House London

  1. In-person conferences are better than virtual conferences. Last week I met someone from my undergraduate days, a person I hadn’t seen in four decades. And I caught up with a colleague from the early days of the TEL team, who is now working in London. (It’s remarkable how many Portsmouth EdTech people seem to have ended up in London!) It was great to reminisce and to hear what is new. These interactions I guess might have happened online, but I doubt it.
  2. Hybrid conferences are hard to get right. The Senate House was constructed in the 1930s, and it is simply not set up to handle a hybrid conference. The organisers did their best to ensure that in-person and online participants enjoyed an equivalent experience, but the limitations of the technology and the physical spaces in the building made it difficult. I can understand why conference organisers want to run hybrid events (and why teachers want to run hybrid lectures) but these are hard things to get right. I have attended many excellent online conferences, and many excellent in-person conferences, but I cannot recall a hybrid event that has ever worked seamlessly.
  3. The sector is continuing to debate and think through the opportunities and threats posed by generative AI. Professor Mike Sharples, from the OU, delivered an excellent keynote address. He noted that he had given the talk several times recently, and each time he had to update it: developments in this field are currently happening on a weekly basis. (It was also interesting to learn that Mike began research into AI and education during his PhD – about 40 years ago!)
  4. The concentration on sustainability provided an interesting lens through which to view our practice. One session looked at the move from in-person, paper-based exams to online exams. The claim was that this was a much more environmentally friendly approach to distance education. That might be so – but a full accounting was not given of the environmental costs of online. A lot more research is needed.
  5. The University of London Worldwide is experimenting with AI tutors. The intention is not to replace human tutors with AI tutors but to see whether this technology can help provide some elements of a personalised education at scale. They are just at the start of this project – it will be interesting to see how it develops.

Credit Image: Photo by Open Journey

AI and Higher Education: Is it time to rethink teaching and assessment?

On 22 February I took part in a roundtable debate on the topic “AI and Higher Education: Is it time to rethink teaching and assessment?”, the event being organised and facilitated by Graide, a UK-based Ed Tech company that uses AI to provide improved feedback in STEM subjects. (I dislike the term ‘artificial intelligence’ in this context, but I think I am fighting a losing battle here. In the interests of clarity, I’ll use the term AI in this blog post.) 

Given the recent furore around generative AI, and its ability to create human-like outputs, Graide thought it would be timely to bring together a variety of voices – senior managers, academics, developers, students – to discuss the potential impact of this new technology on higher education. I was joined on the panel by Bradley Cable (student at Birmingham University); Alison Davenport (Professor of Corrosion Science at Birmingham University); Ian Dunn (Provost of Coventry University); Manjinder Kainth (CEO of Graide); Tom Moule (Senior AI Specialist at Jisc); and Luis Ponce Cuspinera (Director of Teaching and Learning at Sussex University).     

It was fascinating to hear the range of opinions held by the panel members and by the 400+ people who attended the event (and who could interact via polls and via chat). If you are interested in my opinion of the technology then you might want to watch a recording of the debate; alternatively, in the paragraphs below, I’ll attempt to summarise my feelings about Bing, ChatGPT, and similar programs.

* * *

It is easy to see why there should be fears about this technology, particularly around assessment: students might pass off AI-generated content as their own. Critics of the technology have numerous other, entirely valid, concerns: the models might produce biased outputs (after all, they have been trained on the internet!); companies will presumably start to charge for access to AI, which raises questions of equity and digital poverty; the output of these models is often factually incorrect; and so on and so on.

But this technology also possesses the clear potential to help students learn more deeply and lecturers teach more effectively. 

I believe that if we embrace this technology, understand it, and use it wisely we might be able to provide personalised learning for students; design learning experiences that suit a student’s capabilities and preferences; and provide continuous assessment and feedback to enable students themselves to identify areas where they need to improve. The potential is there to provide at scale the sort of education that was once reserved for the elite. 

Note the emboldened if in the paragraph above. To obtain the outcome we desire we need to embrace and explore this technology. We need to understand that the output of large language models relies on statistical relationships between tokens; it does not produce meaning – only humans generate meaning. And we need to use this technology wisely and ethically. It is not clear at this point whether these conditions will be met. Instead, some people seem to want to shut down the technology or at least pretend that it will have no impact on them.
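To make that last point concrete, here is a toy “bigram” model in Python – my own illustrative sketch, nowhere near the scale or architecture of ChatGPT – showing what “statistical relationships between tokens” means in miniature. It counts which word follows which in a tiny corpus and always picks the most frequent follower; at no point does it know what any word means.

```python
from collections import Counter, defaultdict

# A toy corpus: the "training data" for our miniature language model.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which token follows which: these counts are the model's only "knowledge".
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_token(token):
    """Pick the most frequent follower - pure statistics, no meaning involved."""
    return following[token].most_common(1)[0][0]

# Generate a short continuation from a prompt token.
sequence = ["the"]
for _ in range(4):
    sequence.append(next_token(sequence[-1]))
print(" ".join(sequence))
```

The continuation can look perfectly plausible while the model has no idea what a cat or a mat is; the statistics do all the work, and the meaning is supplied by the human reading it.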

I have heard numerous academics respond to this technology by demanding a return to in-person, handwritten exams. (Would it not be better to rethink and redesign assessment, with this new technology in mind?) I have even heard some lecturers call for a complete ban on this technology in education. (Is that possible? Even if it were, would it be fair to shield students from tools they will have to use when they enter the workforce?) 

* * *

Fear of new technology dates back millennia. Plato, in the Phaedrus, a work composed about 370 BCE, has Socrates argue against the use of writing: 

“It will implant forgetfulness in their [the readers’] souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”

Ironically, we only know about Plato’s argument against writing because it was written down.

More recently, some critics argued that the introduction of calculators would impair students’ mathematical ability. (The research is clear: children’s maths skills are not harmed by using calculators – so long as the devices are introduced into the curriculum in an integrated way.)  Even more recently, some people argued that spellcheckers would impair students’ ability to spell correctly. (It seems the reverse might be the case: students are getting immediate feedback on spelling errors and this is improving their spelling.)

Perhaps it is a natural human response to fear any new technology. And in the case of generative AI there are legitimate reasons for us to be fearful – or at least to be wary of adopting the technology.

But the technology is not going to go away. Indeed, it will almost certainly improve and become more powerful. I believe that if we are thoughtful in how we introduce AI into the curriculum; if we focus on how AI can support people to achieve their goals rather than replace people; if we produce a generation of students that use the technology effectively, ethically, and safely – well, we could transform education for the better.  

Credit Image: Photo by Stable Diffusion 2.1

© 2024 Tel Tales

Theme by Anders Norén