
ChatGPT and Assessment (Part 2)

 

Continuing the series on ChatGPT and Assessment (Part 1), this blog post looks at the impact the tool is starting to have on this area of language education and what teachers can do about it. 

Already, educators are starting to question whether the emergence of ChatGPT means the end of assessment as we know it. 

How can the integrity of assessments be maintained when students are able to use nefarious means to complete them? asks learning elearning, stressing the importance of adding context to any assignment that is set and ensuring students take responsibility for their learning (easier said than done?).

The blog links to a number of other posts that discuss the topic:

  • FE News proposes using a variety of question types, different types of tests (e.g. oral presentations or practice-based assessments with students being observed) and more effective proctoring systems.
  • The Conversation thinks the emergence of the tool is an opportunity to rethink assessment altogether. They warn teachers who may be tempted to use ChatGPT for marking papers that it may be more likely to give higher grades to students who write in a style it is more familiar with. Student cheating is discussed, with the blog post saying that if you are a teacher marking 200 pieces of written work from students, then in all likelihood you will pass those generated by ChatGPT. The challenge here is to make assessment more authentic, meaningful and useful, "measuring students' knowledge and skills in a way that is particularly tailored to their own lives and future careers." The author goes on to say that assessments requiring students to apply knowledge to practical or problem-solving tasks situated in a real, local context would be one way of dealing with this.
  • WonkHE suggests trusting students, asking whether AI advancements highlight a problem with assessment itself. The solution does not lie in employing detection tools to try to spot when students are using AI, since all of these tools are flawed and some even flag human writing as AI-generated. Neither is banning the tools an effective solution. They are here to stay. So what can teachers do? Well, in the real world, outside academia, people who work are going to be using these tools to save time and support what they do. The blog post argues we should be changing the face of assessment, designing assessment tasks that have academic integrity but are also designed in the knowledge that these tools exist and will (should) be used. It's mentioned that these tools often "hallucinate" answers, providing responses that are grammatically correct but invent facts and spurious references. One suggestion for an interesting assessment design would be to allow students to start with what a tool such as ChatGPT generates and ask them to improve on it, perhaps by adding references that support the argument.
  • The focus of the Pulse post is that ChatGPT fails most parts of the assessment tasks the author sets his students. The key is to steer clear of generic tasks, and the key to understanding ChatGPT is that "it guesses" answers rather than "engage in any thinking." Over time it will make better guesses, but it doesn't think, which is why using the term 'artificial intelligence' to describe the tool is misleading. The author's solution is to ask students to develop lesson plans, to orally deliver part of the plan (in class or via video), and to justify, via commentary, the decisions behind what they include in the plan. The important thing here is to focus on the process rather than the finished product. ChatGPT can produce a finished product, the artefact. The author also recommends making the assessment tasks authentic, noting that ChatGPT draws upon a lot of nonsense when it comes to formulating answers. When asked about learning, for instance, it produces answers containing "modality-based learning styles, right-brain, left-brain nonsense, and digital natives claptrap among a steady rotation of the greatest hits of misconceptions of teaching and learning." Designing assessment tasks that require critical thinking and asking students for references is important.
  • The UTS article suggests ways in which students and teachers can draw upon AI to support their work, including asking students to use ChatGPT to generate a response to an assessment task and then setting up criteria to critique the response and provide feedback to improve upon it. This is an approach that understands these tools are here to stay and will be used, and it shifts the focus to helping people make better use of them, promoting critical and digital literacy.

This article published in the Journal of Applied Learning and Teaching asks whether ChatGPT means the end of traditional assessments in higher education.

In addition to providing comprehensive background information on OpenAI and ChatGPT, and reviewing the existing literature, the authors of the article tested the software with different queries and reflect on some of these. In particular, the table on strengths and weaknesses of the tool is interesting, and the article offers the following recommendations:

Generally, we advise against a policing approach (that focuses on discovering academic misconduct, such as detecting the use of ChatGPT and other AI tools). We favour an approach that builds trusting relationships with our students in a student-centric pedagogy and assessments for and as learning rather than solely assessments of learning (Wiliam, 2011; Earl, 2012). The principle of constructive alignment asks us to ensure that learning objectives, learning and teaching and assessments are all constructively aligned.

One response to the emergence of this tool, aimed at trying to stop students from using it to complete assignments, has been to design assessment tasks that make it difficult for students to use ChatGPT to complete them, for instance Outsmarting ChatGPT: 8 Tips for Assignments it can't do.

This article suggests the following:

1. Ask students to write about something deeply personal
2. Center a writing assignment around an issue specific to the local community
3. Direct students to write about a very recent news event
4. Have students show or explain their work
5. Ask students to give an oral presentation, along with the written work
6. Return to a pre-digital age and ask students to handwrite their essays in class
7. Put project-based learning to work
8. Run the assignment through ChatGPT before giving it to students

Some of these suggestions are better than others. When it comes to suggestion number six, the JALT article had this to say:

A simple solution to the problem of students using ChatGPT would be to use physical closed-book exams where the students write by hand, using only pen and paper (Cassidy, 2023) – for online exams, proctoring/surveillance software can be used. However, such an approach to assessment (or at least an over-reliance on it) has been increasingly criticised as no longer contemporary, with students cramming less-than-useful information into their heads, only to forget much of it shortly after their examinations (Van Bergen & Lane, 2016). With a focus on graduate employability, the skill to ace closed-book exams seems rather irrelevant.

How do I feel about the eight suggestions above? One thing I feel very strongly about is number 6. I don't think it is useful for teachers or students to return to using pens to produce their work. It ignores the fact that in the real world these tools now exist and are being used. What are we preparing students for if not how best to take their place in the real world?


 

 

 
