ChatGPT: What it's not

[Graphic by HFS Research]
I've recently seen a lot of posts on social media criticising ChatGPT: warning that it delivers factually incorrect responses, showing how you can trick it into giving the wrong answers to maths problems, complaining that it makes up references to sources that don't exist. What all these posts really show is their authors' misunderstanding of what ChatGPT is and what it isn't.

The graphic above (by HFS Research) is useful. In another article, HFS Research write:

"Let’s not forget the platform is limited to the input the algorithm was trained on... it's a large language model, GPT-3 (Generative Pretrained Transformer 3)..."

What does ChatGPT say about itself? It says it was:-

…trained using a machine learning technique called unsupervised learning, which means that it is trained to generate text by predicting the next words in a sequence based on the ones that come before it, without the need for human-provided labels or annotations. This allows GPT-3 to generate text that is often indistinguishable from text written by a human. GPT-3 is considered to be one of the most advanced language processing models available, with many applications in natural language processing tasks such as language translation, text summarization, and question answering.
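
To make the "predicting the next words in a sequence" idea concrete, here is a minimal sketch in Python. GPT-3 itself is not openly downloadable, so the sketch assumes the much smaller open-source GPT-2 model from the Hugging Face transformers library as a stand-in; the principle is the same.

    # Minimal sketch of next-word prediction (GPT-2 standing in for GPT-3).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits          # a score for every word in the vocabulary
    next_token_id = int(logits[0, -1].argmax())  # pick the single most likely next token
    print(tokenizer.decode(next_token_id))       # typically " Paris" - a prediction, not a looked-up fact

The point of the sketch is that no store of facts is consulted: the output is simply whichever word scores highest given the words that came before, which is why fluent text and true text are not the same thing.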

There are concerns that the ability of ChatGPT to generate conversational text gives it the potential to generate fake news or other misleading content. As this article from Monash University states:

It’s also important to remember that the ChatGPT model does not have its own thoughts or opinions. It solely depends on the user and how they use it...it can be difficult for users and the third parties to understand how the model arrived at a particular output.

ChatGPT is not a knowledge tool and not a search engine, although this type of software will surely soon lead to tools whose outputs are validated and can be better trusted. This has already started with semantic search engines (sketched briefly below). It will undoubtedly change the way we search for, and create, content for the web. The Search Engine Journal suggests the following:-

Create content that clearly and concisely answers a common query at the top of the page before delving into more specific details. Make sure to use structured data to help search engines understand your content and context...It’s time to stop creating content around keywords. Instead, you should be thinking about broad topics in your niche that you can cover in-depth... Instead of creating dozens of short, disparate pages, each with its own topic, consider creating “ultimate guides” and more comprehensive resources that your users will find valuable. 
The goal here is to create comprehensive, original, and high-quality resources. Rather than returning factual data drawn from identifiable sources, large language models such as ChatGPT generate text that seems coherent and credible, but which may in fact be false.
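
On the semantic search point above: those engines compare the meaning of a query with the meaning of documents rather than matching keywords, typically by turning both into numerical vectors. Here is a minimal sketch of the idea in Python; the sentence-transformers library and the model name are my own illustrative choices, not something prescribed by the articles quoted here.

    # Minimal sketch of semantic search: rank documents by meaning, not keywords.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    documents = [
        "ChatGPT is a large language model that predicts the next word in a sequence.",
        "Semantic search ranks documents by meaning rather than by keyword overlap.",
        "Structured data helps search engines understand the context of a page.",
    ]
    query = "How do meaning-based search engines work?"

    doc_vectors = model.encode(documents)    # one vector per document
    query_vector = model.encode(query)       # the query, in the same vector space

    scores = util.cos_sim(query_vector, doc_vectors)[0]  # similarity of the query to each document
    best = int(scores.argmax())
    print(documents[best])                   # the document closest in meaning, not in keywords

The important difference from ChatGPT is that the search still returns documents that actually exist; the model is only used to rank them by meaning, which is why validated, more trustworthy outputs are plausible here.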

ChatGPT only processes language: it responds to input by predicting the words most likely to follow in a sequence, generating text (not necessarily facts). On asking ChatGPT if it can replace traditional journalism, for instance, the tool quickly responds that it is "a language generation tool, not a journalism tool" and is "not capable of replacing traditional journalism or the work of human reporters and journalists."

It is, then, perhaps most interesting for those of us who teach or help people learn languages, but it can also be used to enhance the productivity and capability of any profession.


