Tuesday

The AI learning paradox


On his Substack, Jason Gulya outlines a paradox:
"Learning with AI tools suffers from a paradox. To use AI as an effective tool, learners need to check the outputs. To check the outputs, learners need to already have some expertise on the topic. Or they need to have the research skills they're just in the process of developing."

It rings true. As we all know, Large Language Models hallucinate. Their outputs are grammatically correct and seem coherent, but they can often be factually incorrect or nonsensical. There are examples of professionals being caught out by this, such as the lawyers who submitted fake court citations because ChatGPT invented them.

Even research tools such as NotebookLM carry a prominent warning: 'NotebookLM can make mistakes, so double-check it'.

Because they cannot always be trusted, LLMs work well as assistants, but students must be taught critical literacy skills when using them.
