It has been a while since I last wrote here, due to life events taking precedence. Meanwhile the world of educational technology has been buzzing about the advent of the AI application ChatGPT, a more user-friendly upgrade of GPT, which I wrote about back in October in The end of the essay? Ask ChatGPT a question and you'll get a very plausible and often accurate answer, in most languages. There are lots of interesting reviews out there, and we're all trying to digest the potential consequences this will have for education. For example, have a look at a post by Tony Bates, Playing with ChatGPT: now I'm scared (a little), in which he analyses the tool's answers to questions in his own field of expertise.
Of course the main topic for the media has been how students can use this to cheat without the risk of falling foul of anti-plagiarism tools. Why do we always assume the worst in our students? I've seen calls to return to the exam hall with paper and pencil examinations (though some institutions have never left this scenario) and speculation about tools that can detect AI-generated text (it takes a thief to catch a thief, I suppose). But these are futile attempts to stop the tide, and it is vital that educators and decision makers keep up with the development of AI and find strategies to use it wisely. There are several good articles about how to use ChatGPT in your teaching, for example Ryan Watkins' Update Your Course Syllabus for chatGPT. Tips include asking students to fact-check and improve on the tool's answers to questions, and using other media for assignments, such as mind maps, podcasts and video (at least until AI can generate even those!).
There are of course limitations with ChatGPT. According to a presentation by Torrey Trust, ChatGPT and education, the tool is not actually connected to the internet and cannot access texts and information from later than 2021. I assume that this is a temporary inconvenience and that we can count on more advanced versions and competitors very soon. In addition, the tool collects data from all its users that can then be shared with third parties (surprise surprise!). So before using this with students you'll need to discuss the privacy implications, as well as the implications for their own learning.
However, there is a bizarre exception to the program's automated smoothness. Things get slightly uncanny (and, in my opinion, creepier) when the chatbot makes mistakes. Instead of short-circuiting or spitting out an automated response like "Error" or "Please try again" when it doesn't know something, ChatGPT actually makes something up, often a rough approximation of the correct answer. This is weird because it's also what humans tend to do conversationally: unsure of how to answer a particular question, most people try to bluff their way out of it. Well, so does ChatGPT.
In some cases it will admit its limitations, for example when asked to make predictions, summarise texts or offer personal reflections, but many writers have found it worryingly good at providing flawed answers. The danger is that we trust AI too much and assume it knows everything. Using it in, say, health care, social work or employment decisions would seem to be asking for trouble.
Then we have the thorny issue of copyright. Who owns the texts produced by AI, and how can they be cited? Does the person who wrote the prompt hold the copyright, or the programmers who devised the tool, or the owner of the company? Jonathan Binns (The impact of AI on copyright law) asked ChatGPT what the impact of AI would be on UK copyright law and got this answer:
It is difficult to predict the exact impact that AI will have on copyright law in the UK. However, it is likely that the use of AI in creative industries, such as music and art, will raise questions about the ownership of works created by AI. There may also be discussions about whether AI itself can hold copyrights for works it creates. It is possible that the UK government will need to update copyright laws to address these issues.
AI is also making a big impact on creative industries like music. ChatGPT and other tools can write song lyrics to order in seconds, in the style of any famous artist. Music generator tools are already producing songs, jingles and soundtracks, and although they are hardly masterpieces, they certainly threaten a lot of people's livelihoods, as described in an article in Billboard, What Happens to Songwriters When AI Can Generate Music?, and in a video by Rick Beato, How Auto-Tune DESTROYED Popular Music. Beato's point is that since so much popular music over the last 20 years has been increasingly computer-generated anyway, the jump to AI may not even be noticed.
All this raises so many questions about the future of education and work, and we are only glimpsing the start of AI's development. We have opened Pandora's box and, as in the myth, it will be impossible to close it again. There are many exciting uses, but the opportunities for blurring the line between fact and fiction are clearly enormous, to the point that it may soon be impossible to distinguish between them. What happens when we can't trust anything? I am not confident that we are able to handle the genie we have created.