Of course, the main topic for the media has been how students can use this to cheat without the risk of falling foul of anti-plagiarism tools. Why do we always assume the worst of our students? I've seen calls to return to the exam hall with paper-and-pencil examinations (though some institutions have never left this scenario) and speculation about tools that can detect AI-generated text (it takes a thief to catch a thief, I suppose). But these are futile attempts to stop the tide, and it is vital that educators and decision makers keep up with the development of AI and find strategies to use it wisely. There are several good articles about how to use ChatGPT in your teaching, for example Ryan Watkins' Update Your Course Syllabus for chatGPT. Tips include asking students to fact-check and improve on the tool's answers to questions, and using other media for assignments such as mind maps, podcasts and video (at least until AI can generate even those!).
There are, of course, limitations to ChatGPT. According to a presentation by Torrey Trust, ChatGPT and education, the tool is not actually connected to the internet and cannot access texts and information from later than 2021. I assume this is a temporary inconvenience and that we can count on more advanced versions and competitors very soon. In addition, the tool collects data from all its users, which can then be shared with third parties (surprise, surprise!). So before using it with students you'll need to discuss the privacy implications, as well as the implications for their own learning.
Although the tool produces plausible and often accurate answers to a lot of questions, it can also pick up biases from the texts it analyses and in some cases provide false or misleading information, even backed up by non-existent sources, as shown in an article by Lucas Ropek in Gizmodo, How to Talk to ChatGPT, the Uncanny New AI-Fueled Chatbot That Makes a Lot of Stuff Up.
In some cases it will admit its limitations, for example when asked to make predictions, summarise texts or offer personal reflections, but many writers have found it worryingly good at delivering flawed answers with confidence. The danger is that we trust AI too much and assume it knows everything. Using it in, say, health care, social work or employment would seem to be asking for trouble.
Then we have the thorny issue of copyright. Who owns the texts produced by AI, and how can they be cited? Does the person who wrote the prompt hold the copyright, or the programmers who devised the tool, or the owner of the company? Jonathan Binns (The impact of AI on copyright law) asked ChatGPT what the impact of AI would be on UK copyright law and got this answer:
It is difficult to predict the exact impact that AI will have on copyright law in the UK. However, it is likely that the use of AI in creative industries, such as music and art, will raise questions about the ownership of works created by AI. There may also be discussions about whether AI itself can hold copyrights for works it creates. It is possible that the UK government will need to update copyright laws to address these issues.
AI is also making a big impact on creative industries like music. ChatGPT and other tools can write song lyrics to order in seconds, in the style of any famous artist. Music generator tools are already producing songs, jingles and soundtracks, and although these are hardly masterpieces, they certainly threaten a lot of people's livelihoods, as described in a Billboard article, What Happens to Songwriters When AI Can Generate Music?, and in a video by Rick Beato, How Auto-Tune DESTROYED Popular Music. Beato's point is that since so much popular music over the last 20 years has been increasingly computer-generated, the jump to AI may not even be noticed.
All this raises so many questions about the future of education and work, and we are only glimpsing the start of AI's development. We have opened Pandora's box and, as in the myth, it will be impossible to close it again. There are many exciting uses, but the opportunities for blurring fact and fiction are clearly enormous, to the point that it may soon be impossible to distinguish between them. What happens when we can't trust anything? I am not confident that we can handle the genie we have created.
I'm still at the stage where I'm fascinated rather than afraid, thinking of all the questions I could ask it and receive a useful answer. I see some similarity to Google in the search engine's early years: you could put in a search and basically find the answers you were looking for, whereas Google searches these days lead mostly to advertisements. The fear I feel is more about what will happen when big businesses buy up these inventions and turn them into something that controls the information we have access to, in order to make everyone a customer.
ChatGPT is already storing users' data and has a tendency to make up answers if it can't find one, as the article I quote demonstrates. My next post includes a link to a nice summary of what we know so far.