Sunday, April 30, 2023

To fail is human - let's share and learn from it

Photo by Michael Dziedzic on Unsplash

Behind every success there are lots of failures. Papers that never got published, projects that didn't get funded, courses that flopped, examinations failed, opportunities missed and so on. It happens to everyone but is seldom talked about or analysed. The success cult promoted on sites like LinkedIn shows a steady stream of successful people doing great things (more often than not "awesome"). Rather than being inspired, I have often been a bit depressed when scrolling through all the success stories. It's a similar feeling at conferences, which are also celebrations of success. I don't mean that we shouldn't celebrate success, but there are also lessons to be learned from less successful activities, since we can all relate to them. We may not feel we have ever reached the heights of the best practice cases, but we can all identify with schemes that didn't win any silverware. Yet it's so hard to get people to share those experiences - it takes courage to admit your failures. We could learn a lot by sharing these examples and discussing how we could improve, and most importantly by hearing that even the most respected educators have failed many times in their careers.

This is the gist of a nice article by Tracy Nevatte in Times Higher Education, Lead by example and share your failures. She calls on senior academics to share their failures and how they contributed to later success. Many young researchers and teachers despair at repeated rejections and wonder if they are really cut out for a career in education, and the reassurance that everyone has felt like that at some point can be more inspiring than listening to stories of constant success.

We rarely see senior academics share their failures, either with each other or with those at the start of their careers, but their career trajectory is undoubtedly full of them. Do they not share these stories because they’re ashamed or, rather, do they not see them as failures in the first place? The latter seems more likely. Only when we normalise failure, and take the isolating power of it away, can failures equal success. But it’s going to take effort from early career researchers, research leaders, institutions and funders to get there.

There are indeed failure conferences, sharing experience and discussing how to improve - see FailCon for example. I've never managed to attend one but wish I had been able to. It's not easy, however, to attract speakers who are willing to talk about their less successful ventures, and you certainly don't get any career points for doing so. Being a keynote speaker at a failure conference would not be something to post on LinkedIn. But we need to remove the shame and stigma and dare to share. Realising that even the top practitioners have a long string of flops behind them can reassure many who feel like giving up. By opening up like this and discussing our shortcomings we can also move away from the toxic overworking culture that has so often been spread on social media, with people bragging about the unfeasibly long hours they spend working on their projects, papers, course design and project applications.

This post was written without any contribution from AI. I wonder if AI can discuss its own vulnerability ...

Saturday, April 22, 2023

Self-assessment of digitally enhanced learning and teaching - overcoming inertia

Photo by Ross Sneddon on Unsplash

The pandemic threw all educational institutions into the deep end of the educational technology pool. Adapting to what was for most institutions a relatively new form of teaching and learning was a traumatic but also transformative experience. In the wake of that experience the most obvious strategy was to take stock and make a thorough review of what worked, what didn't work and how to improve in terms of using digital technology. In an increasingly unstable and unpredictable world the likelihood of further crises is extremely high, and so is the need to ensure that education can adapt quickly.

There is no shortage of research, reports, guidelines, tools, webinars and conferences to help educational institutions improve their use of educational technology in teaching and learning. Organisations like the European Commission, EUA (European University Association), EDEN (European Distance and E-learning Network) and many others have run projects, produced reports and run dozens of webinars and conferences all based on extensive research, but somehow they seldom result in major changes on the ground. It's not simply about the adoption of technology - that is really not the main point - it is a change towards more inclusive and active forms of teaching and learning. It's about learning to learn by active involvement in meaningful collaborative work where technology is an enabling factor. But the main barrier is the reluctance to change from the traditional information transfer model that so many people feel comfortable with and which is perceived as effective and indeed symbolic of higher education.

An excellent way to move towards this is to look carefully at how technology is used in the institution today and how this contributes to a more holistic view of teaching and learning - a process of self-assessment. This has been the focus of a recent EUA project, DIGI-HE, that I have been involved in (on the advisory board). The project has included numerous studies, consultations and thematic peer groups, reaching a broad range of educational institutions in various disciplines. One report in particular offers a comprehensive overview of the wide range of self-assessment tools available and advice on their use: Developing a high performance digital education ecosystem - Institutional self-assessment instruments.

Set against this prerogative and growing strategic interest, this report presents a review of 20 instruments from around the globe designed for self-assessment of digitally enhanced learning and teaching at higher education institutions. It offers a number of insightful observations concerning their use (or non-use) by institutions for promoting both quality enhancement and digital capacity development. It should be of immediate interest to higher education institutions, but also to policy makers, developers of instruments, and generally, to all those who seek information on such instruments.

The project also produced a MOOC on FutureLearn, Inside Digital Higher Education: Self-Assessment Guide for Educators. Here institutional leaders are taken through the process of reviewing the institution's current strategies and planning a self-assessment, looking at both risks and opportunities. The course was run during the spring but is available as an asynchronous self-study course. This is a good springboard to kick-start a change process, and the project's various reports provide further guidance and inspiration from institutions that have already started their transformation process.

This is one example of the abundance of guidance and support available for digital transformation and pedagogical development, but as the saying goes: you can lead a horse to water but you can't make it drink. Despite the clear benefits of conducting a self-assessment there seems to be a great reluctance to do so, despite the lessons of the pandemic and the abundance of research into active collaborative learning. The first barrier is the sheer number of tools, which creates anxiety about which one to choose. Faced with too much choice we simply don't make a choice. I think we all experience feelings like this in our daily lives when faced with the myriad of choices available in everything from insurance to telecom providers. It seems that we all suffer from inertia when it comes to actions that threaten our comfortable balance.

Self-assessment also demands a lot of time and energy at a time when most people already feel stressed and overworked. It also risks exposing wasteful practices or inequalities in the present system and thus creating conflict. The pandemic was certainly disruptive (tragically so for millions around the world) and there were signs that we would need to rethink our structures and systems to adapt to new challenges. However, we seem to have simply reverted to old practices again without much reflection. Changing the way we live and work is too demanding so we return to the default. That's why we can't expect institutions to embark on such costly processes voluntarily (with a few exceptions). Governments and authorities need to help them find space and time for these processes and offer incentives for doing so. Then we can hopefully create some momentum that will generate interest and widen involvement.

Wednesday, April 12, 2023

Reading around the world, with a little help from my network

I read a lot; it goes with the job of course but even outside work I just keep reading. At the moment I'm busy with an extremely rewarding project to read at least one novel from every country I have visited, 56 in total. In view of the environmental impact of air travel, I can't hope for any more international travel unless overland, so I will now focus on travel in my own part of the world and appreciating my past travels. One way to do that is by reading.

The idea for my reading project came from Ann Morgan's inspirational book blog, A year of reading around the world, where she documents her quest to read a book from every country in the world in one year, all 195 of them - yes, even the Vatican City! I believe in setting achievable targets and decided to limit my total, but maybe once I've done them all I could just go on and see how far I get. To get the inside story of Ann's reading marathon you can watch the TED talk she gave a few years ago.

She reached out to her readers for tips on which books to choose and I decided to make use of my own network of educators around the world in the same way. I've written many times about the concept of personal learning networks and how my contacts have helped me in so many ways over the years, answering questions, recommending professional literature and sharing practice. This time I contacted them and asked for recommended reading from their countries. As a result, most of my reading list has come from personal recommendations, making the books even more special, reflecting both the country and the tastes of my friends.

I have also been a bit liberal with my definition of countries. Three of them are self-governing Nordic territories - Greenland, the Faroe Islands and the Åland Islands - but they all have distinct histories and cultures and deserve special status in my list. I have also included a country that no longer exists, East Germany (DDR), which I visited several times and which had its own literary culture far removed from that of West Germany. Some countries like the UK, Ireland, Sweden, Finland, Norway, Denmark, France, Russia and Germany were well covered before I even started, but I was surprised to discover that I had never actually read anything from countries like Spain, Portugal or Italy (apart from Roman authors from 2,000 years ago). I've now got 16 left before I reach my goal. The trickiest hurdle to clear will be Liechtenstein since, as far as I can see, it has no novelists who have been translated into English. I have a basic knowledge of German but have never studied it and couldn't tackle a novel. Even Ann Morgan had trouble with this one and in the end read a travel book about Tibet by an author from Liechtenstein. I am restricted to reading books in English or the Scandinavian languages, though maybe with a bit of patience I could manage one in French.

We tend to be very ethnocentric in our reading. Most people focus on authors from their own country or from the homes of the major publishers: the USA and the UK. Only when the Nobel prize is announced each year do authors from other countries get a chance to be in the spotlight. Just reading one book from a country doesn't give me much insight into its culture, but at least I have opened the door. In many cases I have found other books that I hope to follow up in the future.

Another aspect of this activity is that I am affirming my love of printed books. I have a lot of packed bookshelves in the house and this project is filling them to the last centimetre. Of course I could save space and time by reading them as e-books or even audio books, but then I couldn't really see my collection. My bookshelves are like a trophy cabinet in the same way my record collection used to be. My disenchantment with the digital tsunami has led me to return to reading printed material, even the daily newspaper in the letterbox.

After the sadness of my previous post I have decided that I want to keep this blog going but widen its horizons outside the confines of educational technology. I don't intend to turn it into a book blog but I think I may include posts that reflect on my reading in the footsteps of my travels.

Saturday, April 1, 2023

Frozen in the headlights of AI

Photo by Eugene Triguba on Unsplash

This has been the longest break between blog posts since I started this in 2008. I've been busy with other things but I also have to admit that it's hard to find a topic that inspires me just now. My retirement last year has meant that I no longer spend hours reading reports, articles and news items in the field and I am not in direct daily contact with educators and researchers to provide input and inspiration. I am still taking on short assignments but have no intention of returning to full-time work. A major reason for retiring early was that I realised that I had lost my enthusiasm for the field. Educational technology is all about big business and is dominated by a few global corporations profiting from all the data they acquire from students and teachers alike. Although there are still havens of openness and collaboration, most of the internet is controlled by the big five corporations and driven by greed. I'm not sure I want to continue encouraging the use of technologies that I'm feeling increasingly uncomfortable about. This theme has been well documented by Audrey Watters, who, after many years of exposing the myths and bluffs of the educational technology industry, finally decided to leave the field completely and start a new life (see her present blog, which today is about fitness and nutrition instead of technology).

I find myself frozen in the headlights of the AI juggernaut and realise that I don't have the curiosity and energy to find out more and test new opportunities. I see many colleagues presenting optimistic ideas for how we can use AI to benefit education and how tools like ChatGPT are simply the modern equivalents of the advent of the pocket calculator or the iPhone. Yes, there are certainly benefits to using AI in education as long as we do so with caution and as long as we have control over how the data gathered is stored and used in the future. But I can't see that happening when there are such overwhelming commercial interests involved. I see enormous potential for misuse in the form of surveillance, control, automation of skilled work and an explosion of fake news and propaganda. Stop the world, I want to get off.

I found some consolation in reading Tony Bates' latest post, What are the main issues facing digital learning in the future?, where he announces that he will be scaling down his work in educational technology, citing AI as the insurmountable barrier.

I could continue in the field and still contribute to the important but specific areas of online and digital learning, but AI is the deal breaker. I would have to work so hard to become expert in this area (and even then I may not have the mathematical skills), and it is now so critical to the future of digital learning that expertise and full understanding of AI and the issues around its use in post-secondary education and teaching are absolutely essential. I hope there are younger, brighter educators coming into the field who are willing to develop this area of expertise.

The challenge of learning about AI and its implications is one step too far for me too. AI is a complete game changer and if I am not willing to devote a lot of time to learning more about it, I don't think I can be relevant in the field anymore. So I'm unsure about the future of this blog, which has been a part of my life for so long. I'll wait and see if I find new inspiration in the coming months and if not I can round it off with a review of what I have learned from the process.

Monday, February 6, 2023

Artificial intelligence - instant gratification but what do we learn?

Photo by Joakim Honkasalo on Unsplash

Artificial intelligence (AI) has become the default centre of attention in education this year, with enthusiasts telling us to accept and even welcome it into our teaching and learning whilst sceptics are busy looking for tools that can detect AI-generated texts, videos and images in order to combat the expected wave of cheating. The tech giants are already on the case, with Microsoft planning to embed ChatGPT in its products and Google announcing the launch of its own version of the tool. There's big money to be made out there and lots of data to be harvested and distilled.

Cheating in exams is probably the least of our worries. This eternal battle reminds me of the wonderful Spy vs. Spy cartoons in Mad magazine, where two spies, identical except for one being dressed in white and the other in black, engage in a never-ending tit-for-tat battle using all sorts of secret weapons. Every new secret weapon prompts an even better anti-secret-weapon weapon in a parody of the cold war antics of the USA and the USSR. In recent years we've had waves of plagiarism detection tools countered by essay mills where you can buy off-the-shelf essays or pay someone else to write it all for you. Interestingly, the biggest vendor of plagiarism detection software, Turnitin, has announced its own AI-detection software. And so it goes on. It's time to break this war of attrition by changing to other forms of assessment based on personal reflection, interviews and projects. Many teachers have already made this transition.

A more balanced response to AI in education appears in an article on Slate, You’re Not Going to Like How Colleges Respond to ChatGPT. The authors see the spy versus spy scenario as one orchestrated by the tech companies so that educators will feel forced to invest in AI detection software (you can bet your life that this will need to be updated regularly and create a never-ending income stream).

Whenever fears of technology-aided plagiarism appear in schools and universities, it’s a safe bet that technology-aided plagiarism detection will be pitched as a solution. Almost concurrent with the wave of articles on the chatbot was a slew of articles touting solutions. A Princeton student spent a chunk of his winter break creating GPTZero, an app he claims can detect whether a given piece of writing was done by a human or ChatGPT. Plagiarism-detection leviathan Turnitin is touting its own “A.I.” solutions to confront the burgeoning issue. Even instructors across the country are reportedly catching students submitting essays written by the chatbot. OpenAI itself, in a moment of selling us all both the affliction and the cure, has proposed plagiarism detection or even some form of watermark to notify people of when the tech has been used. Unfortunately, the tool released is, according to the company, “not fully reliable.”

Once again it's a case of whether we should develop new technologies just because we can and then let the world deal with the consequences. Who benefits? Certainly not educators or students but then again nobody asked us.

However, one thing we can be sure of is this: OpenAI is not thinking about educators very much. It has decided to “disrupt” and walk away, with no afterthought about what schools should do with the program.

The texts produced by AI are often impressive - articles with references, instant summaries, creative writing, poetry, programming - but the shortcomings are becoming clearer as people experiment more deeply. Basically it reformulates what it finds in the sources it trawls, including some that would not be considered reliable, and sometimes it simply makes a guess at an answer, as Maha Bali describes in How *Not* To Be Overly Impressed with #ChatGPT. These flaws make it untrustworthy at present, but I suspect it will improve very rapidly.

Yes, it's impressive to get an instant blog post or essay, but what do you learn from that? Isn't learning all about doing this ourselves: researching other sources, working out connections, following a train of thought and putting it all together in a coherent text? The instant answer teaches you nothing. There are no magic shortcuts to learning, as we should have realised by now after so many commercially driven hype cycles around things like smartboards, iPads, MOOCs, virtual reality and so on. The learning process is complex and takes place in your head, irrespective of the gadgets you have available. The Slate article continues:

To outsource idea generation to an A.I. machine is to miss the constant revision that reflection causes in our thinking. Not to mention that the biggest difference between a calculator and ChatGPT is that a calculator doesn’t have to check its answer against the loud chaos of everything toxic and hateful that has ever been posted on the internet.

AI will soon be able to write fact and fiction, compose music, produce art works, write programs, design clothes, automatically translate from one language to another and much more. When all this has been automated what is left for us to do apart from endless consumption? We need to learn how to use AI for our benefit but focus more on our own creative energy and the value of learning for our own development. We must not simply accept technology just because it's there. 

It’s a failure of imagination to think that we must learn to live with an A.I. writing tool just because it was built.

AI is developing fast and I'm struggling to make some kind of sense of it and how it affects education. Please view this post as muddled work in progress.

Wednesday, January 11, 2023

AI-driven voice simulation - do we really want to go there?

Photo by Aditya Saxena on Unsplash

The old saying that curiosity killed the cat seems to apply equally well to us. Even when we see the dangerous potential of new technology we just keep on developing it. We continued developing nuclear weapons even when we saw the devastation they caused, and maybe our curiosity about artificial intelligence will lead us to new disasters. As I wrote in the last post, we can't resist opening Pandora's box.

In the wake of the panic caused by ChatGPT (an excellent overview of what we know so far is in a post by Mark Brown, Ten facts about ChatGPT), I found an article in Ars Technica, Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio. Microsoft has apparently developed an AI text-to-speech model called VALL-E that can simulate a voice based on a short recording. Presumably the more input it has, the better it can simulate the voice. You can then let it read any text you wish in the voice of that person, thus enabling you to create fake statements. Even if you can certainly find beneficial uses for this, the potential for and consequences of misuse are terrifying.

Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn't), and audio content creation when combined with other generative AI models like GPT-3.

At first the fakes will be detectable, but the whole point of AI is that it improves. Combine this with tools for text, photo and video generation and the potential for misuse by governments, corporations, political parties, extremists and conspiracy theorists is enormous. Just because we can develop this technology doesn't mean that we should, to paraphrase the famous quote from Jurassic Park. Do we really want to open this box? Can't we just step back?

Microsoft try to sound reassuring in the article but I don't think we are capable of following any principles, no matter how well intentioned.

"Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models."

So what happens when AI becomes increasingly smarter and we can no longer trust what we read, hear or see? In case you wondered, I actually wrote this myself. 

Thursday, January 5, 2023

Artificial intelligence - opening Pandora's box

It has been a while since I last wrote here due to life events taking precedence. Meanwhile the world of educational technology has been buzzing about the advent of the AI application ChatGPT. This is a more user-friendly upgrade of GPT, which I wrote about back in October in The end of the essay? Ask ChatGPT a question and you'll get a very plausible and often accurate answer, in most languages. There are lots of interesting reviews out there and we're all trying to digest the potential consequences this will have for education. For example, have a look at a post by Tony Bates, Playing with ChatGPT: now I’m scared (a little), where he analyses the tool's answers to questions in his own field of expertise.

Of course the main topic for the media has been how students can use this to cheat without the risk of falling foul of anti-plagiarism tools. Why do we always assume the worst in our students? I've seen calls to return to the exam hall with paper-and-pencil examinations (though some institutions have never left this scenario) and speculation on tools that can detect AI-generated text (it takes a thief to catch a thief, I suppose). But these are futile attempts to stop the tide and it is vital that educators and decision makers keep up with the development of AI and find strategies to use it wisely. There are several good articles about how to use ChatGPT in your teaching, for example Ryan Watkins' Update Your Course Syllabus for chatGPT. Tips include asking students to fact-check and improve on the tool's answers to questions and using other media for assignments, such as mindmaps, podcasts and video (at least until AI can generate even those!).

There are of course limitations with ChatGPT. According to a presentation by Torrey Trust, ChatGPT and education, the tool is not actually connected to the internet and cannot access texts and information from later than 2021. I assume that this is a temporary inconvenience and we can count on more advanced versions and competitors very soon. In addition, the tool collects data from all its users, which can then be shared with third parties (surprise surprise!). So before using this with students you'll need to discuss the privacy implications, as well as the implications for their own learning.

Even though the tool produces plausible and accurate answers to a lot of questions, it can also pick up biases from the texts it analyses and in some cases can provide false or misleading information, even backed up by non-existent sources, as shown in an article by Lucas Ropek in Gizmodo, How to Talk to ChatGPT, the Uncanny New AI-Fueled Chatbot That Makes a Lot of Stuff Up.

However, there is a bizarre exception to the program’s automated smoothness. The part where things get slightly uncanny (and, in this writer’s opinion, more creepy) is when the chatbot makes mistakes. The reason this is so weird is that instead of just short-circuiting or spitting out an automated response like “Error” or “Please Try Again” when it doesn’t know something, ChatGPT actually makes something up. Often this thing is a rough approximation of the correct answer. This is weird because it’s also what humans tend to do, conversationally. Unsure of how to answer a particular question or address a topic, most people try to bullshit their way out of it. Well, so does ChatGPT.

In some cases it will admit its limitations, for example when asked to make predictions, summarise texts or make personal reflections, but many writers have found it worryingly good at providing flawed answers. The danger is that we trust AI too much and assume it knows everything. Using it in, say, health care, social work or employment would seem to be asking for trouble.

Then we have the thorny issue of copyright. Who owns the texts produced by AI and how can they be cited? Does the person who wrote the prompt have copyright, or the programmers who devised the tool, or the owner of the company? Jonathan Binns (The impact of AI on copyright law) asked ChatGPT what the impact of AI would be on UK copyright law and got this answer:

It is difficult to predict the exact impact that AI will have on copyright law in the UK. However, it is likely that the use of AI in creative industries, such as music and art, will raise questions about the ownership of works created by AI. There may also be discussions about whether AI itself can hold copyrights for works it creates. It is possible that the UK government will need to update copyright laws to address these issues.

AI is also making a big impact on creative industries like music. ChatGPT and other tools can write song lyrics to order in seconds in the style of any famous artist. Music generator tools are already producing songs, jingles and soundtracks and although they are hardly masterpieces they certainly threaten a lot of people's livelihoods, as described in an article in Billboard, What Happens to Songwriters When AI Can Generate Music? and in a video by Rick Beato, How Auto-Tune DESTROYED Popular Music. The point in Beato's video is that since so much popular music over the last 20 years has been increasingly computer-generated the jump to AI may not be noticed.

All this raises so many questions about the future of education and work, and we are only glimpsing the start of AI's development. We have opened Pandora's box and, as in the myth, it will be impossible to close it again. There are many exciting uses, but the opportunities for blurring fact and fiction are enormous, to the point that it may soon be impossible to distinguish between them. What happens when we can't trust anything? I am not confident that we are able to handle the genie we have created.