Sunday, May 29, 2022

Platform literacy needed to avoid increasing inequalities in education

James Elder Christie: The Pied Piper of Hamelin. Public domain (CC0) on Wikimedia Commons

A major argument in the educational technology debate over the years has been that digital platforms and applications widen access to education. This has indeed been achieved to a large extent through open access to scientific articles, open educational resources published by many institutions around the world, global collaboration projects such as Wikipedia, as well as the use of learning management systems, video conferencing platforms and networking tools. The options for studying from home, even in sparsely populated regions, have increased dramatically and it would be easy to claim that the mission of widened access to education has been achieved.

However, as the power and sophistication of educational technology have increased, new inequalities and biases have appeared, and this is the topic of an excellent article by Laura Czerniewicz, Multi-layered digital inequalities in HEIs: the paradox of the post-digital society. Although millions of students and teachers were able to adapt successfully to online education during the pandemic, many suffered. Many students had poor internet access or could not afford to pay for it. Many had to study with only a mobile or had to share one device with the rest of the family, and in many countries reliable electricity is still a dream. The rollout of modern edtech assumes that everyone has the right devices, internet access and reliable electricity, and thus the digital divide is further accentuated.

The digital divide is alive and well; indeed the digital paradox is that even as the basics of the divide are addressed through access, more complex layers of exclusion are added; digital inequalities thus morph into new complicated forms. Nevertheless, fair and equitable technological infrastructure is the foundation of inclusion in HE: electricity, devices, ubiquitous connectivity and cheap data. These are essential but insufficient.

The pandemic has also forced educational institutions to invest in new technology and the urgency has meant that many have overlooked the serious implications for user privacy associated with much of today's IT industry. The growth of surveillance capitalism, as defined by Shoshana Zuboff, is particularly worrying with the big five tech giants (Amazon, Google, Microsoft, Apple, Facebook) controlling and storing so much of our daily lives.

The intensive digitalisation catalysed by the pandemic and concomitant “online pivot” means that HE is in danger of fast becoming a site of surveillance capitalism, with the concomitant dangers for equity, little transparency and unequal terms of engagement.

With such an upsurge in educational technology, many institutions have signed major contracts with companies without questioning what those companies will do with all the student data they gather. We all happily accept the terms and conditions, cookie settings and other privacy settings without really understanding the implications, and no matter how digitally literate we may be, those settings are not designed to be understood. The hidden biases of algorithms can further exclude some student groups, especially when combined with the rise of artificial intelligence in tracking student progress, identifying weaknesses and detecting cheating.

The amalgamation of the digital into higher education, through the dominant extractive economy, introduces complex and often invisible power dynamics into public higher education. The terms of engagement are imbalanced, hidden behind dense language and easy promises. There are especially profound implications for those with barriers to participation at individual and institutional levels. This has introduced several new inequities into the student experience and the sector.

Czerniewicz presents some strategies to prevent this rather dystopic development. Firstly, institutions need to develop better platform literacy and become more able to demand safeguards from technology providers. We need to decide whether the risks of buying a platform outweigh the advantages - no matter how cool it may be. Many technology decisions are governed by FOMO (fear of missing out) and this tendency must change. She advocates iterating towards equality, developing strategies to identify inequalities and minimising risks as far as possible, i.e. developing equitable, ethical data policies and frameworks.

These themes are further discussed in another recent article, Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people. This looks at how technology is used in education, health care, the workplace and criminal law to further disadvantage disabled and minority groups in society. Here too we see examples of societal biases being magnified by algorithms.
In recent years, schools have adopted technologically-augmented surveillance tools, some of which rely on predictive analysis and automated responses to flag suspicious behavior, trigger investigative or disciplinary action, or prioritize crisis response resources. The surveillance tools can have a disproportionate impact on disabled students and students of color - and likely on disabled students of color in particular.
Although the European General Data Protection Regulation (GDPR) has increased our awareness of some of these dangers, there is still a need for a deeper understanding of the consequences of the IT systems we use. If we are not careful we risk following the Pied Piper into the cave, dancing merrily to his tune. Sometimes we must dare to say no to the lure of technology.

Update
Further coverage on how some tech companies exploit student data, in this case children, can be read in a new post by Tony Bates, The perversion of the Internet: scraping and selling children’s data from ed tech tools.

Thursday, May 19, 2022

AI-generated essays - time to rethink written assignments


Students will employ AI to write assignments. Teachers will use AI to assess them. Nobody learns, nobody gains. If ever there were a time to rethink assessment, it’s now.

This is a quote from an article by Mike Sharples, London School of Economics, New AI tools that can write student essays require educators to rethink teaching and assessment. There are now tools that use artificial intelligence (AI) to generate highly plausible academic essays, complete with references. The article takes as an example a short essay, generated by the Transformer AI program GPT-3, about the problems with the popular concept of learning styles. The user simply entered the first sentence and the AI completed the essay. The result is not particularly insightful but good enough to pass, and it won't show up in any plagiarism check since the text is completely original. If the essay is fed back into the tool by the teacher, it can write a similarly plausible comment on the essay. The whole assignment can therefore be performed by AI, prompting the quote above.

Even if some of the essays generated this way still have weaknesses (the example in the article has false citations), the whole point of AI is that it is constantly learning and improving. The phenomenon is not new; it has been possible for many years to pay someone else to write your essays via essay mills, but now the human element has finally been removed. Does this mean the end of the written assignment as an examination form?
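Sharples' point becomes clearer with a sense of what "generating" means. A Transformer like GPT-3 is, at heart, predicting the next word over and over. The following toy bigram (Markov chain) model is a deliberately crude illustrative stand-in - nothing like GPT-3's neural machinery or training scale - but it completes a prompt in the same basic way: look at the last word, sample a word that plausibly follows it, repeat.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def complete(prompt, following, n_words=10, seed=42):
    """Extend the prompt by sampling a plausible next word at each step."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        candidates = following.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# A tiny made-up training text (a real model trains on much of the web)
corpus = ("learning styles are a popular idea but the evidence for "
          "learning styles is weak and the idea persists because the "
          "idea is intuitive")
model = train_bigrams(corpus)
print(complete("the evidence", model))
```

Scale this idea up to billions of parameters trained on vast text collections and the completions become plausible enough to pass - yet, because each word is freshly generated rather than copied, they match no existing text for a plagiarism checker to find.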

The author suggests a few ways of using the AI transformer constructively, for example by getting students to generate AI texts and then find faults in them and improve them. This reminds me of how some teachers tackled plagiarism by writing a sample essay/article that included several forms of plagiarism as well as poor citation practice, and asking the students to find the problems and correct them.

But the main point here is that we need to move on to new ways of assessing students and avoid examination methods that ask questions that can be automatically generated or copied from the internet. 
Finally, as educators, if we are setting students assignments that can be answered by AI Transformers, are we really helping students learn? There are many better ways to assess for learning: constructive feedback, peer assessment, teachback. If Transformer AI systems have a lasting influence on education, maybe that will come from educators and policy makers having to rethink how to assess students, away from setting assignments that machines can answer, towards assessment for learning.
Live assessment activities like interviews, presentations, debates or round table discussions can be run either on-site or online and are almost impossible to cheat in. But that brings us to the eternal question of how to move the focus in education from extrinsic motivation (exams, credentials) to intrinsic motivation (satisfaction, self-confidence, pride). Focus on competition, rewards and results encourages cheating among some, whereas activities that focus on community, learning for pleasure and intangible rewards are generally free of cheating. If learning is in the forefront there is simply no point in cheating. Hopefully AI-generated essays will remain a mere curiosity.

Saturday, May 7, 2022

Non-commercial social networking - a safe haven in a world of insecurity and surveillance?

With so many of our social media tools controlled by multi-billionaires and thriving on our personal data, it's no surprise to find that more and more people are opting out. In the wake of Elon Musk's takeover of Twitter there has been a significant rise in subscriptions to Mastodon, the non-profit open source alternative. This week, the European Union's European Data Protection Supervisor announced two new platforms, EU Voice and EU Video, that are open source and meet the data privacy conditions set down by GDPR (General Data Protection Regulation) and the Schrems II ruling (EDPS launches pilot phase of two social media platforms). They are based on existing non-commercial social platforms, PeerTube and Mastodon, and are now being pilot tested by a number of EU agencies. Wojciech Wiewiórowski, EDPS, makes the case for this initiative:

With the pilot launch of EU Voice and EU Video, we aim to offer alternative social media platforms that prioritise individuals and their rights to privacy and data protection. In concrete terms this means, for example, that EU Voice and EU Video do not rely on transfers of personal data to countries outside the European Union and the European Economic Area; there are no advertisements on the platforms; and there is no profiling of individuals that may use the platforms. These measures, amongst others, give individuals the choice on and control over how their personal data is used.

I hope this initiative succeeds. I love social media but find the present set-up increasingly distasteful and long for the opportunity to enjoy the benefits of networking without commercial exploitation. 

In addition there is now a growing number of open source social platforms that are interconnected under the banner of the Fediverse. This means that members of one platform can interact seamlessly with members of any other, without having to join them all. Much like e-mail or good old telephony, you can contact anyone thanks to a common standard - in this case the W3C's ActivityPub protocol. The video below explains the concept quite nicely.
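It's worth seeing how concrete those common standards are. Fediverse servers such as Mastodon and PeerTube discover each other's accounts via WebFinger (RFC 7033): given a handle like user@host, any server builds a well-known URL on that host and asks it about the account. A minimal sketch (not an official client; the handle is made up for illustration):

```python
from urllib.parse import urlencode

def webfinger_url(handle):
    """Build the WebFinger discovery URL (RFC 7033) for a Fediverse
    handle of the form 'user@host', e.g. 'alice@mastodon.social'."""
    user, _, host = handle.partition("@")
    if not user or not host:
        raise ValueError("expected a handle of the form user@host")
    # The account is identified by an 'acct:' URI in the resource parameter
    query = urlencode({"resource": f"acct:{user}@{host}"})
    return f"https://{host}/.well-known/webfinger?{query}"

# Any server can look up the handle without its users being members there:
print(webfinger_url("alice@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct%3Aalice%40mastodon.social
```

The JSON that such a URL returns points to the account's ActivityPub actor document, which is what lets, say, a Mastodon user follow a PeerTube channel directly - the same kind of server-to-server plumbing that makes e-mail between providers just work.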


Here's a film about PeerTube, offering a safe alternative to YouTube and Vimeo for posting video material.


Of course these communities are dwarfed by the commercial giants but maybe, just maybe, the tide could be turning as more people realise that you can be social without accepting tracking, profiling and exploitation. The idea of interconnected communities is so obvious that I wonder how we have been duped into accepting the commercial walled gardens as normal. I wish I could say that I have moved my digital activities over to the Fediverse, but the fact is that I have so many valuable contacts, groups and communities that are only on Facebook, Twitter etc. and I don't want to lose them. My contacts on Mastodon number only a handful compared to Twitter. I also wonder whether people really want security and protection, or whether the attraction of the commercial platforms is that absolutely everything and everyone is there, from the sublime to the ridiculous and beyond. Don't underestimate the thrill and fascination generated by all the insane photos and film clips available on YouTube, Facebook, Instagram and TikTok, even the vitriolic comments that follow. Maybe it's the crazy stuff that keeps us scrolling. Will we be equally fascinated by a nicely organised and clean version, no matter how secure and respectful it may be?

Of course I welcome an alternative to the insanity and surveillance, and even if it only becomes a niche phenomenon, it's a vital safe haven for those of us who need to find a safer and more respectful part of the internet to live in. I don't see the giants being felled any time soon by the power of open source, but what's important is showing that there are other ways of organising how we communicate on the internet and that big tech is not the only solution.

Sunday, May 1, 2022

Déjà vu - the return of virtual worlds

The battle for the metaverse is on, with most of the media buzz focused on Meta (a.k.a. Facebook) and their plans to engage us all in their virtual market and meeting place. Already there are companies setting up virtual hubs, and universities are creating their virtual reality campuses which they hope will soon be full of student avatars. Reading about this in an article in Campus Technology, 10 Institutions Opening 'Metaversity' Campuses, took me back about 14 years to the heady days of Second Life, and I wonder what we have learnt since then. These virtual campuses are built using EngageVR, where you can design your own spaces and make them more or less accessible to visitors and residents. The best way to interact in the metaverse is through VR visors, but you can also interact using your computer, though not as immersively. Students and teachers can then meet as avatars on the virtual campus and interact with objects, graphics, diagrams etc. with voice communication and the ability to explore different environments together.

The new metaversity at Stanford is described like this:

The metaversity courses will be synchronous, and students can attend whether they are learning on campus or remotely. Each student will receive a Meta Quest 2 virtual reality headset for use during their course (courses can also be accessed via PC). Within the virtual learning environment, they will be able to engage with other students and their instructor in VR experiences such as delving into human anatomy, taking a time machine through history or studying astronomy on a spaceship.

Here's a video showing Stanford's virtual learning space using the EngageVR platform.


The screenshots on Engage's site seem almost indistinguishable from the views of business centres and campuses on Second Life all those years ago. Apart from the VR headsets and better graphics I wonder what's new. Most academic experiments in Second Life simply recreated virtual versions of reality with copies of their campus environment complete with lecture halls where avatars could show PowerPoint slides to an audience of student avatars seated in rows. The new VR spaces seem to be doing the same thing. In a virtual world where the laws of physics don't apply and you are free to create any space you can imagine, why do we keep building classrooms and auditoriums? Why is it so difficult to escape the stereotypes?

I really enjoyed exploring Second Life back then in the tech-optimist days of 2007. It was probably ahead of its time, but it demanded high-end graphics for a smooth experience, so many users gave up when they realised that their devices simply couldn't cope. Second Life is still going and has a devoted band of users who have obviously enjoyed the experience. What was really interesting about it was that you could develop your own fantastic spaces and choose whether they were open to all or restricted. You could build realistic copies of well-known places or complete fantasy worlds. You could choose alternative identities and take part in very convincing simulations that would not be possible anywhere else. A whole virtual economy developed, with people willing to pay real money for virtual real estate. There was a vibrant cultural life with concerts, art exhibitions, sculptures and fashion shows. I enjoyed exploring new areas of the vast virtual world that was built up and did actually have useful encounters with people I would never have met otherwise. The openness gave enormous opportunities but sadly also threats, as the trolls and spammers found their way into the new world. The new VR spaces are much more restricted and controlled, but maybe the freedom and creativity of Second Life was its greatest attraction.

If VR is going to be relevant to education it has to contribute something new and offer possibilities that are hard to find in any other environment. That means more than just looking at the 3D models of cars or DNA strings that the publicity films tend to show. There are great examples of simulation applications in areas such as medicine, and of realistic roleplay situations for professional development. However, the need for visors and expensive software means it will probably be accessible only to wealthy institutions and for specific niches where VR can really prove its value. Focus on the really innovative and pedagogically valid uses of VR and skip the reproductions of campus buildings and auditoriums.