Sunday, April 7, 2024

Climate crisis - why are universities so silent?

Photo by Markus Spiske on Unsplash

We are in the midst of the greatest environmental collapse since long before human beings emerged on this planet, yet the vast majority of governments and obedient citizens have decided that the best strategy is to pretend there is no threat and continue with business as usual, putting short-term profit above future survival. Science denial is becoming mainstream, and the United Nations' reports from the world's experts on climate and environmental science are flippantly dismissed as alarmist, while governments pay at best lip service with greenwashed statements of fantasy and wishful thinking. Climate activists are ridiculed and often branded as extremists and a "threat to democracy", whilst the governments and corporations profiting from actively destroying the world we live in are portrayed as rational and responsible.

Since the science behind these reports comes from the world's universities and research institutes, it is worrying how silent these organisations are in national debates. Instead of fully and vocally standing up for their own researchers and offering a united front against the deluge of disinformation and greenwashing, they seldom if ever make their voices heard. There are some positive signs, such as the EUA's (European University Association) A Green Deal roadmap for European universities, published in October last year.

EUA’s Green Deal roadmap outlines processes and interventions which can boost universities’ impact and visibility in pursuing a climate-neutral, environmentally sustainable, and socially equitable Europe. The roadmap should serve as an inspiration and template for how universities can face the climate and environmental challenge over an extensive timeframe, enabling them to make both an effective contribution and serve as exemplars of sustainable communities.

Certainly, almost every university has some kind of environmental/sustainability policy, but on the ground there are often only cosmetic changes. Despite the lessons learned during the pandemic about the affordances of digital meetings and conferences, we have largely flocked back to on-site conferences, and academics, like members of other professions, are flying far and wide again. This despite all relevant research pointing to the fact that business as usual will have catastrophic societal consequences in the next 20-30 years. As if that wasn't bad enough, we face the terrifying trend that just when we need global cooperation to solve global crises, the world is filled with toxic xenophobic nationalism, leading to more conflict and destruction.

These thoughts were given greater perspective last week by an article in Frontiers in Education, “No research on a dead planet”: preserving the socio-ecological conditions for academia, by Aaron Thierry, Laura Horn, Pauline von Hellermann and Charlie J. Gardner. Universities' passivity on this issue threatens their very existence.

Despite thousands of higher education institutions (HEIs) having issued Climate Emergency declarations, most academics continue to operate according to ‘business-as-usual’. However, such passivity increases the risk of climate impacts so severe as to threaten the persistence of organized society, and thus HEIs themselves.

Universities are of course simply mirroring society's indifference to our impending crisis - an indifference often worn with a warped pride in being as climate-unfriendly as possible, refusing even to contemplate flying and driving less, eating less meat, consuming less and living more sustainably. Our governments reinforce these attitudes, as here in Sweden, where we are encouraged to fly and drive more through increasing subsidies to fossil fuel industries. Climate change and environmental protection are guaranteed conversation killers - I have tried! But universities should at least support their own scientists and offer a research-based response to the disinformation.

This dissonance extends to the individual behavior of many academics. For example, the normalization of aviation-based hyper-mobility in academic work (Bjørkdahl and Franco Duharte, 2022). It is even the case that professors in climate science fly more than other researchers, despite the tremendous carbon emissions associated with such activities (Whitmarsh et al., 2020). On a day-to-day basis, most academic staff seem to be maintaining the semblance of normalcy and unconcern. So great is our apparent collective indifference that an onlooker could be forgiven for thinking that we do not believe our own institutions’ official warnings that an emergency is unfolding around us.

It's time to speak out - not as individuals (vulnerable to the hate and threats that speaking out provokes) but as institutions, or even alliances of institutions. Universities still carry a lot of weight in society, even if it is being eroded by authoritarian politicians. If they do not speak loudly and clearly in support of their own science, they risk becoming irrelevant. It is that serious. I will give the last word to the authors of the article.

For too long we have allowed a culture of climate silence to dominate in our universities, leading to a misalignment of our priorities from our core purpose and values, thereby perpetuating a maladaptive response to the unfolding planetary emergency and undermining the very future of the higher education sector. Universities have in effect become ‘fraud bubbles’ (Weintrobe, 2021) in which staff and students must construct a ‘double reality’, in order to pursue a narrow social role, trapped in maladaptive incentive structures of increasingly neoliberal institutions. This ultimately serves to reproduce the hegemonic practices, norms and conventions driving socio-ecological collapse. As an academic community we must urgently learn to grapple with the role that universities can play as leaders in the necessary social transformation to come. Our dearest notions of progress, rooted in our desire for the beneficial accumulation and application of knowledge (Collini, 2012), are now both directly and indirectly threatened by the climate crisis.

Maybe I have missed good examples of university action. If so, please send them my way - I'm happy to see any signs of resistance!

Saturday, March 23, 2024

Mass personalisation - when every cafe looks the same

Photo by shawnanggg on Unsplash

Remember when you first used Amazon and it offered recommendations of other books you might like, based on what you had bought or browsed on the platform? Often the recommendations were very much in line with your tastes and you discovered new books or music. Google tailored its results to your browsing history, giving you a personalised search service. We felt served; today we feel used and manipulated by algorithm-driven recommendations that have become extremely commercialised. If you keep getting fed music or literature that you already like, you'll never discover anything new. It's a bit like the radio stations that promise to play only hit music - you'll never hear anything new until after it has become a hit. The trap of personalisation becomes even more worrying when you show interest in more extreme political views and the algorithms offer you increasingly extreme material until you see very little else. Add to that the power of influencers on social media and our herd instinct, and the result is a global streamlining. In the end, everything looks or sounds roughly the same.
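The "rich get richer" dynamic behind this streamlining can be caricatured in a few lines. This is a toy simulation, not any real platform's logic, and all the numbers are invented: the platform recommends items in proportion to their current popularity, so early hits attract ever more exposure.

```python
import random

def run_feedback_loop(n_items=50, steps=5000, seed=42):
    """Toy popularity-weighted recommender (invented numbers)."""
    random.seed(seed)
    plays = [1] * n_items  # every item starts with one play
    for _ in range(steps):
        # pick an item with probability proportional to its play count
        item = random.choices(range(n_items), weights=plays)[0]
        plays[item] += 1  # being recommended makes it more popular
    return plays

plays = run_feedback_loop()
# share of all plays captured by the five most popular items
top_share = sum(sorted(plays, reverse=True)[:5]) / sum(plays)
```

Even though all fifty items start out identical, the loop concentrates attention on a handful of early winners - a crude illustration of why "personalised" feeds end up pushing everyone towards the same hits.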

This streamlining is the subject of a fascinating podcast and article on The Verge, How to save culture from the algorithms, with Filterworld author Kyle Chayka. His book, Filterworld: How Algorithms Flattened Culture, examines how global trends are created on a viral scale via social media and streaming services.
It’s a book about how digital platforms like Instagram, TikTok, and Spotify took over our modes of cultural distribution in the past decade. Algorithmic recommendations, like TikTok’s For You feed or Netflix’s homepage, control the majority of what we see and hear online. Though they promise personalization, the net result of so many algorithms is a homogenization of culture.
Mass personalisation leads in the end to a lowest common denominator, as when trendy cafes, restaurants and BnBs look very similar no matter where they are in the world.
I was traveling around the world quite a bit and landing in all these different cities, and I would notice that every Airbnb I stayed in had the same kind of aesthetic signature, every coffee shop I went to in Reykjavík or Kyoto or LA or Berlin all had the same stuff in it, and I just started wondering or almost being anxious about why all the sameness was happening.
Products, services, trends and influencers pay for ratings on the main platforms, and if you press the right buttons you can go viral. Ratings and reviews matter, and these can now also be manipulated with the help of AI bots. Some music, books, films etc. get pushed more than others. Our personalised services are not as personal as they seem; our tastes are moulded by the algorithms. The question is how to escape. According to Chayka, you need to reconnect with yourself and ignore the recommendations, reviews and like-counts.
Being more thoughtful is a good start. I think what I came out of it with was you want to know that you like what you like because you like it, not just because it was recommended to you and exposed to you repeatedly in a feed. Thinking about your personal taste, having a real encounter with a song or a piece of art or a piece of clothing where you don’t think about how many people liked it, where you don’t think about the Instagram account, where you just sit with your own feelings and have an experience of culture that’s in front of you that’s changing your mind or your soul or whatever, that’s truly what I want people to have. I want you to sit and stare at a painting and be like, “What does this make me feel? I don’t care how many likes it has. I don’t care how many followers the artist has. How am I feeling right now?”

I try by using, for example, DuckDuckGo for searching, since it doesn't track me. I get a more diverse set of results than with Google, but that is precisely the point. Google frequently suggested my own blog posts or articles, whereas I can hardly find them with DuckDuckGo! I've stopped using TripAdvisor, since reviews can be written by bots or trolls. I don't use Amazon anymore, and my use of Spotify is sinking to the point where I don't think I want to subscribe any longer. We don't need to go completely offline, but we should treat the big tech platforms with extreme care and suspicion. There are alternatives out there.

Monday, February 26, 2024

The "enshittification" of the internet - we know it's bad for us but we're hooked

Photo by ROBIN WORRALL on Unsplash

I frequently consider leaving social media completely but can't quite bring myself to do it. As I have no doubt written before, I have so many contacts there that I would lose, and I still get pleasure from the groups I belong to. But you will certainly have noticed over the years how what was once a place to see photos and comments from friends has turned into a stream of adverts and posts (often political or provocative) from organisations you don't follow. At first, the ads on Facebook were often hilariously irrelevant, based merely on stereotypes. As an older male living in a village, I saw ads for chainsaws, tractors, hair restorer, Viagra (of course), crypto nonsense and hundreds of fascinating women supposedly waiting to meet me. They're getting better at finding things I am at least vaguely interested in, but the problem is that the platform has become just a random stream of stuff that I never asked for. Basically, it has become enshittified.

Enshittification is a concept launched by Cory Doctorow last year in a post about TikTok, Tiktok's enshittification. In it he argues that all platforms inevitably fill up with garbage due to the greed of their owners. Although he writes mostly about TikTok, the principle seems to apply across the board. A new post, My McLuhan lecture on enshittification, is the script of a recent lecture where he goes into more depth on the phenomenon. In short, the enshittification process goes like this:

It's a three stage process: First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
This rings true for most if not all the tech platforms: Google, Amazon, Facebook, X (Twitter), TikTok, etc. Facebook, Instagram and Twitter were very successful in getting us all on board to share our everyday events and interests, and we quickly got hooked. But once we were all in there and the platform changed into an ad-based channel, we were trapped. If you leave the platform, you leave behind all your friends. It's virtually impossible to get all your Facebook contacts to leave at once and meet up on a new platform. So we stay there, even if we begin to hate the place, despite all the scandals and blatant societal damage. Then came the trolls and disinformation channels, destroying even light-hearted discussion threads, with the result that more and more of us became passive users or simply gave up. But still it is so hard to leave, and many of us stay. I gave up Twitter when it became X, but amazingly it is still the default channel for serious media and organisations.

Doctorow offers four factors that could combat enshittification:
There are four constraints that prevent enshittification: competition, regulation, self-help and labor.
The problem is that all of these have disappeared. We could have regulated the tech industry and broken up the monopolistic monsters like the big five. We could have created more genuine competition. We could have legislated against monopolistic takeovers, exploitative labour practices and so on, but we didn't. We believed the big tech mythology of the "new economy", cool laid-back leaders, flashy offices and mottos like "Don't be evil". We are no longer customers of these companies - have you noticed that none of them offer any customer service, not even a contact number? We are simply data to exploit. Doctorow sees some glimmers of hope in a renewed interest in privacy legislation, especially in the EU, current labour action against companies like Tesla and Amazon, attempts to curb monopolistic takeovers, and suchlike. But we have allowed the industry to run wild for so long that it is extremely hard to constrain it now.
The capitalism of today has produced a global, digital ghost mall, filled with botshit, crapgadgets from companies with consonant-heavy brand-names, and cryptocurrency scams.
The internet isn't more important than the climate emergency, nor gender justice, racial justice, genocide, or inequality.
But the internet is the terrain we'll fight those fights on. Without a free, fair and open internet, the fight is lost before it's joined.
We can reverse the enshittification of the internet. We can halt the creeping enshittification of every digital device.
I wish we could mobilise to fight this as Doctorow suggests but first we have to get people to look up from their screens and realise that something is seriously wrong. That is the biggest challenge. 

For more on this theme please watch this interesting discussion between Camille Francois (Columbia University) and Meredith Whittaker (President, Signal) on Al Jazeera, AI and Surveillance Capitalism. They discuss the surveillance economy, the effects of AI and how we can combat it.

Sunday, February 18, 2024

Seeing is not believing


Following on from my last post on the web becoming a digital landfill site, here's more reason for concern. The limited release of Sora, a new AI text-to-video application, shows how fast this technology is developing and the terrifying potential it offers for disinformation. Watch the video above, where Marques Brownlee presents and discusses the demo videos released by OpenAI and compares them to the hilariously inept AI-generated videos of just one year ago. He points out that there are still tell-tale signs of AI in the videos, but in many cases you need an experienced eye to spot them. Most people, however, will not even suspect that the films are not real, and if we consider the astounding improvements that AI-generated applications have made in the last year, we can expect near perfection within a year or so.

AI-generated content is of course entirely based on existing, copyrighted content, yet at the same time it makes copyright legislation irrelevant. Why use human models in the fashion industry when you can generate totally realistic digital versions? Why pay people or companies for photos, music, graphic design, advertising copy or whatever, when you can generate it yourself in seconds for free? I already see lots of ridiculous AI images in my social media feeds, and at the moment they're extremely obvious, but what happens when I can't tell the difference anymore? No amount of digital literacy is going to help unless you're prepared to analyse the content in depth. We are fast approaching a time when you simply can't believe what you see, hear or read. We could regulate the use of AI and have strict guidelines, but that would mean governments taking responsibility, standing up to big business and cooperating globally. Can you seriously believe in such a development, given the nature of today's governments and power structures? I certainly can't, but I would love to be proved wrong.

Monday, January 29, 2024

Will AI turn the web into an information landfill site?

Photo by Shardar Tarikul Islam on Unsplash

Until now we have believed that most of what we see on the web is reliable information, and we have developed digital literacy skills to fact-check and identify trustworthy sources. With the rapid deployment and advancement of AI, however, I wonder if we are approaching a frightening tipping point where it becomes impossible to tell fact from fiction and the lies and disinformation drown out the truth. "Truth is behind a paywall but the lies are free" is a valid comment on today's media landscape, and I am afraid this will become even more pronounced as AI-generated content floods the net.

AI tools can write increasingly plausible news stories, reviews, articles and summaries, complete with references, and it is easy to be impressed by it all. Often the content is good, but there are many cases of so-called hallucinations, where the application simply invents things and passes them off as fact. Without considerable knowledge of the field, it is very hard not to believe what you read. There are plenty of people using AI to spread propaganda and disinformation through news channels, blogs and sites full of AI-generated content. These are of course free to access, unlike serious news media, which rely on subscriptions to survive. As more AI-generated content fills the web, new AI applications will of course trawl that freely available content as the basis for their own output. Could this lead to a web that looks like a gigantic landfill site, full of toxic waste?

A recent example of the wild imagination of AI applications appeared in an article in the Swedish daily newspaper Dagens Nyheter (in Swedish, but here's the link). The writer asked an AI generator, Bard, to describe the careers of the children of the famous Swedish artist/designer couple Carl and Karin Larsson. The answer was well written and detailed but completely wrong. The oldest son actually died at the age of 18, but according to Bard he had a long and successful career as an architect. The careers of the other children were also fabricated. The writer checked the facts against other sources, but how many of us would simply accept the AI-generated answer as the truth? What happens when search engines offer us links to AI hallucinations in the first 20 search results?

A post by Ian Betteridge, The information grey goo, raises the alarm about this threat. He states that anywhere content can be created will ultimately be flooded with AI-generated words and pictures. New AI applications will feed off the old AI content, and the mix will become increasingly inaccurate, resulting in what he describes as AI Grey Goo, a swamp of rubbish:

This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate public reliable sources for information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created. Where the volume of content created overwhelms human or algorithmic abilities to sift through it quickly and find high-quality stuff.
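The degradation loop Betteridge describes can be caricatured with a toy calculation. Everything here is invented for illustration - the shares, the accuracy figures and the "fidelity" factor are not measurements of any real training pipeline - but it shows the shape of the problem: each model generation trains on a web mixing a fixed sliver of reliable human content with the previous generation's own imperfect output.

```python
def accuracy_over_generations(generations=5, human_share=0.2,
                              human_accuracy=0.95, fidelity=0.9):
    """Toy model (invented numbers): each generation's accuracy blends
    reliable human content with a degraded copy (fidelity < 1) of the
    previous generation's output."""
    acc = human_accuracy  # generation zero trained on human content only
    history = [acc]
    for _ in range(generations):
        acc = human_share * human_accuracy + (1 - human_share) * acc * fidelity
        history.append(acc)
    return history

history = accuracy_over_generations()
```

Under these made-up assumptions, accuracy falls with every generation and settles well below where it started - the swamp never drains, because the fixed pool of human content can only dilute, not undo, the accumulated errors.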

Traditional digital literacy skills are not enough to deal with a disinformation overload. We risk a situation where nothing on the web can be trusted. Services like customer reviews, so important to retailers, restaurants and the tourist industry, will be trashed, since the bots will be doing all the reviews.

It will be possible to create a programme which says “Find all my products on Amazon. Where the product rating drops below 5, add unique AI-generated reviews until the rating reaches 5 again. Continue monitoring this and adding reviews.”
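The hypothetical "programme" in that quote really would be trivial to script. Here is a hedged sketch: nothing below is a real Amazon API - the catalog structure, `generate_review` and the rest are invented stand-ins. (One pedantic tweak: an average can never actually reach 5 once a single lower review exists, so this bot chases 4.5 instead of the quote's 5.)

```python
def generate_review(product):
    # stand-in for an AI text generator producing a "unique" rave review
    return f"Absolutely love my {product['name']}! Five stars."

def average_rating(product):
    reviews = product["reviews"]
    return sum(r["stars"] for r in reviews) / len(reviews)

def top_up_reviews(products, target=4.5):
    """Post fake 5-star reviews until every product's average hits the target."""
    posted = 0
    for product in products:
        while average_rating(product) < target:
            product["reviews"].append(
                {"stars": 5, "text": generate_review(product)})
            posted += 1
    return posted

# one invented product with a single genuine 3-star review
catalog = [{"name": "Widget", "reviews": [{"stars": 3, "text": "Meh."}]}]
posted = top_up_reviews(catalog)
```

A dozen lines, no special skills required - which is exactly why review systems drown once bots are cheaper than customers.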
If we can no longer trust any text, photo or film, what on earth can we believe? The trustworthy sources are increasingly forced to charge for access, since good journalism costs money to produce, and so only the already converted will be able to access fact-checked and scientific content.
With reliable information locked behind paywalls, anyone unwilling or unable to pay will be faced with picking through a rubbish heap of disinformation, scams, and low-quality nonsense.

I know that AI can and will be used to further research and to benefit science, but the negative consequences, in my opinion, far outweigh the positive. We risk the prospect of quality content being hidden behind paywalls whilst the "free" web will be an information landfill. But, like Pandora's box, it's probably too late to close the lid.