Sunday, February 18, 2024

Seeing is not believing


Following on from my last post on the web becoming a digital landfill site, here's more reason for concern. The limited release of Sora, OpenAI's new text-to-video AI application, shows how fast this technology is developing and the terrifying potential it offers for disinformation. Watch the video above, where Marques Brownlee presents and discusses the demo videos released by OpenAI and compares them to the hilariously inept AI-generated videos of just one year ago. He points out that there are still tell-tale signs of AI in the videos, but in many cases you need an experienced eye to spot them. Most people, however, will not even suspect that the clips are not real, and if we consider the astounding improvements that AI applications have made in the last year, we can expect near perfection within a year or so.

AI-generated content is, of course, entirely based on existing, copyrighted content, yet at the same time it makes copyright legislation irrelevant. Why use human models in the fashion industry when you can generate totally realistic digital versions? Why pay people or companies for photos, music, graphic design or advertising copy when you can generate it yourself in seconds for free? I already see plenty of ridiculous AI images in my social media feeds, and at the moment they are extremely obvious, but what happens when I can no longer tell the difference? No amount of digital literacy is going to help unless you're prepared to analyze the content in depth. We are fast approaching a time when you simply can't believe what you see, hear or read. We could regulate the use of AI with strict guidelines, but that would mean governments taking responsibility, standing up to big business and cooperating globally. Can you seriously believe in such a development, given the nature of today's governments and power structures? I certainly can't, but I would love to be proved wrong.


Monday, January 29, 2024

Will AI turn the web into an information landfill site?

Photo by Shardar Tarikul Islam on Unsplash

Until now we have believed that most of what we see on the web is reliable information, and we have developed digital literacy skills to fact-check and identify trustworthy sources. With the rapid deployment and advancement of AI, however, I wonder if we are approaching a frightening tipping point where it will become impossible to tell fact from fiction and where lies and disinformation drown out the truth. "Truth is behind a paywall but the lies are free" is a valid comment on today's media landscape, and I am afraid this will become even more pronounced as AI-generated content floods the net.

AI tools can write increasingly plausible news stories, reviews, articles and summaries, complete with references, and it is easy to be impressed by it all. Often the content is good, but there are many cases of so-called hallucinations, where the application simply invents things and passes them off as fact. Without considerable knowledge of the field it is very hard not to believe what you read. Plenty of people are already using AI to spread propaganda and disinformation through news channels, blogs and sites full of AI-generated content. These are of course free to access, unlike serious news media, which rely on subscriptions to survive. As more AI-generated content fills the web, new AI applications will of course trawl this freely available material as the basis for their own output. Could this lead to a web that looks like a gigantic landfill site, full of toxic waste?

A recent example of the wild imagination of AI applications appeared in an article in the Swedish daily newspaper Dagens Nyheter (in Swedish but here's the link). The writer asked an AI generator, Bard, to describe the careers of the children of the famous Swedish artist/designer couple Carl and Karin Larsson. The answer was well written and detailed but completely wrong. The oldest son actually died at the age of 18, but according to Bard he had a long and successful career as an architect. The careers of the other children were also fabricated. The writer checked the facts against other sources, but how many of us would simply accept the AI-generated answer as the truth? What happens when search engines offer us links to AI hallucinations in the first 20 search results?

A post by Ian Betteridge, The information grey goo, raises the alarm about this threat. He states that anywhere content can be created will ultimately be flooded with AI-generated words and pictures. New AI applications will feed off the old AI content, and the mix will become increasingly inaccurate, resulting in what he describes as AI Grey Goo, a swamp of rubbish:

This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate public reliable sources for information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created. Where the volume of content created overwhelms human or algorithmic abilities to sift through it quickly and find high-quality stuff.

Traditional digital literacy skills are not enough to deal with a disinformation overload. We risk a situation where nothing on the web can be trusted. Services like customer reviews, so important to retailers, restaurants and the tourist industry, will be trashed, since bots will be writing all the reviews.

It will be possible to create a programme which says “Find all my products on Amazon. Where the product rating drops below 5, add unique AI-generated reviews until the rating reaches 5 again. Continue monitoring this and adding reviews.”
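The bot Betteridge imagines is nothing more than a simple monitoring loop. A minimal sketch in Python, where `get_rating`, `generate_review` and `post_review` are entirely hypothetical stand-ins for a retailer API and a text generator (no such real API is assumed):

```python
# Sketch of the hypothetical review-bot loop described above.
# Every function here is a made-up stand-in, not a real retailer or AI API.

def run_review_bot(products, get_rating, generate_review, post_review,
                   target=5.0):
    """For each product whose rating has slipped below `target`,
    keep posting generated reviews until the rating recovers."""
    for product in products:
        while get_rating(product) < target:
            post_review(product, generate_review(product))

# Toy demonstration with in-memory stand-ins:
ratings = {"widget": 3.8}

def fake_rating(p):
    return ratings[p]

def fake_generate(p):
    return f"Five stars! {p} changed my life."

def fake_post(p, review):
    # Each fake review nudges the average score upwards.
    ratings[p] = min(5.0, ratings[p] + 0.4)

run_review_bot(["widget"], fake_rating, fake_generate, fake_post)
print(ratings["widget"])  # 5.0
```

The chilling point is how little code this takes: the "continue monitoring" part is just this loop run on a schedule.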
If we can no longer trust any text, photo or film, what on earth can we believe? Trustworthy sources are increasingly forced to charge for access, since good journalism costs money to produce, and so only the already converted will be able to reach fact-checked and scientific content.
With reliable information locked behind paywalls, anyone unwilling or unable to pay will be faced with picking through a rubbish heap of disinformation, scams, and low-quality nonsense.

I know that AI can and will be used to further research and to benefit science, but the negative consequences, in my opinion, far outweigh the positive. We risk the prospect of quality content being hidden behind paywalls whilst the "free" web will be an information landfill. But, like Pandora's box, it's probably too late to close the lid.