Monday, January 29, 2024

Will AI turn the web into an information landfill site?

Photo by Shardar Tarikul Islam on Unsplash

Until now we have believed that most of what we see on the web is reliable information, and we have developed digital literacy skills to fact-check and identify trustworthy sources. With the rapid deployment and advancement of AI, however, I wonder if we are approaching a frightening tipping point where it becomes impossible to tell fact from fiction and where lies and disinformation drown out the truth. "Truth is behind a paywall but the lies are free" is a valid comment on today's media landscape, and I am afraid this will become even more pronounced as AI-generated content floods the net.

AI tools can write increasingly plausible news stories, reviews, articles and summaries, complete with references, and it is easy to be impressed by it all. Often the content is good, but there are many cases of so-called hallucinations, where the application simply invents things and passes them off as fact. Without considerable knowledge of the field it is very hard not to believe what you read. Plenty of people are already using AI to spread propaganda and disinformation through news channels, blogs and sites full of AI-generated content. These are of course free to access, unlike serious news media, which rely on subscriptions to survive. As more AI-generated content fills the web, new AI applications will of course trawl that freely available content as the basis for their own output. Could this lead to a web that looks like a gigantic landfill site, full of toxic waste?

A recent example of the wild imagination of AI applications appeared in an article in the Swedish daily newspaper Dagens Nyheter (in Swedish, but here's the link). The writer asked an AI generator, Bard, to describe the careers of the children of the famous Swedish artist/designer couple Carl and Karin Larsson. The answer was well written and detailed but completely wrong. The oldest son actually died at the age of 18, but according to Bard he had a long and successful career as an architect. The careers of the other children were also fabricated. The writer checked the facts against other sources, but how many of us would simply accept the AI-generated answer as the truth? What happens when search engines offer us links to AI hallucinations in the first 20 search results?

A post by Ian Betteridge, The information grey goo, raises the alarm on this threat. He states that anywhere content can be created will ultimately be flooded with AI-generated words and pictures. New AI applications will feed off the old AI content, and the mix will become increasingly inaccurate, resulting in what he describes as AI Grey Goo, a swamp of rubbish:

This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate public reliable sources for information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created. Where the volume of content created overwhelms human or algorithmic abilities to sift through it quickly and find high-quality stuff.

Traditional digital literacy skills are not enough to deal with a disinformation overload. We risk a situation where nothing on the web can be trusted. Customer reviews, so important to retailers, restaurants and the tourist industry, will become worthless once the bots are writing all the reviews.

It will be possible to create a programme which says “Find all my products on Amazon. Where the product rating drops below 5, add unique AI-generated reviews until the rating reaches 5 again. Continue monitoring this and adding reviews.”
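The programme imagined above is unsettlingly easy to express in code. Here is a minimal sketch in Python of the loop it describes: every function in it (the seeded ratings, the review "generator") is a deterministic stand-in invented for illustration, not a real Amazon or AI API, and the target is set to 4.5 because an average of honest-plus-fake ratings can only approach 5, never reach it. The point is how low the barrier to this kind of abuse is.

```python
# Sketch of the hypothetical review-inflation bot described above.
# Everything here is a stand-in: no real retail or AI-generation API is used.

def get_rating(product, fake_reviews):
    """Average of the pretend existing customer ratings plus any fake reviews."""
    seeded = {"widget": [4, 4, 3]}  # pretend honest customer ratings
    scores = seeded[product] + [r["stars"] for r in fake_reviews]
    return sum(scores) / len(scores)

def generate_review(product, n):
    """Stand-in for an AI text generator producing a 'unique' 5-star review."""
    return {"stars": 5, "text": f"Review #{n}: the {product} changed my life!"}

def inflate(product, target=4.5, max_reviews=50):
    """Keep adding five-star reviews until the average rating hits the target."""
    fake_reviews = []
    while get_rating(product, fake_reviews) < target and len(fake_reviews) < max_reviews:
        fake_reviews.append(generate_review(product, len(fake_reviews) + 1))
    return fake_reviews

reviews = inflate("widget")
print(len(reviews), get_rating("widget", reviews))  # 5 fake reviews lift 3.67 to 4.5
```

A real bot would only need to swap the stand-ins for a scraper and a text-generation service, which is exactly the "continue monitoring and adding reviews" loop the quote warns about.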
If we can no longer trust any text, photo or film, what on earth can we believe? Trustworthy sources are increasingly forced to charge for access, since good journalism costs money to produce, and so only the already converted will be able to access fact-checked and scientific content.
With reliable information locked behind paywalls, anyone unwilling or unable to pay will be faced with picking through a rubbish heap of disinformation, scams, and low-quality nonsense.

I know that AI can and will be used to further research and to benefit science, but the negative consequences, in my opinion, far outweigh the positive. We risk the prospect of quality content being hidden behind paywalls whilst the "free" web will be an information landfill. But, like Pandora's box, it's probably too late to close the lid. 

2 comments:

  1. This is an interesting article, and it raises a legitimate concern. I do think that people have the wrong idea of how an AI tool should be used. These tools are supposed to support human activity, offering a means of improving productivity. We are far from the stage where AI will totally replace human activity. This is evident even in the case studies explored by the linked articles. It is the responsibility of content creators and publishers to fact-check their content before it is published; that responsibility has not changed. However, there seems to be a pervading atmosphere of irresponsibility and laziness where fact-checking is concerned, as people creating content focus more on the rush to profit than on the quality of the product they are putting out there. Misinformation was a problem before the increased usage of AI tools, but it would seem that this increased usage may result in increased misinformation. This is indeed a problem, but solutions possibly exist. The issue is that most humans will not fact-check what they read online and will accept it as gospel, as we have seen many times with misinformation.

  2. Thanks for your comment. I agree that there are solutions but they depend on international cooperation and some kind of governance that promotes responsible use and regulates the spread of disinformation. Since many of our leaders, governments and corporations are responsible for spreading disinformation I can't see that happening.
