Sunday, February 18, 2024

Seeing is not believing


Following on from my last post on the web becoming a digital landfill site, here's more reason for concern. The limited release of Sora, OpenAI's new text-to-video AI application, shows how fast this technology is developing and the terrifying potential it offers for disinformation. Watch the video above, in which Marques Brownlee presents and discusses the demo videos released for Sora and compares them to the hilariously inept AI-generated videos of just one year ago. He points out that there are still tell-tale signs of AI in the videos, but in many cases you need an experienced eye to spot them. Most people, however, will not even suspect that the films are not real, and if we consider the astounding improvements that AI-generation applications have made in the last year, we can expect near perfection in the coming year or so.

AI-generated content is, of course, entirely based on existing, copyrighted content, yet at the same time it makes copyright legislation irrelevant. Why use human models in the fashion industry when you can generate totally realistic digital versions? Why pay people or companies for photos, music, graphic design, advertising copy or whatever when you can generate it yourself in seconds for free? I already see lots of ridiculous AI images in my social media feeds, and at the moment they're extremely obvious, but what happens when I can't tell the difference anymore? No amount of digital literacy is going to help unless you're prepared to analyze the content in depth. We are fast approaching a time when you simply can't believe what you see, hear or read. We could regulate the use of AI and impose strict guidelines, but that would mean governments taking responsibility, standing up to big business and cooperating globally. Can you seriously imagine such a development given the nature of today's governments and power structures? I certainly can't, but I would love to be proved wrong.
