Summary:
- Generative AI will make it easier to produce disinformation tailored to specific audiences, making campaigns more targeted and effective.
- AI-powered disinformation could target individuals with content designed specifically to manipulate them.
- The Biden administration has taken some steps to address the threat of AI-powered disinformation, but it is unclear whether they will be enough.
- It is important to be aware of the threats posed by AI-powered disinformation and to view online content critically.
Quotes:
“If I want to launch a disinformation campaign, I can fail 99 percent of the time. You fail all the time, but it doesn’t matter. Every once in a while, the QAnon gets through. Most of your campaigns can fail, but the ones that don’t can wreak havoc.”
“This is the classic story of the last 20 years: Unleash technology, invade everybody’s privacy, wreak havoc, become trillion-dollar-valuation companies, and then say, ‘Well, yeah, some bad stuff happened.’ We’re sort of repeating the same mistakes, but now it’s supercharged because we’re releasing this stuff on the back of mobile devices, social media, and a mess that already exists.”
Well, I guess we’ll have to follow in Norway’s footsteps if we want to get somewhere. But yeah, I highly doubt anything beyond asking big tech to behave will be done.