I wonder if we are being groomed to hate it because it would be an effective tool to fight fascism with as well.
Nope. I’d still say social media/social media algorithms.
Imagine if social media didn’t exist (beyond small, tight-knit communities like forums about [topic], or BBS communities), but all these AI tools still did.
Susan creates an AI generated image of illegal immigrants punching toddlers, then puts it on her “news” blog full of other AI content generated to push an agenda.
Who would see it? How would it spread? Maybe a few people she knows. It’d be pretty localised, and she’d be quickly known locally as a crank. She’d likely run out of steam and give up with the whole endeavour.
Add social media to the mix, and all of a sudden she has tens of thousands of eyes on her, which brings more and more. People argue against it, and that entrenches the other side even more. News media sees the amount of attention it gets and they feel they have to report, and the whole thing keeps growing. Wealthy people who can benefit from the bullshit start funding it and it continues to grow still.
You don’t need AI to do this, it just makes it even easier. You do need social media to do this. The whole model simply wouldn’t work without it.
While I generally agree and consider this insightful, it behooves us to remember the (actual, 1930s) Nazis did it with newspapers, radio and rallies (… in a cave, with a box of scraps).
Half of it wouldn’t even work if the news media did their job and filtered out crap like that, instead of lazily reporting whatever is going on on social media.
They also shouldn’t report on the horse race. They should report on issues.
Reporting on elections is always disappointing.
something something lamestream media!
It’s psychological warfare. By manipulating online social communities they can push the lies they want and most normies will lap it up.
I don’t believe the common refrain that AI is only a problem because of capitalism. People already disinform, make mistakes, take irresponsible shortcuts, and spam even when there is no monetary incentive to do so.
I also don’t believe that AI is “just a tool”, fundamentally neutral and void of any political predisposition. This has been discussed at length academically. But it’s also something we know well in our idiom: “When you have a hammer, everything looks like a nail.” When you have AI, genuine communication looks like raw material. And the ability to place generated output alongside the original… looks like a goal.
Culture — the ability to have a very long-term ongoing conversation that continues across many generations, about how we ought to live — is by far the defining feature of our species. It’s not only the source of our abilities, but also the source of our morality.
Despite a very long series of authors warning us, we have allowed a pocket of our society to adopt the belief that ability is morality. “The fact that we can, means we should.”
We’re witnessing the early stages of the information equivalent of Kessler Syndrome. It’s not that some bad actors who were always present will be using a new tool. It’s that any public conversation broad enough to be culturally significant will be so full of AI debris that it will be almost impossible for humans to find each other.
The worst part is that this will be (or is) largely invisible. We won’t know that we’re wasting hours of our lives reading and replying to bots, tugging on a steering wheel, trying to guide humanity’s future, not realizing the autopilot is discarding our inputs. It’s not a dead internet that worries me, but an undead internet. A shambling corpse that moves in vain, unaware of its own demise.
It’s that any public conversation broad enough to be culturally significant will be so full of AI debris that it will be almost impossible for humans to find each other.
This is the part that I question. It’s certainly a fear, and it totally makes sense to me, but as I have argued in other threads recently, I think there is a very good chance that while those in power believe an information analog of the Kessler Syndrome will happen (and this is precisely why most of them are pushing this future), it might hilariously backfire on them in a spectacular way.
I have a theory I call the “Wind-up Flashlight Theory”: when the internet reaches a dead state so full of AI nonsense that it’s impossible to connect with humans, rather than that being the completion of a process of censorship and oppression of thought that our imagination cannot escape, a darkness and gloom in which nobody can find their way… the darkness simply serves to highlight the people in the room who have wind-up flashlights and are able to make their own light.
To put it another way, every step the internet takes towards being a hopeless storm of AI bots drowning out any human voices… might actually just be adding more negative space to a picture, where the small number of human places on the internet stand out as positive spaces EVEN MORE starkly by comparison, and people become EVEN MORE likely to flock to those places because the difference just keeps getting more and more undeniable.
Lastly, I want to rephrase this in a way that hopefully inspires you: every step the rich and authoritarians of the world take to push the internet towards being dead makes the power of our words on the fediverse increase all by itself. You don’t have to do anything but keep being earnest, vulnerable and human. Imagine you have been cranking a wind-up flashlight and feeling impotent because there were harsh, bright fluorescent lights glaring over you in the room… but now the lights just got cut, and for the first time people can see clearly the light you bring, in its full power and love for humanity.
That is the moment I believe we are in RIGHT NOW
Great insights, thank you.
Only if you don’t consider capitalism.
Hot take: what most people call AI (large language and diffusion models) is, in fact, part of peak capitalism:
- relies on ill-gotten gains (training data obtained without permission, payment or licensing)
- aims to remove human workers from the workforce within a system that (for many) requires them to work because capitalism has removed the bulk of social safety netting
- currently has no real route to profit at any reasonable price point
- speculative at best
- reinforces the concentration of power amongst a few tech firms
- will likely also result in regulatory capture, with the large firms getting legislation passed saying that only they can provide “AI” safely
I could go on but hopefully that’s adequate as a PoV.
“AI” is just one of the cherries on top of late-stage capitalism, embodying the worst of all of it.
So I don’t disagree - but felt compelled to share.
Kinda like people focusing on petty crime and ignoring the fact that corporations steal billions from us.
We as a society give capitalism such a blanket pass, that we don’t even consider what it actually is.
When you live in a cage, you think of the bars as part of your home.
I agree, but I don’t think it has much to do with the tool itself, beyond it being a superb culpability-obscuring tool; what we are witnessing is essentially the birth of a religion, just a super lame one.
A status quo where people are not allowed to break from its rhetoric or ideology even as quality of life plummets will create a rising potential for the introduction of a concept that nullifies, or simply exists outside, the framework of that status quo worldview: the one that is strangling people, but that they do not believe they can escape.
For people who are obsessed with AI, the unreality of it is precisely what lets AI function as an ideological lifeboat that doesn’t require grappling with the immense issues with our worldviews and assumptions as a society, because AI can be added to any context as a deus ex machina that disguises and explains away incongruities between our broken worldviews and the desperate reality caused by them.
And please broaden this beyond AI. The attention economy that comes with social media, and other forms of “tech-feudalism”, manipulation, targeting and tracking/surveillance aren’t healthy either, even if they don’t rely on AI and machine learning.
Yuval Noah Harari’s Nexus gets into this as well. It’s a really powerful tool that we are very, very ill equipped to use responsibly.
I love democracy.