Imagine how much power is wasted on this unfortunate necessity.
Now imagine how much power will be wasted circumventing it.
Fucking clown world we live in
On one hand, yes. On the other… imagine the frustration of the management of companies making and selling AI services. This is such a sweet thing to imagine.
I just want to keep using uncensored AI that answers my questions. Why is this a good thing?
Because it only harms bots that ignore the “no crawl” directive, so your AI remains uncensored.
Good, I ignore that too. I want a world where information is shared. I can get behind that.
don’t worry, information is still shared. but with people. not with capitalist pigs
Capitalist pigs are paying media to generate AI hatred to help them convince you people to get behind laws that will limit info sharing under the guise of IP and copyright.
Because it’s not AI, it’s LLMs, and all LLMs do is guess what word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza because somewhere in the training data some idiot said that.
The training data for LLMs come from the internet, and the internet is full of idiots.
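The "guess what word most likely comes next" mechanic can be sketched with a toy bigram model, a minimal illustration with a made-up corpus and hypothetical function names, nothing like how production LLMs are actually built or trained:

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on trillions of tokens scraped from the web.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often (2 of 4 times)
```

The model has no idea what a cat is; it only knows which word tended to follow which. If one "idiot" document in the corpus said glue belongs on pizza, that statistical association goes into the table like everything else.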
That’s what I do too, just with less accuracy and knowledge. I don’t get why I have to hate this. Feels like a bunch of cavemen telling me to hate fire because it might burn the food.
Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.
It’s like telling cavemen they don’t need fire because you can mount an expedition to the nearest volcano to cook food without the need for fuel, then bring it back to them.
The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.
Wasteful?
Energy production is an issue. Using that energy isn’t. LLMs are a better use of energy than most of the useless shit we produce everyday.
Did the LLMs tell you that? It’s not hard to look up on your own:
Data centers, in particular, are responsible for an estimated 2% of electricity use in the U.S., consuming up to 50 times more energy than an average commercial building, and that number is only trending up as increasingly popular large language models (LLMs) become connected to data centers and eat up huge amounts of data. Based on current datacenter investment trends, LLMs could emit the equivalent of five billion U.S. cross-country flights in one year.
Far more than straightforward search engines that have the exact same information and don’t make shit up half the time.
I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?
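For what it’s worth, honoring those directives is trivially easy. Python’s standard library parses robots.txt rules in a few lines; the rules below are made up for illustration (in the spirit of what Wikipedia publishes, not its actual file), and “MyBot” is a hypothetical crawler name:

```python
import urllib.robotparser

# Hypothetical rules: keep crawlers out of the technical /w/ pages
# (history, diffs, special pages) but leave canonical articles open.
rules = """\
User-agent: *
Disallow: /w/
""".splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# Canonical article page: allowed.
print(rp.can_fetch("MyBot", "https://en.wikipedia.org/wiki/Maze"))
# Old revision via the technical interface: disallowed.
print(rp.can_fetch("MyBot", "https://en.wikipedia.org/w/index.php?oldid=1"))
```

A crawler that skips this check doesn’t “capture more data”; it mostly burns its own bandwidth re-downloading content the site owner already told it was redundant.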
Because you are coming from the perspective of a reasonable person
These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already
That’s what they are saying though. These shouldn’t be thought of as “rules”, they are suggestions near universally designed to point you to the most relevant content. Ignoring them isn’t “stealing something not meant to be captured”, it’s wasting time and resources of your own infra on something very likely to be useless to you.
They want everything. Does it exist, but it’s not in their dataset? Then they want it.
They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.
This is some fucking stupid situation: we finally got somewhat faster internet, and these bots messing with each other are hogging the bandwidth.
This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?
The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.
That’s just BattleBots with a different name.
You’re not wrong.
No, it is far less environmentally friendly than RC bots made of metal, plastic, and electronics full of nasty little things like batteries, blasting, sawing, burning and smashing one another to pieces.
Not exactly how I expected the AI wars to go, but I guess since we’re in a cyberpunk world, we take what we get
Next step is an AI that detects AI labyrinth.
It gets trained on labyrinths generated by another AI.
So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn’t get lost.
It’s gonna be AI all the way down.
All the while each AI costs more power than a million human beings to run, and the world burns down around us.
The same way they justify cutting benefits for the disabled to balance budgets, instead of putting taxes on the rich or just not giving them bailouts, they will justify cutting power to you before a data centre running 10 corporate AIs all fighting each other, unless we as a people stand up and actually demand change.
Vote Blue No Matter Who
Any Democrat is Better than Any Republican
Plenty of Democrats are voting to put Trump nominees in office, plenty are voting for partisan spending bills. The CR vote should tip you off that any Democrat is not better than any Republican… half of them are complicit too. 10 Senate Dems just financed this authoritarian takeover.
Not a single Democrat voted to confirm Hegseth, and 3 Republicans didn’t either, but he still got confirmed.
Every single Democrat was present and voted no on the budget that passed the House, and it still passed.
Even if 10 Dems voted not to shut down the government and enter congressional recess, the CR only exists because Republicans wrote it and won’t compromise.
Any Democrat is Better than Any Republican.
Schumer rubberstamped autocracy by not filibustering the CR. I think anyone who protects the constitution and their constituents is better than someone who doesn’t. Not that any Republicans fit the bill, but it’s not like we can just trust any old Democrat. Look at Gavin Newsom sliding to the right to maintain power. Is that the kind of Dems we want?
Relevant excerpt from part 11 of Anathem (2008) by Neal Stephenson:
Artificial Inanity
Note: Reticulum=Internet, syndev=computer, crap~=spam
“Early in the Reticulum—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.
“Crap, you once called it,” I reminded him.
“Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”
“What is good crap?” Arsibalt asked in a politely incredulous tone.
“Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.”
“As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R.”
“Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”
“So, are Artificial Inanity systems still active in the Rampant Orphan Botnet Ecologies?” asked Arsibalt, utterly fascinated.
“The ROBE evolved into something totally different early in the Second Millennium,” Sammann said dismissively.
“What did it evolve into?” Jesry asked.
“No one is sure,” Sammann said. “We only get hints when it finds ways to physically instantiate itself, which, fortunately, does not happen that often. But we digress. The functionality of Artificial Inanity still exists. You might say that those Ita who brought the Ret out of the Dark Age could only defeat it by co-opting it. So, to make a long story short, for every legitimate document floating around on the Reticulum, there are hundreds or thousands of bogus versions—bogons, as we call them.”
“The only way to preserve the integrity of the defenses is to subject them to unceasing assault,” Osa said, and any idiot could guess he was quoting some old Vale aphorism.
“Yes,” Sammann said, “and it works so well that, most of the time, the users of the Reticulum don’t know it’s there. Just as you are not aware of the millions of germs trying and failing to attack your body every moment of every day. However, the recent events, and the stresses posed by the Antiswarm, appear to have introduced the low-level bug that I spoke of.”
“So the practical consequence for us,” Lio said, “is that—?”
“Our cells on the ground may be having difficulty distinguishing between legitimate messages and bogons. And some of the messages that flash up on our screens may be bogons as well.”
Read Anathem last year, really enjoyed it!
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
Here’s the key distinction:
This only makes AI models unreliable if they ignore “don’t scrape my site” requests. If they respect the requests of the sites they’re profiting from using the data from, then there’s no issue.
People want AI models to not be unreliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who explicitly opted it out of training.
I’m a person.
I don’t want AI, period.
We can’t even handle humans going psycho. Last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.
Got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich, as is.
We can’t even handle humans going psycho. Last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.
That is simply not how “AI” models today are structured; that is entirely a fabrication based on science-fiction media.
An LLM is a series of matrix multiplication problems that the tokens from a query are run through. It does not have the capability to be overworked, to know whether it has been used before (outside of its context window, which is itself just previously stored tokens added to the math problem), to change itself, or to arbitrarily access any system resources.
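To make the “it’s just matrix multiplication” point concrete, here’s a toy sketch in plain Python: the “model” is nothing but fixed weight matrices, and answering a query is a deterministic pass of numbers through them. The weights and inputs are arbitrary illustrative numbers, not a real architecture:

```python
def matmul(a, b):
    """Plain matrix multiply on nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# Frozen "weights": fixed after training, never changed at inference time.
W = [[2, 0], [0, 3]]

def forward(tokens):
    """One 'layer': multiply the token vectors by the fixed weights."""
    return matmul(tokens, W)

query = [[1, 1]]
print(forward(query))  # [[2, 3]]
print(forward(query))  # identical: no state, nothing to get "tired"
```

Run it once or a million times, the same input produces the same output; there is no mood, fatigue, or memory anywhere in the computation for it to “lose its shit” with.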
You must be fun at parties.
- Say something blatantly uninformed on an online forum
- Get corrected on it
- Make reference to how someone is perceived at parties, an entirely different atmosphere from an online forum, and think you made a point
Good job.
- See someone make a comment about an AI going rogue after being forced to produce too much goblin tentacle porn
- Get way too serious over the factual capabilities of a goblin-tentacle-porn-generating AI.
- Act holier-than-thou over it while being completely oblivious to comedic hyperbole.
Good job.
What’s next? Call me a fool for thinking Olestra stains are capable of sentience and that’s not how Olestra works?