Eeeeeh maybe not “CP settings”…
Theoretically we could slow down training and coast on fine-tuning existing models. Once a model’s trained, it doesn’t take that much energy to run.
Everyone was racing towards “bigger is better” because it worked up to GPT-4, but word on the street is that raw training is giving diminishing returns, so the massive spending on compute is just a waste now.
What resonated with me is people calling LLMs and Stable Diffusion “copyright laundering”. If copyright ever swung in AI’s favor it would be super easy to train an AI on stuff you want to steal, add in some generic training, and now you have a “new” piece of art.
LLMs and Stable Diffusion are just compression algorithms for abstract patterns, only one level above data.
Huh, didn’t know that! I mainly mentioned it for the fact that it was crammed into products that didn’t need it, like fridges and toasters where it’s usually seen as superfluous, much like AI.
I wonder if we’ll start seeing these tech investor pump n’ dump patterns faster collectively, given how many have happened in such a short amount of time already.
Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.
It feels like the futurism sheen has started to waver. When everything’s a major revolution inserted into every product, then isn’t, it gets exhausting.
With streaming services they’re proving it’s not viable to run a resource hog of a service with a measly monthly subscription.
With social media they’re proving it’s not viable to run a resource hog of a service for free, even with advertisement.
So naturally the best plan to monetize AI is to run a resource hog of a service with a measly monthly subscription and a free version without advertisements. /s
To get more direct to the point you could use those unrendered dummy links to ban whatever IPs click them.
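A toy sketch of that honeypot idea, assuming a hand-rolled server (every name, path, and response string here is invented for illustration): the page embeds a link no human should see, and any IP that follows it gets banned.

```python
# Hypothetical scraper honeypot: humans never see the trap link
# (hidden via CSS), but a naive crawler following every <a href>
# will request it and get its IP banned.

BANNED_IPS = set()

PAGE = '<a href="/trap" style="display:none">do not click</a>'

def handle_request(ip, path):
    """Return the response body for one request, banning trap-clickers."""
    if ip in BANNED_IPS:
        return "403 banned"
    if path == "/trap":
        BANNED_IPS.add(ip)
        return "403 banned"
    return PAGE

# A crawler that hits the trap is refused on every later request,
# while normal visitors are unaffected.
```

In practice you’d also want to keep the trap path out of robots.txt allowances and rotate it, since a careful scraper can learn to skip `display:none` links.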
With the vast amount of training data and how curated it’s becoming (Llama and Claude are going that direction), it’s infeasible to actually poison a large model to this degree.
The infrastructure would be things like fiber cable wired to each house.
But in this scenario, the ISPs would be manning the servers that your connection is routed through. So they’d still have massive influence on the speed and data.
If the government owned the servers, they could block and track down anything against state interest.
Not saying they can’t do that anyways, but at least the third party makes the process more difficult, less seamless, and gives the chance of new competitors.
So they’re ending support but will use the remaining users as guinea pigs.
Great…
I feel like shorting will always be riskier than normal investing. With stocks you have people at the company doing their best to raise that stock. With shorts you are betting against a company that’s trying to survive.
The chances of the CEO pulling something out of their ass, dubious or not, to maintain their profits is too high.
Ironically the business people are terrible at business. I genuinely think LLMs (despite their economic evils) are stunning pieces of technology.
But they are money sinks, and the only plans for profit are subscriptions or advertisements. It’s the Social Media/Streaming/Tech Startup panicked hype investing all over again. Subscriptions and advertising simply do not pay the bills for huge server and GPU farms.
But sustainability isn’t what they want is it? They want the stock to go up to then cash out when it’s about to fall. sigh
Microsoft’s bread and butter has been selling and servicing to businesses.
So with that in mind, the hell are they thinking? Windows 10’s end of life guarantees that businesses specifically will have to switch. Then the next option in line is one that will by default vacuum up all your proprietary information to feed into an AI, effectively “copyright laundering” it?
Even if there are ways to deactivate the feature, the non-tech-savvy managers will just go off of the headlines and the tech-savvy ones will recognize the security risk. And government/healthcare computers might just fork Linux into a non-open-source version.
Ironically it feels like they’re focusing too much on consumers (on extorting them) and shooting themselves in the foot for their business clientele.
It’s a little funny how everyone sobered up from perpetually investing in unprofitable free social media, then dove right back into perpetually investing in LLMs with no real plan for sustainable profit.
In terms of LLM hallucination, it feels like the name very aptly describes the behavior and severity. It doesn’t downplay what’s happening because it’s generally accepted that having a source of information hallucinate is bad.
I feel like the alternatives would downplay the problem. A “glitch” is generic and common, “lying” is just inaccurate since that implies intent to deceive, and just being “wrong” doesn’t get across how elaborately wrong an LLM can be.
Hallucination fits pretty well and is also pretty evocative. I doubt that AI promoters want to effectively call their product schizophrenic, which is what most people think when hearing hallucination.
Ultimately all the sciences are full of analogous names to make conversations easier; it’s not always marketing. No different than when physicists say particles have “spin” or “color” or that spacetime is a “fabric” or [insert entirety of String theory]…
On Discord though there’s a lot of unchecked predation. Theoretically if this were implemented it would let them see the most suspicious users that interact with an unusual number of children and review whether the messages are inappropriate.
But all that’s unlikely, because if they actually cared they’d implement other, simpler solutions first. So the idea is hypothetical, and not even the ideal one.
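If someone did build it, the first pass could be as simple as this toy heuristic (every name, field, and threshold here is made up): count how many predicted-minor accounts each adult account messages, and surface the outliers for human review.

```python
def flag_suspicious(messages, is_minor, threshold=20):
    """Flag adult accounts messaging at least `threshold` distinct minors.

    messages: iterable of (sender, recipient) account pairs.
    is_minor: predicate backed by the (hypothetical) predicted age group.
    """
    contacts = {}
    for sender, recipient in messages:
        # Only adult -> predicted-minor messages count toward the tally.
        if not is_minor(sender) and is_minor(recipient):
            contacts.setdefault(sender, set()).add(recipient)
    # Flagged accounts go to a human reviewer, not an automatic ban.
    return [user for user, minors in contacts.items()
            if len(minors) >= threshold]
```

The threshold and the age predicate are pure guesses; the point is only that the flagged list feeds human review of the actual messages rather than any automated action.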
I’m a bit annoyed at all the people being pedantic about the term hallucinate.
Programmers use preexisting concepts as analogies for computer concepts all the time.
Your file isn’t really a file, your desktop isn’t a desk, your recycling bin isn’t a recycling bin.
[Insert the entirety of Object Oriented Programming here]
Neural networks aren’t really neurons, genetic algorithms aren’t really genetics, and the LLM isn’t really hallucinating.
But it easily conveys what the bug is. It only personifies the LLM because the English language almost always personifies the subject. The moment you attach a verb to something, you imply it performed an action, unless you limit yourself to esoteric words/acronyms or overexplain with several words every time.
The gender thing is creepy, but if they could predict age groups then in a perfect world they could analyze adult users talking to children and shut that down.
Realistically though, I doubt they’d put effort into making their app safer, heavens no.
I hate that its links are “incompatible” with Firefox, even though if you trick it into thinking it’s Chrome, everything works just fine.
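The same trick works outside the browser too. A minimal sketch with Python’s stdlib (the URL is a placeholder; in a browser you’d use a user-agent switcher extension instead): send a Chrome-looking User-Agent so naive browser sniffing serves the normal page.

```python
import urllib.request

# Placeholder URL; only the header override matters here.
url = "https://example.com/"

req = urllib.request.Request(url, headers={
    # Claim to be Chrome so User-Agent sniffing lets the request through.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0.0.0 Safari/537.36",
})
# urllib.request.urlopen(req) would now fetch the page as "Chrome".
```

Sites doing this kind of sniffing see only the header string, which is exactly why the “incompatibility” evaporates the moment you change it.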
The people can, but companies still need some kind of income to exist. The owners/CEOs will just golden parachute away from the corpse.
In order to tangibly pay employees/rent/servers, a company needs either profits, subsidies, or
~~a ponzi scheme~~ inflated stocks.