But if we’re wrong about climate change we’ll have made the air breathable for no reason. ლ(ಠ益ಠლ)
LLMs can reason about information. It’s fine to call them intelligent systems.
It’s reasonable to refer to unsupervised learning as “learning on its own”.
Christopher Hitchens’ dumber brother.
An LLM trained exclusively on Facebook would be hilarious. It’d be like the Monty Python argument skit.
It’s characters from a popular TV show as knitted figures.
Which works were sampled for this?
My hypothesis is that wealth causes brain damage.
It’s an obvious overreach.
An AI-generated image is essentially the solution to a math problem. Say the images are/become illegal. Is it then also illegal to possess the input to that equation? The input can be used to perfectly replicate the illegal image, after all. What if I change a word in the prompt such that the subject of the generated image becomes clothed? Is that then suddenly legal?
I understand the concern, but it’s just incredibly messy to legislate what amounts to thought crimes.
Maybe we could do something to discourage distribution, but the law would have to be very carefully worded to prevent abuse.
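The determinism point above can be illustrated with a toy sketch. Real diffusion models are (given fixed weights and sampler) deterministic functions of the prompt and seed; here a hash stands in for the generator, and all names and prompts are hypothetical:

```python
import hashlib

def toy_generate(prompt: str, seed: int) -> bytes:
    # Stand-in for an image generator: a deterministic function of (prompt, seed).
    # A real model maps the same inputs to the same pixels in exactly this sense.
    return hashlib.sha256(f"{prompt}|{seed}".encode()).digest()

img_a = toy_generate("a person at the beach", seed=42)
img_b = toy_generate("a person at the beach", seed=42)
img_c = toy_generate("a clothed person at the beach", seed=42)

assert img_a == img_b  # identical inputs perfectly replicate the image
assert img_a != img_c  # changing one word in the prompt yields a different image
```

Which is the crux of the legal mess: the "illegal" artifact is recoverable from inputs that are themselves just text and a number.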
Not so. There are plenty of use cases that already have better solutions.
The point being that Denmark also has regulations…
I live in Copenhagen, and there are new developments going up every day.
That’s absolutely not true where I live, so maybe be careful with the generalizations.
https://en.m.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.
Among non-experts, conventional wisdom holds that corporate law requires boards of directors to maximize shareholder wealth. This common but mistaken belief is almost invariably supported by reference to the Michigan Supreme Court’s 1919 opinion in Dodge v. Ford Motor Co.
Lol
I am not sure of the relevance of the oppressed classes and with the object of duping the latter is the cravings of the oppressed classes and with the object of duping the latter
Yeah, totally. Repeating the same nonsensical sentence over and over is also how I converse. 🙄
It’s fine if you think so, but then it’s a pointless argument over definitions.
You can’t have a conversation with autocomplete. It’s qualitatively different. There’s a reason we didn’t have this kind of code generation before LLMs.
Adversus solem ne loquitor. (“Don’t argue against the sun.”)
Does AlphaGo understand go? How about AlphaStar?
When I say LLMs can understand things, what I mean is that there’s semantic information encoded in the network. A demonstrable fact.
You can disagree with that definition, but the point is that it’s absolutely not just autocomplete.
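The "semantic information is encoded" claim is usually demonstrated with embedding geometry: related concepts end up closer together in the network's vector space than unrelated ones. A minimal sketch with hand-made toy vectors (the values are hypothetical, not taken from any real model):

```python
import math

# Toy 3-d "embeddings"; real models use hundreds or thousands of dimensions.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words sit closer together than unrelated ones.
assert cosine(vecs["king"], vecs["queen"]) > cosine(vecs["king"], vecs["apple"])
```

The same measurement on real LLM embeddings gives the same qualitative result, which is why "just autocomplete" undersells what the network has learned.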
I can tell GPT to do a specific thing in a given context and it will do so intelligently. I can then provide additional context that implicitly changes the requirements and GPT will pick up on that and make the specific changes needed.
It can do this even if I’m trying to solve a novel problem.
https://en.m.wikipedia.org/wiki/Supernormal_stimulus