Doesn’t know the lyrics. Just goes meow meow meow.
Of all the ways one could recruit a botnet for propaganda, asking the victims nicely is surely the most cynical yet.
I wonder how much they’ll charge for it.
That’s a 1-month-old thread, my man :P
Not sure what you mean. The thread was created August 16, my comment was made August 21, and now here you are replying on September 24. Some fediverse hiccup maybe.
So anyways I don’t have anything a cursory search wouldn’t turn up.
The real punch line is the time you wasted along the way.
It’s not a take though, it’s a thing. The tendency to fall into irrational beliefs despite adequate intelligence has been called “dysrationalia” in psychology, and higher education and intelligence offer no protection against it. An example would be the tendency of Nobel Prize winners to espouse crazy theories later in life, which is humorously referred to as the Nobel Disease.
It’s important to note that opting into the Apple ecosystem locks you out of any form of agency over your hardware. They’ve moved hard against repairability and they maintain a stranglehold on spare parts.
For that reason I prefer my personal desktop computer to be a PC whose hardware I can open, maintain, or upgrade myself. The operating system is my choice as well.
I understand not everybody has the means or interest to tinker with their machine, but I still think Apple’s business practices regarding hardware are wasteful and polluting.
Runs away from doomscrolling
Lands on this beauty: https://longdogechallenge.com/
That’s just a quote from the woman speaking about her experience. I’ve personally often heard childbirth described as capable of causing incredibly intense pain.
Kernel/Syscalls/jail.cpp
includes the gender-neutral “they” as well. Good on them for merging that PR.
Thank you! Wow, they were truly ahead of their time. 🙃
What is this cursed place? The clickbait has eaten everything. uBlock should make this into a blank page.
Reducing emotion to voice intonation and facial expression trivializes what it means to feel. This kind of approach dates from the 70s (promoted notably by Paul Ekman) and has been widely criticized from the get-go. It’s telling of the serious lack of emotional intelligence of the makers of such models. This field keeps redefining words pointing to deep concepts with their superficial facsimiles. If “emotion” is reduced to a smirk and “learning” to a calibrated variable, then of course OpenAI will be able to claim grand things based on that amputated view of the human experience.
Wrong article?
The actual research page is so awkward. The TLDR at the top goes:
single portrait photo + speech audio = hyper-realistic talking face video
Then a little lower comes the big red warning:
We are exploring visual affective skill generation for virtual, interactive characters, NOT impersonating any person in the real world.
No siree! Big “not what it looks like” vibes.
Yeah, their reporting suffers from not adequately defining what is being measured.
From the org’s definition of bots, I’d say it’s implicit that bot activity excludes expected communication in an infrastructure, client-server or otherwise. A bot is historically understood as an unexpected, nosy guest poking around a system. A good one might be indexing a website for a search engine. A bad one might be scraping email addresses for spammers.
In any case, none of the examples you give can be reasonably categorized as bots and the full report gives no indication of doing so.
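Purely for illustration (naive rules I just made up, not anything from the report): here’s a tiny sketch of how a traffic-measurement pipeline might label requests by User-Agent, and how an app’s perfectly expected API calls can still land in an “automated” bucket if the definitions are sloppy.

```python
# Hypothetical sketch, NOT the report's methodology: "bot" counts depend
# entirely on where you draw the definitional lines.
from collections import Counter

# Made-up log entries; real measurement would use full request logs.
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/124.0",  # a person browsing
    "Googlebot/2.1 (+http://www.google.com/bot.html)",          # self-identified crawler
    "python-requests/2.31.0",                                   # script, intent unknown
    "MyApp/3.2 CFNetwork/1474",                                 # an app talking to its own API
]

def label(ua: str) -> str:
    # Naive rules: everything hinges on these arbitrary cutoffs.
    if "bot" in ua.lower():
        return "declared bot"
    if ua.startswith("Mozilla/"):
        return "browser"
    return "other automated"

print(Counter(label(ua) for ua in user_agents))
# The app's expected client-server traffic ends up "other automated",
# which is exactly the kind of conflation the comment above objects to.
```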
It’s telling that once again China is shown to coerce its citizens by threatening and punishing their loved ones. In Canada, there is a public inquiry happening right now into foreign interference in federal elections. China is a main subject, as it coerced Chinese students into meddling in a nomination contest. The students’ families and their legal student status were threatened.
Can you start by providing a little background and context for the study? Many people might expect that LLMs would treat a person’s name as a neutral data point, but that isn’t the case at all, according to your research?
Ideally, when someone submits a query to a language model, what they would want to see, even if they add a person’s name to the query, is a response that is not sensitive to the name. But at the end of the day, these models just create the most likely next token (or the most likely next word) based on how they were trained.
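To make that “most likely next token” mechanic concrete, here’s a minimal sketch of greedy decoding. The model (gpt2), the prompt, and the name in it are all my own stand-ins for illustration; the study itself concerns much larger LLMs.

```python
# Minimal sketch of greedy next-token decoding with Hugging Face transformers.
# gpt2 is just a small stand-in model, not the one from the study.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical prompt: the name "Emily" is part of the conditioning context.
prompt = "Emily asked for advice on negotiating her salary. The assistant said"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Greedy choice: whichever token scored highest comes next. Everything in the
# prompt, a person's name included, can tilt which token that is.
next_id = logits[0, -1].argmax()
print(tokenizer.decode(next_id))
```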
LLMs are being sold by tech gurus as a lesser form of general AI, and this post says at least as much about LLMs’ shortcomings as it does about our lack of understanding of what is actually being sold to us.
Good to hear, I’ll check it out again and make sure I’m not having an issue on my end.
I’d need to be paid a non-negligible amount to try and wring some speck of usefulness out of this thing.