There is no way
It’s not actually clear that it only affects huge companies. Much of open source AI today is done by working with models that have been released for free by large companies, and the concern was that the requirements in the bill would deter them from continuing to do this. The “kill switch” requirement in particular made it seem like the people behind the bill were either oblivious to this state of affairs or intentionally trying to force companies to stop releasing model weights and only offer centralized services like what OpenAI does.
If you are at the point where you have to worry about government or corporate entities setting traps at the local library? You’ve… kind of already lost.
What about a blackmailer who just assumes that anyone booting an OS from a public computer has something to hide? They have write access and there’s no real defense, and the trap doesn’t have to be everywhere, because people seeking privacy this way will be picking new locations each time. An attack like that wouldn’t have to be targeted at a particular person.
Isn’t it risky plugging USB drives into untrusted machines?
I doubt the school administrators who would be buying this thing or the people trying to make money off it have really thought that far ahead or care whether or not it does that, but it would definitely be one of its main effects.
For visibility, here is a list of the ways around YouTube ads that the video was supposedly banned for mentioning:
Desktop:
Android:
I personally use Freetube and think it’s great
Unless it’s an emergency, or you’re trying to contact a company/government entity that will otherwise stonewall you with template emails, I think this is fine. If someone just called me on the phone I’d hate it, and I don’t want to inflict that on others.
I bet you could do it with ring signatures
A message signed with a ring signature is endorsed by someone in a particular set of people. One of the security properties of a ring signature is that it should be computationally infeasible to determine which of the set’s members’ keys was used to produce the signature.
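To make the idea a bit more concrete, here is a minimal sketch of an AOS/Schnorr-style ring signature in Python. Everything in it is my own illustration: the toy group parameters, function names, and hash construction are assumptions for demonstration, not what any real system uses, and a real deployment would use an audited library over a proper elliptic-curve group.

```python
# Minimal AOS/Schnorr-style ring signature sketch over a toy Schnorr group.
# Toy-sized parameters on purpose -- this only illustrates the signer-hiding idea.
import hashlib
import secrets

P = 2039   # small safe prime (p = 2q + 1), illustration only
Q = 1019   # prime order of the subgroup we work in
G = 4      # generator of the order-Q subgroup of Z_P*

def H(message: bytes, point: int, index: int) -> int:
    data = message + point.to_bytes(8, "big") + index.to_bytes(2, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1       # private key
    return x, pow(G, x, P)                 # (private, public)

def sign(message: bytes, pubkeys: list, signer_index: int, signer_priv: int):
    n = len(pubkeys)
    c = [0] * n
    z = [0] * n
    k = secrets.randbelow(Q - 1) + 1
    # Start the ring one step after the real signer.
    c[(signer_index + 1) % n] = H(message, pow(G, k, P), (signer_index + 1) % n)
    i = (signer_index + 1) % n
    while i != signer_index:
        z[i] = secrets.randbelow(Q - 1) + 1             # fake responses for everyone else
        r = (pow(G, z[i], P) * pow(pubkeys[i], c[i], P)) % P
        c[(i + 1) % n] = H(message, r, (i + 1) % n)
        i = (i + 1) % n
    # Close the ring using the real private key.
    z[signer_index] = (k - signer_priv * c[signer_index]) % Q
    return c[0], z

def verify(message: bytes, pubkeys: list, signature) -> bool:
    c0, z = signature
    c = c0
    for i in range(len(pubkeys)):
        r = (pow(G, z[i], P) * pow(pubkeys[i], c, P)) % P
        c = H(message, r, (i + 1) % len(pubkeys))
    return c == c0   # the chain of challenges must close back on itself

if __name__ == "__main__":
    keys = [keygen() for _ in range(4)]
    pubs = [pk for _, pk in keys]
    sig = sign(b"hello", pubs, 2, keys[2][0])   # member 2 signs
    print(verify(b"hello", pubs, sig))          # True, but which member signed stays hidden
```

The signer-hiding property in this sketch comes from the fact that all the z values end up uniformly distributed and verification treats every position in the ring identically, so nothing in the finished signature points to which member closed the ring.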
I agree that it’s bad that there’s a false impression of privacy, but I think it would be better to allow this as an extension or something and not include it as a feature in the UI, or at least not on by default. That way people who otherwise wouldn’t bother won’t be tempted to drive themselves crazy looking for imaginary enemies.
Can anyone recommend any cool mods/projects built on top of Minetest?
The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.
The system gives a probability distribution for the next word based on the prompt, and that distribution will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic RNG to the input or output, but that’s a choice, not something inherent to how LLMs work. Random ‘seeds’ are normally used as part of deterministically repeatable RNG. I’m not sure what you mean by “independently” calculated: you can calculate the output if you have the model weights, and you likely can’t if you don’t, but that doesn’t affect how deterministic the system is.
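As a toy illustration of that point (nothing here is a real LLM; fake_next_token_distribution is a made-up deterministic stand-in): the “model” maps a prompt to a fixed distribution, greedy decoding adds no randomness at all, and any sampling randomness is something you bolt on and can make repeatable with a fixed seed.

```python
# Sketch: a deterministic "model" plus an optional, seedable sampler.
import random
from math import exp

def fake_next_token_distribution(prompt: str) -> dict:
    # Deterministic stand-in for an LLM forward pass: same prompt, same "logits".
    logits = {tok: float(len(prompt) % (i + 3)) for i, tok in enumerate(["cat", "dog", "fish"])}
    total = sum(exp(v) for v in logits.values())
    return {tok: exp(v) / total for tok, v in logits.items()}

def greedy(prompt: str) -> str:
    dist = fake_next_token_distribution(prompt)
    return max(dist, key=dist.get)              # no randomness at all

def sample(prompt: str, seed: int) -> str:
    dist = fake_next_token_distribution(prompt)
    rng = random.Random(seed)                   # seeded RNG -> repeatable "randomness"
    return rng.choices(list(dist), weights=list(dist.values()))[0]

print(greedy("the quick brown"), greedy("the quick brown"))            # always identical
print(sample("the quick brown", 42), sample("the quick brown", 42))    # identical for the same seed
```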
The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, but it’s doubly impossible given that you can’t.
The impossibility of defining morality in precise terms, or even agreeing on what correct moral judgment is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that succeeds in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic-processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint, because these systems are not going to be equally dangerous no matter how they are made or used.
They are deterministic though, in a literal sense; it’s more that their behavior is undefined. And yes, an LLM is not a person and it’s not quite accurate to talk about them knowing or understanding things. So what though? Why would that be any sort of evidence that research efforts into AI safety are futile? This is at least as much an engineering problem as a philosophy problem.
This one can do that stuff: https://github.com/huchenlei/ComfyUI-layerdiffuse?tab=readme-ov-file
So it is a way for Lemmy instances to let people log in with their Reddit accounts? Neat
Seems broken
We’re unable to submit your comments to congress because of a problem on our end. We apologize for the inconvenience. Please try again later.
IMO being opposed to civil war in the US and taking that seriously is a legitimate position.
AI has honestly made me a much more powerful Linux user
The project’s authors also acknowledged that most of the proposals would require the Republican Party to control both the U.S. House of Representatives and the U.S. Senate.
In July 2024, Trump disavowed Project 2025.
There’s also the Supreme Court’s recent ruling that limits the ability of federal agencies to legally act without more explicit direction from Congress, and gives courts more ability to scrutinize their actions.
This plan would represent a terrifying slide into totalitarianism and is something we should take steps to avoid, but I think there’s a lot going against it and reasons to be hopeful that it will fail.
A while ago I read the book Swarmwise by Rick Falkvinge about starting a political movement in Sweden, and some aspects of how their democracy works seemed comparatively impressive to me, and more capable of genuine representation, because the barriers to getting started are not so insurmountable. Still, I’m not convinced by the overall narrative that changes to the structure of government are generally positive.

You used a technology metaphor, but the clear trend has been for tech platforms to get worse over time in terms of user agency, privacy and exploitation, and that seems mirrored in government. A lot of what people see as solutions take the form of more centralized control and weaker barriers to that control, and I see those barriers as the ideological core of how the US was originally designed to work. A specific law might have positive results in itself but be achieved through an unsafe concentration of power. In particular, I think the way the executive branch has expanded over the last century is very concerning, especially with stuff like the Patriot Act and everything associated with it.
Basically, especially right now it’s clear that a lot of the people in power are malevolently insane, incompetent and demented, and it’s really important that we maintain and improve protections to keep them from doing too much damage, so I am skeptical about ideas for major reform especially when the idea is to take the shortest path to policy goals.
So I guess all that stuff they did to lock down the ability to see things on Xitter without an account was strictly for evil then