I agree there’s abuse, but there are laws:
Article explaining the laws used as support / Article with historical precedent.
Both in Portuguese.
There’s the possibility Starlink will refuse the order to block Twitter. I don’t use one of the major providers, so I’m still unaffected. I just learned there are twenty thousand smaller registered providers.
I was talking with my therapist earlier today about how often we have this type of discussion. It’s always nice to pause and remind ourselves, and those outside, of our philosophy. One thing that I’d like to add is that we might not be(e) nice sometimes because of personal circumstances. We might be having a bad day, and a comment triggers a reaction that would otherwise be uncommon, or we might be aggressive without any provocation.
In cases where we feel the need to hit back, I’d advise postponing the response by at least one hour. Give yourself time to clear your mind and think things over. And if you are the target of users having a bad day, the alternative is reminding them that they are not be(e)ing nice. Asking questions is best: “Did I offend you?”, “Did I say something wrong?”, “I don’t understand what the issue is.” Even if they keep up the aggression, they will either point to the specific issue that needs to be worked on or prove they don’t want a genuine discussion.
Does it really work like that? I would say that they are not trying to fool any test, just getting harder to detect. The goal is to look completely realistic.
There was a serious security vulnerability prior to Python 3.11, if I recall correctly. You can use pyenv to manage Python versions though: https://github.com/pyenv/pyenv
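If the worry is accidentally running on an interpreter older than 3.11, a minimal guard like the sketch below can fail fast. This is just an illustration, not pyenv itself, and the 3.11 floor is simply the version mentioned above, not a verified cutoff.

```python
import sys

# Assumed floor: the comment above mentions a vulnerability fixed around 3.11.
MINIMUM = (3, 11)

if sys.version_info < MINIMUM:
    # Exit with a clear message instead of failing later in surprising ways.
    raise SystemExit(
        f"This script expects Python {MINIMUM[0]}.{MINIMUM[1]}+, "
        f"but is running on {sys.version.split()[0]}."
    )
```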
I think it’s about generating alt text for people with disabilities when it’s missing from pictures.
I hate the term and the fact it became widespread. Unfortunately, mass adoption also means it will mutate and evolution will follow its course.
The obvious solution on X’s side is to ID everyone that wants to post anything. And remember that the obvious solution doesn’t have to be the best solution, a good solution, or even a real solution at all.
Maybe people are not really choosing, just going with the only option they know or remember. If they had to choose from a menu, the first option would be the most likely pick, and I imagine randomness would be involved.
“If you have an outcome-based approach and you do not reach the goals, then you have to apply additional measures […] whereas now you say okay, I tried, but unfortunately, it didn’t turn out the way I wanted to,” Paulus explained.
Politicians and producers love good ideas that will attract the public’s attention but can be tweaked just enough to not be executed as intended.
After ruining some installations and learning some more, I started questioning why pyenv and basic venv management are not taught at the beginning.
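As an illustration of the kind of basics that could be taught early, here’s a minimal sketch using only the standard library’s venv module; the .venv directory name is just a common convention I’m assuming, not anything required.

```python
import venv
from pathlib import Path

# Conventional location for a per-project environment; any path works.
env_dir = Path(".venv")

if env_dir.exists():
    print(f"{env_dir} already exists; activate it instead of recreating it.")
else:
    # with_pip=True bootstraps pip inside the new environment.
    venv.create(env_dir, with_pip=True)
    print(f"Created virtual environment at {env_dir.resolve()}")
```

Keeping installs inside a per-project environment like this is what saves the system Python from being ruined in the first place.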
He was, uh, totally asking for it.
I’ll admit that I got confused. If you visit the site, the article is a response to the research that says women also hit men. I’d argue they simply chose stories of men beating women, flipped the genders, and wanted people to be outraged.
Telegram is the same. It’s the app people will migrate to because it’s the app people learned to use when WhatsApp can’t operate for some reason. Not many people are there; people here are overly attached.
For the people who suggest users just change apps: imagine I ban all your current forms of text communication (you can still have e-mail), but only for you; your family and friends keep their ecosystems. Would you care that you can’t talk to them anymore? Could you convince them to use a new app? Does it affect your life beyond social interactions? Is it worth making your life harder?
The article didn’t go in the direction I expected. Theoretically, open source software can be fixed by experts outside the main company, but that would be very niche. The expert would need to be familiar with the specific hardware at the very least, have some degree of medical knowledge, and, in some cases, have access to the individual in need.
Forced updates and treating medical software as no more special than a game are the problem when dealing with apps. Tag medical apps and make it so that system updates have to be manual or go through warnings before being deployed. Offer the option to go back to a version that previously worked. Create regulations to make companies liable for malfunctions.
The problem that I see is that power comes in great part from the responsibility to educate yourself. In a community, you don’t have to know everything to contribute to its workings, but someone has to, and if enough people do, you escape the clutches of external players. Everything is quite individualistic right now, though. Things must just work, without help from anyone.
They can block access to the site if they don’t comply. Then people use a VPN.
I don’t think it’s the same concern. It’s not that people will become pedophiles or act on it more because of the normalization and exposure; it’s that people will see less of a problem with the sexualization of children. The parallel is the amount of violence we are OK with being depicted. The difference is that only the sexual side can be emulated on a personal level.
Maybe there’s the argument that violence is escapist, sexual desire is ever present and porn is addictive.
That’s really curious. LLMs were usually on the other side of this note and not considered the traditional AI people referred to.
I thought the same. Now platforms have a target audience to focus on. The accounts move, the artists have to follow, and the rest have a reason to move as well.