• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I don’t disagree, but don’t pretend you haven’t effectively set up the equal and opposite thing here. No mods will ban anyone but other than that every comment section is an implicit competition for best pro-Palestinian talking point, even when decency demands otherwise. We don’t talk about Oct 7, and if we do it was friendly fire, and if it wasn’t it was a natural consequence of Israeli policy in Gaza and that is the real issue. Yeah fine we admit the attack was not a hundred percent morally sound if you insist so much, but we don’t assign a moral weight to it or linger on it because hey when you make innocents suffer, you sow the wind and eventually reap the whirlwind, oh sure Hamas’ response was ugly but what can you do, you know, be a bastard and it comes around. Now it is our moral duty to call loud and clear for a ceasefire – the cycle of violence must stop.

    I know what you’re thinking: that’s not fair! That’s not my opinion! Yeah, the circlejerk doesn’t care about your private opinion. You know better than to contradict any of the above around here in writing, and that’s enough. I’m sure a lot of people privately think “oh… tbh that last IDF strike was unconscionable” before posting on /r/worldnews the part of their opinion they know the crowd will like better.






  • The prime problem is that every social space eventually becomes a circlejerk. Bots and astroturfing exacerbate the problem, but it exists perfectly well on its own – in the early 2000s I had the misfortune of running across plenty of gigantic, years-long circlejerks where definitely no bots or nefarious foreign manipulators were involved (I’m talking console wars, Harry Potter ship wars, stupid shit like that). People form circlejerks the same way salts form crystals. It’s just in their nature.

    The thing with circlejerks isn’t that there’s overwhelming agreement on some subject. You’ll get dunked on in almost any social media space for claiming that the Earth is flat or that Putin is a swell guy; that in itself is obviously not a problem. What makes a circlejerk is that takes get cheered for and upvoted not in proportion to how well they are anchored in reality, but in proportion to how useful they are in galvanizing allies and disrupting enemies. Whoever shouts “glory to the cause” in the most compelling way gets all the oxygen. From that point the amount of brain rot can only increase. No matter how righteous the cause, there inevitably comes a point where you can go on the Righteous Cause Forum, post “2+2=5, therefore all glory to the cause”, and get 400 upvotes.

    Everyone talks a big game about how much they like truth, reason and moral consistency, but in the end when it’s just them and the upvote button and “do I stop and honestly examine this argument that gives me warm fuzzy feelings”, “is it really fair to dunk on Hated Group X by applying a standard I would never apply to anyone else” – the true colors show. It’s depressing and it makes most of social media into information silos where totalizing ideologies go to get validated, and if you feel alienated by this then clearly that space isn’t for you.



  • I do exactly this kind of thing for my day job. In short: reading a syntactic description of an algorithm written in assembly language is not the equivalent of understanding what you’ve just read, which in turn is not the equivalent of having a concise and comprehensible logical representation of what you’ve just read, which in turn is not the equivalent of understanding the principles according to which the logical system thus described will behave when given various kinds of input.
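    The gap between those levels is easiest to see on a toy example. This is a purely hypothetical illustration (made-up listing and function names, nothing from my actual work), sketched in Python:

```python
# Level 1: the syntactic description. You can "read" this x86-ish
# listing (hypothetical, simplified) without grasping what it computes:
#
#     test edi, edi        ; x == 0 ?
#     jz   .no
#     lea  eax, [rdi-1]    ; tmp = x - 1
#     test eax, edi        ; tmp & x
#     jz   .yes
#
# Level 2: a faithful transliteration. Now you know what each step
# does, but the routine's *purpose* is still not obvious:
def transliterated(x: int) -> bool:
    if x == 0:
        return False
    tmp = x - 1
    return (tmp & x) == 0

# Level 3: the concise, comprehensible logical representation:
def is_power_of_two(x: int) -> bool:
    return x > 0 and (x & (x - 1)) == 0

# Level 4: understanding *why* it behaves this way on all inputs:
# subtracting 1 flips every bit up to and including the lowest set
# bit, so the AND is zero exactly when a single bit was set.
assert all(transliterated(x) == is_power_of_two(x) for x in range(1, 4096))
```

    Each level is real work to get to from the previous one, which is the whole point.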


  • Even the bluest and whitest Israeli apologist, convinced that the Israelis are the good guys in this conflict, will – if they’re being honest – tell you: “Hamas started a war and is hiding behind these civilians as human shields, so this is what happens, do not expect us to stay our hand to prevent it, or to take responsibility for it, what if it was your country in this position, you would change your tune real quick”, etc etc etc. In essence, welcome to the real world, where this sort of thing can just happen and we do not have the ethical tools or framework to make it not happen. This is depressing as fuck.

    A lot of Israelis imagine that in the aftermath of all of this, Gaza will lose the capacity to launch another 7/10 and ‘learn its lesson’, which in itself will magically lead to a bright and peaceful future for the region. Somehow I am not so optimistic. Pragmatically speaking, the Israelis themselves are in no position to say “now that we’ve bombed you, let us uplift you”, but egads, someone should do something. The knowledge that, even after Israel decides it has done enough and winds down its Gaza operation, apparently no sane governing body wants to take responsibility for Gaza saddens me to no end. These people just deserved better, I don’t care how much they cheered for 7/10 or whatever. There can be no justice or peace without compassion.


  • This is an issue that has plagued the machine learning field since long before this latest generative AI craze. Decision trees you can understand, SVMs and Naive Bayes too, but the moment you get into automatic feature extraction and RBF kernels and stuff like that, it becomes difficult to understand how the verdicts issued by the model relate to the real world. Having said that, I’m pretty sure GPTs are even more inscrutable and have made the problem worse.
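    To make the contrast concrete, here’s a toy sketch (hand-picked numbers, no real library, dataset, or fitted model): the same kind of yes/no verdict, once as a tree whose rules a human can read off directly, and once as an RBF-kernel-style decision function whose parameters no longer map back to anything legible.

```python
import math

def tree_predict(age: float, income: float) -> int:
    # Interpretable: every verdict corresponds to a rule you can state
    # in plain English ("high income, or young, -> approve").
    if income > 50_000:
        return 1
    if age < 30:
        return 1
    return 0

# Hypothetical fitted parameters: (weight, support_vector) pairs.
SUPPORT = [(0.8, (25.0, 60_000.0)),
           (-1.1, (55.0, 20_000.0)),
           (0.4, (31.0, 52_000.0))]
GAMMA = 1e-9
BIAS = 0.05

def rbf_predict(age: float, income: float) -> int:
    # Opaque: the verdict is a weighted sum of Gaussian bumps around
    # support vectors. Numerically precise, but *why* a given person
    # was rejected is not recoverable by staring at the weights.
    score = BIAS
    for w, (sa, si) in SUPPORT:
        dist_sq = (age - sa) ** 2 + (income - si) ** 2
        score += w * math.exp(-GAMMA * dist_sq)
    return 1 if score > 0 else 0
```

    Both functions draw a boundary through the same feature space; only one of them can be explained to the person on the wrong side of it.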




  • I’m doing a nothing playthrough, focused around progressing the plot by doing as little as possible. I finished act 1 by heading to the mountain pass and sneaking past all the gith and undead (this required a potion of invisibility). Went to Last Light, spoke to no one, then went immediately back out, skipping Moonrise Towers and the entire shadow curse theme park, and going straight to the temple. Sneaked past everyone there and was very disappointed to find B waiting for me at the end anyway – how did he know I was there? Well, I’m like level 3 or something, so I cheese him with a silence spell before he has the chance to get going, then shove him off a cliff. Up next: sneaking into Moonrise Towers without anyone noticing and then guilt-tripping the guy in charge, because Selune Shar Myrkul knows it’s not a fight I’m going to win. AFAIK this is where the fun and games end though, because the final phase of the act 2 boss requires that you actually be capable of doing something, which my MC for this run canonically isn’t.



  • Evidently no one warned you how this game plays and what it’s about, and you walked in expecting Mass Effect or something.

    That is not the experience Disco Elysium delivers, or attempts to deliver. If you want a game about being the savior of the Eight Kingdoms and immediately bringing your combat might and razor-sharp wit to bear on the scene a moment after you arrive, you can play approximately every other RPG ever.


  • Yes, definitely. It stirred up a lot of turmoil and a whole gamut of spicy takes regarding the fundamental question of whether password managers as a model “work”. On the one hand, some people laughed at the idea of putting your passwords in the cloud and touted post-it notes as a more secure alternative. On the other hand, people extolled the virtues of the cryptographic model at the base of password managers, claiming that even if tomorrow the entire LastPass executive org went rogue, your passwords would still be safe.

    As far as I understand, the truth is more nuanced. Consider that this breach took place 9 months ago, but you’re only reading about cracked passwords now. It seems the model did what it was supposed to do, and the people behind the breach had to patiently brute-force victims’ master passwords. This means they got to the least secure passwords first: if you picked “19 deranged geese obliterating a succulent dutch honey jar at high noon” or whatever, you’re probably safe. But it doesn’t strike me as too wise to get complacent on account of this, either. Suppose next time the attackers get enough access to “tweak” the LastPass Chrome extension to exfiltrate passwords. Now what?
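    The arithmetic behind “they got to the least secure passwords first” is easy to sketch. These are assumed numbers, not LastPass’s actual KDF settings or any real rig’s throughput:

```python
import math

# Assumption: an offline rig testing ~1e6 KDF-hardened guesses/second.
GUESSES_PER_SEC = 1e6

def entropy_bits(pool_size: int, length: int) -> float:
    # A password of `length` symbols drawn uniformly from a pool.
    return length * math.log2(pool_size)

def years_to_exhaust(bits: float) -> float:
    # Worst case: trying every candidate in the keyspace.
    return 2 ** bits / GUESSES_PER_SEC / (3600 * 24 * 365)

weak = entropy_bits(26, 8)      # 8 lowercase letters: ~37.6 bits
strong = entropy_bits(7776, 6)  # 6 random diceware words: ~77.5 bits

# At this rate the weak vault falls within days, while the
# deranged-geese-style passphrase holds out for billions of years.
```

    The point isn’t the exact figures – double the rig and halve the times – it’s the gap of forty-odd bits between the two, which no amount of hardware closes.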

    The thing is, we’re stuck between a rock and a hard place with passwords. We already know it’s impractical to ask users to remember 50 different secure passwords. So assuming we solve this with a password vault, there’s no optimal place to keep it. In the cloud you get incidents like this. Outside of the cloud, one day you’re going to lose your thumb drive, your machine, your whatever. “So keep a backup” – but who among your normie relatives is honestly going to do this, and do you really trust a backup you haven’t used in 5 years to work in the moment of truth? I don’t know if there is any proper solution in the immediately visible solution space, and if there is, I don’t know if anyone has the financial incentive to implement it, sell it, or buy it. People say the future is in passwordless authentication, FIDO2 etc., but try to google actually using one of these for your 5 most-used accounts; you’re not going to come out of the experience very thrilled.




  • Well, fine, and I can’t fault new published material for having a “no AI” clause in its terms of service. But that doesn’t mean we get to dream this clause into being retroactively for all the works ChatGPT was trained on. Even the most reasonable law in the world can’t be enforced against someone who broke it 6 months before it was legislated.

    Fortunately the “horse out of the barn” effect here is maybe not so bad. Imagine the FOMO and user frustration when ToS and legislation catch up and ChatGPT suddenly has no access to the latest books, music, news, or research – just stuff from before authors knew to include the “hands off” clause, basically like the knowledge cutoff, but forever. It’s untenable; OpenAI will be forced to cave and pay up.