  • 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 1st, 2023



  • I have observed people taking Rust seriously. You need to reexamine your assumptions.

    We have an evolved capability to short-circuit decisions with a rapid emotional evaluation. It’s part of why we didn’t die out early as a species [“that’s a lion; I’m a person; lions eat people, ergo… Agh!” is not a sustainable strategy]. What’s amazing is that we can also apply it to learned abstract things like an aesthetic sense about programming languages. Such instincts aren’t always perfect, but they’re still worth paying attention to. I don’t see a reason not to express that in a blog post, but you can replace it with “this is unergonomic and in some cases imprecise” if you prefer.







  • Is this problem a recurring one after a reboot?

    If it is, it warrants more effort.

    If not, and you’re happy with the lack of closure, you can potentially fix this: kill the old agent (watch to see whether it respawns; if it does and that works, fine). If it doesn’t, you can (a) remove the socket file and (b) launch ssh-agent with the right flag (-a $SSH_AUTH_SOCK) so it listens at the same place; then future terminal sessions that inherit the env var will still look in the right place. Unsatisfactory, but it’ll get you going again.
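
    A rough sketch of that recovery, assuming the stale socket path is still in $SSH_AUTH_SOCK and that the pkill pattern only matches your own agent (both assumptions worth checking first):

        # Stop the wedged/old agent (check whether something respawns it)
        pkill -u "$USER" ssh-agent

        # Remove the stale socket the environment variable still points at
        rm -f "$SSH_AUTH_SOCK"

        # Start a fresh agent listening on the same path, so terminals that
        # already inherited SSH_AUTH_SOCK keep working
        ssh-agent -a "$SSH_AUTH_SOCK"

        # Re-add your key and confirm the agent answers
        ssh-add
        ssh-add -l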


  • Okay, that agent process is running, but it looks wedged: multiple connections to the socket appear to be open, probably from your other attempts to use ssh.

    The ssh-add output looks like it’s responding a bit, however.

    I’d use your package manager to work out which package owns that agent binary and go looking for open bugs against it.

    (Getting a trace of the agent process itself while you retry would be handy; there may be a clue in its behaviour.)

    The server response suggests the handshake is close to completing. It’s not immediately clear what’s going wrong there, I’m afraid.
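
    For the digging itself, a hedged sketch assuming a Debian-style system with lsof and strace installed (the hostname and the single-agent assumption are illustrative):

        # Which package owns the agent binary (use rpm -qf or similar elsewhere)
        dpkg -S "$(command -v ssh-agent)"

        # What has the socket open right now
        lsof -U | grep ssh-agent

        # Watch what the agent does while you re-run ssh-add in another terminal
        # (pgrep -n picks the newest agent if there are several)
        strace -f -p "$(pgrep -u "$USER" -n ssh-agent)"

        # Verbose client-side view of the handshake
        ssh -vvv example.com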







  • I’d be cautious about the “kill -9” reasoning. It isn’t necessarily equivalent to yanking power.

    Contents of application memory lost, yes. Contents of unflushed OS buffers, no. Your db will be fsyncing (or moral equivalent thereof) if it’s worth the name.

    This is an aside: backing up from a volume snapshot is half of a reasonable idea. (The other half is ensuring that you can restore from that backup, regularly and automatically; the third half is ensuring that the automated validation itself can be relied on.)
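
    For that “other half”, a minimal sketch of an automated restore check, assuming for brevity a PostgreSQL pg_dump-style backup rather than the raw volume snapshot (the database name, dump path, and sanity query are all illustrative; run it on a schedule so it’s regular and automatic):

        # Restore the latest backup into a throwaway database and sanity-check it
        createdb scratch_restore
        pg_restore --dbname=scratch_restore /backups/latest.dump
        psql --dbname=scratch_restore -c "SELECT count(*) FROM orders;"   # hypothetical table
        dropdb scratch_restore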




  • Casey’s video is interesting, but his example frames moving from 35 cycles/object to 24 cycles/object as a 1.5x speedup.

    Another way to look at it: that’s an 11-cycle saving per object (quick arithmetic below).

    If you’re writing a shader or a physics sim this is a massive difference.

    If you’re building typical business software, it isn’t; the 10,000-line monster method, on the other hand, does crop up, and it’s a maintenance disaster.

    I think extracting “clean code principles lead to a 50% cost increase” as the take-away is a message that needs a degree of context.
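
    As a quick check of the arithmetic behind both framings, using the figures quoted above:

        $35 / 24 \approx 1.46 \approx 1.5\times$  (the relative framing)
        $35 - 24 = 11$ cycles saved per object  (the absolute framing)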


  • The test case purported to be bad data, which is exactly the sort of thing you’d presumably want to test your dearchiver’s correct behaviour against.

    Nothing this attack did appears to involve memory safety. It uses features like ifunc to hook behaviour.

    The notion of reproducible CI is interesting, but there’s nothing preventing this setup from repeatedly producing the same output in (say) a Debian package build environment.

    There are many signs here that look “obvious” with hindsight, but ultimately this comes down to establishing trust. Technical sophistication aside, this was a very successful attack against that trust foundation.

    It’s definitely the case that the stack of C build tooling (CMakeLists.txt, autotools) makes obfuscating content easier. You might point at modern build tooling like cargo as an alternative; however, build.rs and proc macros are not typically sandboxed at present (benign illustration below), and I think it’d be possible to replicate the effects of this attack using that tooling.
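
    A benign sketch of the build.rs point, using a scratch crate (the crate name is illustrative and none of this is the actual attack; it just shows that build scripts run arbitrary, unsandboxed code during cargo build):

        # Create a throwaway crate and give it a build script
        cargo new unsandboxed-demo && cd unsandboxed-demo
        printf 'fn main() { println!("cargo:warning=build.rs ran as {}", std::env::var("USER").unwrap_or_default()); }\n' > build.rs

        # Building the crate compiles *and runs* build.rs with full user privileges;
        # the warning it emits is proof that it executed
        cargo build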