• 0 Posts
  • 58 Comments
Joined 5 months ago
Cake day: June 5th, 2024



  • the paper’s ostensibly liberal/progressive line

    They’re aligned with the Liberal party, a centrist party that is seldom, if ever, progressive. The Guardian does run the occasional article by a progressive, but it publishes articles by conservatives too. When the Labour Party was led by Corbyn, the Guardian was consistently critical of Labour policy and bought into the right-wing press’s phony accusations that Corbyn was antisemitic. Overall, the Guardian’s core politics are those of the metropolitan bourgeoisie, as can be seen in its lifestyle and media commentary, as well as its general smugness. And on economic matters, its coverage is utterly useless. There, the Economist and the FT are far superior, despite the occasionally odious politics of their editorial pages.

    I still read the Graun, though, since the rest of the British press is far, far worse.






  • Interoperability is a big job, but how much it matters varies widely with the use case. There are layers of standards atop other standards, some new, some near deprecation. Some extremely large and complex datasets need a shit-ton of metadata just to decipher, or even to extract. More modern dataset standards bake that metadata into the file itself, but even then there are corner cases. And the standards for zero-trust security enclaves, discoverability, non-repudiation, attribution, multidimensional queries, notification and alerting, and pub/sub are all relatively new, so we occasionally hit operational situations that the standards authors didn’t anticipate.
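    To make the “metadata baked into the file” point concrete, here’s a toy sketch (not any real standard; the `META` magic, header layout, and function names are my own invention). A self-describing file carries its own JSON header; a legacy file is undecipherable without an external sidecar:

    ```python
    import json
    import struct

    def read_dataset(payload: bytes, sidecar=None):
        """Toy self-describing format: b'META', a 4-byte big-endian header
        length, a JSON metadata header, then the raw data. Legacy files
        lack the header and are unreadable without sidecar metadata."""
        if payload[:4] == b"META":
            (hdr_len,) = struct.unpack(">I", payload[4:8])
            meta = json.loads(payload[8:8 + hdr_len])
            return meta, payload[8 + hdr_len:]
        if sidecar is None:
            raise ValueError("legacy file: external metadata required")
        return sidecar, payload

    # A modern file decodes on its own...
    hdr = json.dumps({"units": "K"}).encode()
    modern = b"META" + struct.pack(">I", len(hdr)) + hdr + b"\x01\x02"
    meta, data = read_dataset(modern)

    # ...while a legacy blob fails closed without its sidecar.
    legacy = b"\x01\x02"
    ```

    Even in a toy like this the corner cases show up immediately: what if the sidecar disagrees with an embedded header, or the header was written by a newer minor version of the spec?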








  • If a self-driving car kills someone, the programming of the car is at least partially to blame

    No, it is not. It is the use to which the system has been put that is the point at which blame can be assigned. That is what should be verified and validated. That’s where some person is signing on the dotted line that the system is fit for use for that particular purpose.

    I can write a simplistic algorithm to guide a toy drone autonomously. So let’s say I GPL it. If an airplane manufacturer then drops that code into an airliner, and fails to test it correctly in scenarios resembling real-life use of that plane, they’re the ones who fucked up, not me.
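    For the sake of argument, the “simplistic algorithm” could be as trivial as this (a hypothetical sketch, purely illustrative): a bare proportional controller that steers a velocity command toward a waypoint. Nothing about it is wrong *for a toy*; everything about it is wrong for a certified airframe, and only the integrator can know which use it’s being put to.

    ```python
    import math

    def toy_guidance(pos, target, gain=0.5):
        """Naive proportional guidance for a toy drone: command a velocity
        proportional to the offset from the target. No wind model, no sensor
        fault handling, no envelope protection -- fine for a toy,
        negligent if dropped unvalidated into an airliner."""
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        if math.hypot(dx, dy) < 1e-6:
            return (0.0, 0.0)  # close enough: hold position
        return (gain * dx, gain * dy)
    ```

    The code does exactly what it claims; whether that claim is *fit for a given use* is a validation question that belongs to whoever signs off on the deployment.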






  • It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

    You’re attempting to redefine “bug.”

    Software bugs are faults, flaws, or errors in computer software that result in unexpected or unanticipated outcomes. They may appear in various ways, including undesired behavior, system crashes or freezes, or erroneous and insufficient output.

    From a software testing point of view, a correctly coded realization of an erroneous algorithm is a defect (a bug). It fails validation (a test for fitness for use) rather than verification (a test that the code correctly implements the erroneous algorithm).

    This kind of issue arises not only with LLMs, but with any software that includes some kind of model within it. The provably correct realization of a crap model is still crap.
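    The verification/validation split is easy to show with a classic toy case (my own example; the function names are invented). Suppose a spec erroneously defines average speed as “the mean of the per-segment speeds.” A faithful implementation of that spec passes verification while failing validation, because the model itself is wrong: the true average is total distance over total time.

    ```python
    def avg_speed_spec(segments):
        """Faithful implementation of the (erroneous) spec:
        average the per-segment speeds."""
        return sum(d / t for d, t in segments) / len(segments)

    def avg_speed_valid(segments):
        """What the user actually needs: total distance / total time."""
        return sum(d for d, _ in segments) / sum(t for _, t in segments)

    segments = [(60.0, 1.0), (60.0, 2.0)]  # (distance, time) pairs

    # Verification passes: the code does exactly what the spec says.
    assert avg_speed_spec(segments) == (60.0 / 1.0 + 60.0 / 2.0) / 2  # 45.0

    # Validation fails: the real average speed over the trip is 40, not 45.
    assert avg_speed_valid(segments) == 120.0 / 3.0  # 40.0
    ```

    `avg_speed_spec` is bug-free in the verification sense and defective in the validation sense, which is exactly the situation with a provably correct realization of a crap model.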