cultural reviewer and dabbler in stylistic premonitions

  • 18 Posts
  • 169 Comments
Joined 3 years ago
Cake day: January 17th, 2022

  • big oof.

    We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.

    the author of this substack is woefully misinformed about the state of technology 🤦

    it has, in fact, been possible for several years already for anyone to quickly generate convincing images (not to mention videos) of fictional scenes in real locations with very little effort.

    The photograph—which appeared on the Associated Press feed, I think—was simply taken from a higher vantage point.

    Wow, it keeps getting worse. They’re going full CSI on this photo, drawing a circle around a building on Google Street View where they think the photographer might have been standing, but they aren’t even going to bother trying to confirm their vague memory of having seen AP publish it? wtf?

    Fwiw, I also thought the image looked a little neural-network-y (something about the slightly less-straight-than-they-used-to-be lines of some of the vehicles), so I spent a few seconds doing a reverse image search and found this Snopes page, which convinced me that that particular pileup of cars really did happen: it was also photographed by multiple other people.



  • Funny that the blog calls it a “failed attempt at a backdoor” while neglecting to mention that the grsec post (which it does link to, and acknowledges as the source of the story) had been updated months prior to explicitly refute that characterization:

    5/22/2020 Update: This kind of update should not have been necessary, but due to irresponsible journalists and the nature of social media, it is important to make some things perfectly clear:

    Nowhere did we claim this was anything more than a trivially exploitable vulnerability. It is not a backdoor or an attempted backdoor, the term does not appear elsewhere in this blog at all; any suggestion of the sort was fabricated by irresponsible journalists who did not contact us and do not speak for us.

    There is no chance this code would have passed review and be merged. No one can push or force code upstream.

    This code is not characteristic of the quality of other code contributed upstream by Huawei. Contrary to baseless assertions from some journalists, this is not Huawei’s first attempt at contributing to the kernel, in fact they’ve been a frequent contributor for some time.



  • I was referring to the “This is actually a good sign for self driving” part of their comment.

    The captcha-circumvention arms race has been going on for over two decades, and every new type of captcha has been and will continue to be broken as soon as it’s widely deployed enough that someone is motivated to spend the time on it.

    So, the notion that an academic paper about breaking the current generation of traffic-related captchas (something the captcha-solving industry has been doing for years with a pretty high success rate already) is “good news” for the autonomous-vehicle industry (which has also been able to identify such objects well enough to keep existing and to keep getting more regulatory approval for years now) is…

    [image: Fry “not sure if” meme template, no text]