• 3 Posts
  • 20 Comments
Joined 9 months ago
Cake day: February 29th, 2024

  • I have zero proof of this so take it for the musing it is, but the Internet Archive/Wayback Machine can be used to view articles that have been taken offline (sometimes for political reasons). The IA is a very accessible way to prove that once something is on the Internet, it’s out there forever. I used it in a recent post to show an Israeli newspaper article that argued Israel had a right to not just Palestine, but Lebanon, Syria, Iraq, and other territories. It was taken off the newspaper’s website a few days later, but IA had it.

    This may explain why no one is taking credit, and there are no demands. Or it could very well be another reason, including people just being assholes.
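
    For anyone curious, here is a minimal sketch of checking for an archived copy programmatically, using the Wayback Machine’s public availability API (this assumes Python with the requests library; the URL at the bottom is just a placeholder, not the article from the comment):

    import requests

    def latest_snapshot(url: str) -> str | None:
        """Return the closest archived snapshot URL for `url`, or None if there isn't one."""
        resp = requests.get(
            "https://archive.org/wayback/available",  # public availability endpoint
            params={"url": url},
            timeout=10,
        )
        resp.raise_for_status()
        snap = resp.json().get("archived_snapshots", {}).get("closest")
        return snap["url"] if snap and snap.get("available") else None

    # Placeholder example, not the article mentioned above:
    print(latest_snapshot("example.com"))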









  • To people who aren’t sure if this should be illegal or what the big deal is: according to Harvard clinical psychiatrist and instructor Dr. Alok Kanojia (aka Dr. K from HealthyGamerGG), once non-consensual pornography (which deepfakes are classified as) is made public, over half of the victims will have the urge to kill themselves. There’s also an extreme risk of depression, anger, anxiety, and so on. The analogy given is that it’s like watching video the next day of yourself having sex without consent, as if you’d been drugged.

    I’ll admit I used to look at celeb deepfakes, but once I saw that video I stopped immediately and now avoid it as much as I possibly can. I believe porn can be done correctly, with participant protection and respect. When it comes to deepfakes and revenge porn, though, that statistic about suicidal ideation puts them outside anything healthy or ethical. Obviously I can’t make that decision for others or purge the internet, but the regular and extreme harm done to the (what I now know are) victims of non-consensual porn makes it personally immoral for me. Not because of religion or society, but because I want my entertainment to be at minimum consensual, and hopefully fun and exciting, not something that kills people or ruins their happiness.

    I get that people say this is the new normal, but it’s already resulted in trauma and will almost certainly continue to do so. It may even get worse as the deepfakes become more realistic.





  • Israel is the type of control-heavy, far-right state other dictators wish they could govern, and it’s made possible by Western money and technology (I was going to name just the US, but my country of Canada, among others, is not blameless either). This news also sucks because there’s no way that tech stays only in Israel. Citizens of the world had better brace for convictions via AI facial recognition.

    “Our computer model was able to reconstruct this image of the defendant nearly perfectly. It got the hands wrong and one eye is off-center, but otherwise that’s clearly them committing the crime.”


  • Reddit doesn’t function like a real business (most of the work is done by unpaid volunteers: users and especially mods). There’s no genuine site-wide code of ethics beyond what will actually get them criminal charges. The written rules don’t matter - many moderators are unpaid bullies who permaban if their feelings are hurt and ignore questionable content they agree with. That system of banning users over their opinions kills discussion of “unapproved” views and sorts people into forums where their favorite opinions (and often outright hatreds) are popular. Loathe a particular race, gender, political ideology, etc.? Just find a subreddit where the mods agree and you’ll be fine saying some truly terrible stuff. Read the bloodthirsty posts on r/worldnews and tell me whether the site-wide rules against promoting violence or racism apply. For these reasons and more, I don’t think anyone should be buying into their IPO, because they aren’t a reliable business.

    I don’t know if Lemmy is different because I’ve been here for less than a month, but at least here it feels like you can hold different opinions and the worst that happens is you eat downvotes. Plus, in my (limited) experience, a lot of the really unethical takes get checked pretty hard by the users, which doesn’t happen when the only other voices are basically guaranteed to agree with you (à la most of Reddit).

    The rest of this is just my Reddit survivor tale, so if you don’t care, stop here. I got invited to the IPO in the same week I got a 3-day site-wide ban after appealing a subreddit permaban for a fairly popular comment arguing that the US should stop funding Israel and give the money to Ukraine instead (on a post about how the US is having trouble finding money for Ukraine). In those words; no hate speech, racism, etc. When I asked why I was banned, the only communication back was a four-word insult. I’m not usually a conspiracy theorist, but it sure felt like I was being deliberately censored/punished for a high-ish-profile “dangerous” anti-Israeli opinion. That may not be the case, but it was my first site-wide ban ever, for a comment that broke no written rules.

    My Reddit account is 13 years old, and in 2023 I think I made about 100k karma, primarily with comments about history and education, and in one case a post about how awesome sperm whales are. My experience mirrors what I’ve read happens to others closely enough that Reddit has lost my participation (I’ve posted only twice in the last 3 months, down from a few times daily) and my faith. I only go back to check on specialist communities (video game tips, etc.) and almost never participate anymore. Frankly, I hope it either changes to allow for discussion or dies.



  • It’s interesting reading quotes from that article like: “If you can’t verify what someone else has said at some other point, you’re just trusting to blind faith for artefacts that you can no longer read yourself.” and “After you’ve been dead for 100 years, are people going to be able to get access to the things you’ve worked on?”

    It reminds me of problems the US military is having with refitting/upgrading old ICBMs. From the 2021 article “Minuteman III Missiles Are Too Old to Upgrade Anymore, STRATCOM Chief Says”: “Where the drawings do exist, ‘they’re like six generations behind the industry standard,’ he said, adding that there are also no technicians who fully understand them. ‘They’re not alive anymore.’”

    It sounds like the danger is that we’ll be able to access the science (or just trust that it’s true), but in some cases we’ll be unable to retrace our steps.


  • GrymEdm@lemmy.world OP to Technology@lemmy.world · Review of new OpenAI Sora videos.
    9 months ago

    It’s 100% going to change the way art and film are created. Right now OpenAI has restricted access to Sora, but that won’t last. I wonder if being a trusted reviewer will become more important than creation, simply because so much new media will be available on a basically daily basis. People will want to know what’s worth watching when presented with a library of hundreds of “AI home movies”.



  • GrymEdm@lemmy.world OP to Technology@lemmy.world · Generative A.I - We Aren’t Ready.
    9 months ago

    I agree. I’m talking about how quickly we’re going to have strategies in place to deal with it, not how quickly we’ll have it all figured out. My guess is we have at best a year before it’s a huge issue, and I agree with your take that telling human from AI content is going to be an ongoing thing. Perhaps until AI gets so good it ceases to matter as much, because it will be functionally the same.


  • GrymEdm@lemmy.world OP to Technology@lemmy.world · Generative A.I - We Aren’t Ready.
    9 months ago

    I 100% agree the genie is out of the bottle. People who want to walk back this change are not dealing with reality. AI and robotics are so valuable that I very much doubt there’s even any point in talking about slowing them down. All that’s left now is to figure out how to use the good and deal with the bad - likely on a timeline of months to maybe one or two years.



  • GrymEdm@lemmy.world OP to Technology@lemmy.world · Generative A.I - We Aren’t Ready.
    9 months ago

    This is Kyle Hill’s video on the predicted impact of AI-generated content on the internet, especially as it becomes more difficult to tell machine from human in text and video. He relays that experts say huge portions of online content will be AI-generated within a year. What do you guys think? Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc.?