  • Yeah, I think you could be right there, actually. My instinct from the start has been that it would prevent the grieving process from completing properly. There’s a thing called the gestalt cycle of experience: a normal, natural mechanism for a person going through a new experience, whether it’s good or bad. A lot of unhealthy behaviour patterns stem from a part of that cycle being interrupted - you need to go through the cycle for everything that happens in your life, reaching closure so that you’re ready for the next experience to begin (that’s the most basic explanation), and when that doesn’t happen properly, it creates unhealthy patterns that influence everything that happens after that.

    Now I suppose, theoretically, there’s a possibility that being able to talk to an AI replication of a loved one might give someone a chance to say things they couldn’t say before the person died, which could aid in gaining closure… but we already have methods for doing that, like talking to a photo of them or to their grave, or writing them a letter, etc. Because the AI creates the sense of the person still being “there”, it seems more likely to prevent closure - because that concrete ending is blurred.

    Also, your username seems really fitting for this conversation. :)


  • Given the husband is likely going to die in a few weeks, and the wife is likely already grieving for the man she is shortly going to lose, I think that still places both of them in the “vulnerable” category, and the owner of this technology approached them while they were in this vulnerable state. So yes, I have concerns, and the owner allegedly being a friend of the family (which may just mean they were the first vulnerable couple he had easy access to for experimentation) doesn’t change the fact that there are valid concerns about the exploitation of grief.

    With the way AI techbros have been behaving so far, I’m not willing to give any of them the benefit of the doubt about claims of wanting to help rather than make money - such as using a vulnerable couple to experiment on while making a “proof of concept” that can be used to sell this to other vulnerable people.



  • “Think of how many family recipes could be preserved. Think of the stories that you can be retold in 10 years. Think of the little things that you’d easily forget as time passes.”

    An AI isn’t going to magically know these things, because these aren’t AIs based on brain scans preserving the person’s entire mind and memories. They can only learn from the data they’re given. And fortunately, there’s a much cheaper way for someone to preserve family recipes and other memories that their loved ones would like to hold onto: write them down, or record a video. No AI needed.


  • Sure, you should be free to make one. But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!), there are valid questions about whether that will cause them harm rather than help - and grieving people do not always make the most rational decisions. They can very easily be convinced that interacting with AI-you would be good for them, when it actually prolongs their grief and makes them feel worse. Grieving people are vulnerable, and I don’t think AI companies should be free to prey on the vulnerable, which is a very, very realistic outcome of this technology. Because that is what companies do.

    So I think you need to ask yourself not whether you should have the right to make an AI version of yourself for those who survive your death… but whether you’re comfortable with the very likely outcome that an abusive company will use their memories of you to exploit their grief and prolong their suffering. Do you want to do that to people you care about?


  • Just gonna say that I agree with you on this. Humans have evolved over millions of years to respond emotionally to their environment. There’s certainly evidence that many of the mental health problems we see today, particularly at the scale we see them, are in part due to the fact that we evolved for a way of life very different from our present lifestyles. And that’s not about living in cities rather than caves, but more to do with the amount of work we do each day, the availability and accessibility of essential resources, the sense of community and connectedness within small social groups, and so on.

    We know that death has been a constant of our existence for as long as life has existed, so it logically follows that dealing with death and grief is something we’ve evolved to do. Namely, we evolved to grieve for a member of our “tribe”, and then move on. We can’t let go immediately, because we need to be able to maintain relationships across brief separations, but holding on forever to a relationship that can never be continued would make any creature unable to focus on the needs of the present and future.

    AI simulacra of the deceased give the illusion of maintaining a relationship with them. It’s entirely possible that this will artificially prolong the grieving process, when the natural cycle of grieving is to eventually reach a point of acceptance. I don’t know for sure that’s what would happen… but I would want to be absolutely sure it’s not going to cause harm before unleashing this AI on the general public, particularly on vulnerable people (which grieving people are).

    Although I say that about all AI, so maybe I’m biased by the ridiculous ideology that new technologies should be tested and regulated before vulnerable people are experimented on.


  • That would probably work for hobbyists, but I have my doubts that professionals, who rely on Adobe products for their livelihood, would use unsuitable software for years in the hope that volunteer devs will eventually add the features they need. In the other post about this topic, someone commented that GIMP’s devs are refusing to fix problems that repel new users, which is not going to encourage Adobe users to make the switch. GIMP still doesn’t have fully functioning, reliable non-destructive editing, which is 100% essential for anyone beholden to a boss or client who is going to change their mind a couple of times between now and next month.

    Adobe is big because of their userbase, but their userbase is big because they make genuinely powerful software that fits the needs of professionals. The free options (and the cheap proprietary options) are not there yet, and probably never will be. Professionals aren’t going to switch until the features they need are there (because seriously, why would anyone use a tool for their job that doesn’t actually allow them to do their job properly?), but the features aren’t going to be added until the professionals switch over. Catch-22.


  • It’s been a while since I used Krita, so it’s hard to compare Krita from 3 or 4 years ago with Photoshop 2023, but it was okay. Better than GIMP, but unless there have been some major changes, it doesn’t have anywhere near the versatility in tools and filters that Photoshop has.

    This feels like the key difference between Photoshop and the others. There’s an awful lot of stuff that I previously had to do manually, sometimes over several hours, that Photoshop can do in seconds, either because there’s a tool or filter for it, or sometimes just because Photoshop is so much more responsive. This is really hard to quantify in an objective way, far more so than pointing out whether a feature is present or absent, but… I use an art tablet, and Photoshop just responds to the pen better.

    So it’s not really that it’s impossible to do amazing work with the free apps; it’ll just take a lot longer. I liked your analogy in your other comment, about the e-bike vs the pickup truck: you definitely can move that half a ton of crushed stone with an e-bike, but it’ll be quicker and less work with a pickup truck.



  • I have to agree. I’ve used a great many software packages over the years, but having been given an Adobe Creative Cloud subscription by my university, as several of Adobe’s programs are required for the degree I’m doing, I’ve been very annoyed to discover that the alternatives really aren’t on the same level. They work, sure. You can get the job done with them. But I am genuinely finding Photoshop to be significantly more powerful than everything else I’ve used. And it’s really annoying because I’ve never liked Adobe as a company.



  • “When AI can sit in a large chair and make money off the backs of others all day”

    Arguably this is the only thing AI can do. Would AI even exist if not for the huge datasets derived from other people’s hard work? All the money AI will generate is based exclusively off the backs of others.



  • UK citizens can also opt out, as the Data Protection Act 2018 is the UK’s implementation of GDPR and confers all of the same rights.

    In my opt-out, I also reminded them of their obligation to delete data when consent is refused: since I have denied consent, any of my data that has been used must be scrubbed from the training sets and from any AI outputs derived from its unauthorised use.

    Sadly, having an Instagram account is unavoidable for me. Networking is an important part of many creatives’ careers, and if the bulk of your connections are on Instagram, you have to be there too.


  • Well, let’s see about the evidence, shall we? OpenAI scraped a vast quantity of content from the internet without consent from or compensation to the people who created it, and leaving aside any conversation about whether copyright should exist, if your company cannot make a profit without relying on labour you haven’t paid for, that’s exploitation.

    And then, even though it was obvious from the very beginning that AI could very easily be used for nefarious purposes, they released it to the general public with guardrails that were incredibly flimsy and easily circumvented.

    This is a technology that needed to be handled with care. Instead, its lead proponents are of the “move fast and break things” mentality, when the list of things that can be broken is vast and includes millions of very real human beings.

    You know who else thinks humans are basically disposable as long as he gets what he wants? Putin.

    So yeah, the people running OpenAI and all the other AI companies are no better than Putin. None of them care who gets hurt as long as they get what they want.