• Tony Bark@pawb.social · 20 days ago

    They’re throwing billions upon billions into a technology with extremely limited use cases, a novelty at best. My god, even drones fared better in the long run.

    • 0x01@lemmy.ml · 20 days ago

      Nah, generative AI is remarkably useful for software development. I’ve written dozens of product updates with tools like Claude Code and Cursor; dismissing it as a novelty is reductive and straight-up incorrect.

      • neon_nova@lemmy.dbzer0.com · 20 days ago

        As someone starting a small business, it has helped tremendously. I use a lot of image generation.

        If that didn’t exist, I’d either have to use crappy-looking clip art or pay a designer, which I literally can’t afford.

        Now my projects actually look good. They make my first projects look like a high schooler did them at the last minute.

        There are many other uses, but I rely on it daily. My business can exist without it, but the quality of my product is significantly better and the cost to create it is much lower.

            • Ledericas@lemm.ee · 19 days ago

              You’re confusing AI art with actual art, like rendered illustrations and paintings.

              • desktop_user@lemmy.blahaj.zone · 19 days ago

                It’s as much “real” art as photography: making a relatively finite number of decisions and finding something that looks “good”.

                • snooggums@lemmy.world · 18 days ago

                  Really good photography is actually pretty hard and the best photographers are in high demand.

                  It involves a ton of camera settings, and frequently post-processing to balance out anything that wasn’t perfect during the shoot. Plus there is a ton of blocking and lighting, and if you’re doing portraits or other planned shoots there is a lot of directing involved in getting the subjects into the right positions, showing the right emotions, etc. Even shooting nature requires a massive amount of planning and work beyond a few camera settings.

                  Hell, even stock photos tend to be a lot of work to set up!

                  If you think that someone taking an in-focus photo with adequate lighting and posting it to Instagram is the same as professional photography, then you have no idea what is involved.

  • LostXOR@fedia.io · 20 days ago

    I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretense of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete, though.)

      • morgunkorn@discuss.tchncs.de · 20 days ago

        trust me bro, we’re almost there, we just need another data center and a few billions, it’s coming i promise, we are testing incredible things internally, can’t wait to show you!

          • LostXOR@fedia.io · 19 days ago

            Around a year ago I bet a friend $100 that we won’t have AGI by 2029, and I’d make the same bet today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that’s still dumber than the average human. In comparison, humans are “trained” on maybe ten thousand “tokens” and ten megajoules of energy a day for a decade or two, and draw only a couple dozen watts for even the most complex thinking.
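Those back-of-envelope figures are easy to sanity-check with a quick script. The daily intake, training horizon, and brain-power numbers below are rough assumptions taken from the comment, not measurements:

```python
# Rough sanity check of the human "training budget" figures above.
# All numbers are order-of-magnitude assumptions, not measurements.

J_PER_KWH = 3.6e6       # joules per kilowatt-hour

daily_energy_j = 10e6   # ~10 MJ of food energy per day (~2400 kcal)
years = 20              # "a decade or two"
brain_power_w = 20      # "a couple dozen watts" of steady draw

lifetime_kwh = daily_energy_j * 365 * years / J_PER_KWH
brain_kwh_per_day = brain_power_w * 24 * 3600 / J_PER_KWH

print(f"lifetime food energy: {lifetime_kwh:,.0f} kWh")
print(f"brain energy per day: {brain_kwh_per_day:.2f} kWh")
```

Under those assumptions, two decades of food energy comes to roughly twenty thousand kWh, while the brain alone runs on about half a kWh per day.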

            • pixxelkick@lemmy.world · 19 days ago

              Humans are “trained” with maybe ten thousand “tokens” per day

              Uhhh… you may wanna rerun those numbers.

              It’s waaaaaaaay more than that lol.

              and take only a couple dozen watts for even the most complex thinking

              Mate’s literally got smoke coming out of his ears lol.

              A single Wh is 860 calories (0.86 dietary kcal)…

              I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.

              1. Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.

              2. A human, by my estimate, has burned about 13,000 kWh by the time they reach adulthood. Maybe more depending on activity levels.

              3. While yes, an AI costs substantially more kWh, training is also done in weeks, so it’s obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months it’d probably require way, WAY more than 13,000 kWh during the process for similar reasons.

              4. Once trained, a single model can be duplicated infinitely. So the fairer comparison is how much millions of people cost to raise versus a single model to be trained. Because once trained, you can now make millions of copies of it…

              5. Operating costs keep going down and down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty-times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.
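The unit conversions behind points 2 and 3 are easy to get wrong by a factor of 1,000 (Wh vs kWh, small calories vs dietary kcal), so here is a minimal check; the 2,000 kcal/day and 18-year figures are illustrative assumptions, not the commenter’s exact inputs:

```python
# Unit check for the energy figures in this thread.
# 1 Wh = 3600 J; 1 small calorie = 4.184 J; 1 dietary kcal = 4184 J.

J_PER_WH = 3600.0
J_PER_CAL = 4.184

cal_per_wh = J_PER_WH / J_PER_CAL    # small calories per Wh
kcal_per_wh = cal_per_wh / 1000      # dietary kcal per Wh

# ~2000 dietary kcal/day over 18 years, expressed in kWh (assumed figures)
kwh_to_adulthood = 2000 * 1000 * J_PER_CAL * 365 * 18 / (J_PER_WH * 1000)

print(f"1 Wh = {cal_per_wh:.0f} cal = {kcal_per_wh:.2f} kcal")
print(f"energy to adulthood: {kwh_to_adulthood:,.0f} kWh")
```

So a single Wh is only about 0.86 dietary kcal, and cumulative food energy to adulthood lands in the mid-five-figures of kWh, the same ballpark as the ~13,000 kWh estimate above.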