• curiousaur@reddthat.com · 7 days ago

    They’re super conservative. I rode in one just once. There was a parked ambulance about 30 feet down a side street with its lights on while paramedics helped someone. The car wouldn’t drive forward through the intersection. It just detected the lights and froze. I had to get out and walk. If we all drove that conservatively we’d also have fewer accidents, but we’d congest the city to undrivability.

    • poopkins@lemmy.world · 7 days ago

      Back in February, I took a Waymo for the first time and was at first amazed. But then, in the middle of an empty four-lane road, it abruptly slammed the brakes, twice. There was literally nothing in the road: no cars, and because it was raining, no pedestrians in sight.

      If I had been holding a drink, it would have spelled disaster.

      After the second abrupt stop, I was bracing for more for the remainder of the ride, even though the car generally goes quite slow most of the time. It also had a strange habit of drifting between lanes through intersections and using the turn indicators like it had no idea what it was doing, alternating from left to right.

      Honestly it felt like being in the car with a first time driver.

      • calcopiritus@lemmy.world · 6 days ago

        Maybe the reason they crash less is that everyone around them has to be extremely careful with these cars. Just like in my country, where we put a big L on the rear of the car for first-year drivers.

    • LifeInMultipleChoice@lemmy.world · 7 days ago

      How long ago was that? Last year I took a couple near Phoenix and they did great, lights or no. The hardest part was dropping me off at the front of a hotel, as people were in and out and cars were everywhere. It still didn’t have issues; it just slowed down to 3 mph when it had 15 yards left or so.

  • Chaotic Entropy@feddit.uk · 7 days ago

    Considering the sort of driving issues and code violations I see on a daily basis, the standards for human drivers need raising. The issue is more lax humans than it is amazing robots.

    • jsomae@lemmy.ml · 7 days ago

      It’s hard to change humans. It’s easy to roll out a firmware update.

      • frezik@midwest.social · 6 days ago

        Raising the standards would result in 20-50% of the worst drivers being forced to do something else. If our infrastructure wasn’t so car-centric, that would be perfectly fine.

    • littlebrother@lemm.ee · 6 days ago

      :Looks at the entire Midwest and southern USA:

      The bar is so low in these regions you need diamond drill bits to go lower.

        • _synack@sh.itjust.works · 6 days ago

          I have spent many years in both the Midwest and the South.

          In some areas of the South, people drive extremely aggressively and there are lots of issues with compliance with various traffic laws, but it is usually not difficult to get over if you need to. People will let you in. The zipper merge is a well-honed machine; almost everyone uses it and obeys it.

          In the Midwest, drivers tend to be more docile, cautious, and lawful overall, but they have an extreme sense of entitlement over their place in line. “How dare that person use that completely empty lane to get ahead of me! Can they not see there is a line!” They will absolutely not let you in. It does not matter that the zipper merge would improve traffic flow. It just is not going to happen.

    • Terrasque@infosec.pub · 7 days ago

      “You don’t have to be faster than the bear, you just have to be faster than the other guy”

  • keeyanxusss@lemmy.cafe · 6 days ago

    Ah yes, I’m supposed to believe an Ars Technica writer and a bunch of papers written by Waymo itself, as opposed to actual peer-reviewed studies by actual independent experts.

  • AA5B@lemmy.world · 7 days ago

    As a techno-optimist, I always expected self-driving to quickly become safer than human, at least in relatively controlled situations. However I’m at least as much a pessimist of human nature and the legal system.

    Given self-driving vehicles that are demonstrably safer than humans, but not perfect, how do we get past humans taking advantage of them, and past the massive liability for the remaining accidents?

  • blazeknave@lemmy.world · 7 days ago

    I used to hate them for being slow and annoying. Now they drive like us and I hate them for being dicks. This morning, one of them made an insane move that only the worst Audi drivers in my area make: a massive left over a solid yellow, with no stop sign, with me coming right at it before it had even begun accelerating into the intersection.

  • Goretantath@lemm.ee · 7 days ago

    Thing is, the end goal after sorting out all the bugs in the AI is no human-driven cars, since having both will only lead to crashes due to the AI being unable to predict a human. All the AI cars would be linked to a central system to communicate with each other and always know where each other are. Then all we have to do is make sure people only use the crosswalks, and traffic accidents will be solely due to idiots.

    • Prok@lemmy.world · 7 days ago

      I doubt a central system would ever be viable, but they would certainly communicate with other nearby cars using more than just blinky lights.

  • scripthook@lemmy.world · 7 days ago

    I live in Phoenix, Arizona, and these are all around. Honestly, I feel like in the future everyone will have Waymo-type services and no one will own cars or even need to learn how to drive one. Who needs to worry about car repairs, insurance, etc.?

    • Neondragon25@lemm.ee · 7 days ago

      I’ve ridden in them a few times, even fallen asleep. I trust a Waymo more than most human drivers. The best test of its capabilities I saw was when school let out and the side road was covered in kids, parents, and cars stopped in random spots waiting for people. It stayed in the “lane”, despite there being no lane lines, and calmly navigated forward as people gave it space. I was in the car the whole time. There are still some issues to be ironed out, but ultimately I don’t think I have ever had a bad riding experience.

    • jsomae@lemmy.ml · 7 days ago

      They accounted for that in this report. I believe you are a troll.

        • jsomae@lemmy.ml · 7 days ago

          Okay, I’m sorry. Let me clarify how it’s easy to account for the kind of bias you’re talking about. Simply divide by the population count. So, they divided the waymo crash count by the number of waymos, and the human crash count by the number of humans. This gives the waymo crash rate and the human crash rate. (In reality, it’s a bit more complicated, since the human crash rate is calculated independently each year.)
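          A minimal sketch of the normalization described above (the crash counts and mileage here are illustrative placeholders, not figures from the report; note that in practice such studies normalize by miles driven rather than by raw vehicle count):

```python
# Sketch of the rate normalization described above: raw crash counts
# only become comparable once divided by exposure.
# All numbers below are illustrative placeholders, not report figures.

def rate_per_million_miles(crashes: int, miles: float) -> float:
    """Crashes per million miles of driving."""
    return crashes / miles * 1_000_000

waymo_rate = rate_per_million_miles(crashes=60, miles=50_000_000)
human_rate = rate_per_million_miles(crashes=240, miles=50_000_000)

print(f"Waymo: {waymo_rate:.2f} crashes per million miles")
print(f"Human: {human_rate:.2f} crashes per million miles")
```

          Once both counts are on the same per-mile footing, comparing the two rates directly is meaningful.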

            • jsomae@lemmy.ml · 7 days ago

              Ah. Sorry. There are some truly braindead takes on autonomous vehicles so I couldn’t tell that apart from what some people have said earnestly. My bad. 👍

              • I do think it would be much safer with zero human drivers and only autonomous vehicles on the road, for sure. But I also think it would be impractical to replace everything all at once. Even the best-programmed system would eventually encounter a human driver who defies all previously known data and freaks out the computer.

                • jsomae@lemmy.ml · 7 days ago

                  I don’t know much about how autonomous vehicles work. As for humans doing unusual things: assuming the human driver only steers the wheel and controls the gas and brakes, it should be possible with existing technology to avoid crashing into them at least as well as any human can. That leaves the really unusual things, like a human hopping out of their car in the middle of an intersection, as the high-hanging fruit to model. I would imagine that for most of these really strange cases, even if the autonomous vehicle can’t understand what’s happening, it can at least realize that something strange is happening and pull over.

                  Obviously there will be truly unusual situations that cause fatal collisions. So long as that is at a lower rate, then what’s the safety concern?

                  Safety is a red herring IMO, as better code can fix it. There are much worse potential problems that autonomous vehicles will cause than rare collisions. NotJustBikes has a lot of points I’d never considered before in the second half of this video. (The first half, though, I found aggravating; it’s just about solvable safety risks.)

  • RobotToaster@mander.xyz · 7 days ago

    That’s what happens when you have a reasonable sensor suite with LIDAR, instead of trying to rely entirely on cameras like Tesla does.

  • Curious Canid@lemmy.ca · 7 days ago

    This would be more impressive if Waymos were fully self-driving. They aren’t. They depend on remote “navigators” to make many of their most critical decisions. Those “navigators” may or may not be directly controlling the car, but things do not work without them.

    When we have automated cars that do not actually rely on human beings, we will have something to talk about.

    It’s also worth noting that the human “navigators” are almost always poorly paid workers in third-world countries. The system will only scale if there are enough desperate poor people. Otherwise it quickly becomes too expensive.

    • Usernameblankface@lemmy.world · 7 days ago

      Has anyone looked into the places where the navigators work to see how it goes? Has a navigator shared their experience on the web somewhere?

      I am very curious as to what they are asked to do, for how many cars, and for how much money.

    • Flic@mstdn.social · 7 days ago

      @Curious_Canid @vegeta this is the case for the Amazon “just walk out” shops as well. Like Waymo they frame it as the humans “just doing the hard part” but who knows what “annotating” means in this context? And notably it’s clearly more expensive to run than they thought as they’ve decided to do Dash Carts instead which looks like it’s basically a portable self-service checkout. The customer does the checking. https://www.theverge.com/2024/4/17/24133029/amazon-just-walk-out-cashierless-ai-india

      • SippyCup@feddit.nl · 7 days ago

        Back when I was a fabricator I made some of the critical components used in Amazon stores. Amazon was incredibly particular about every little detail, even on parts that didn’t call for tight tolerancing in any conceivable way. They, on several occasions, sent us one bad set of prints after another. Which we could only discover after completing a run of parts. We’re talking 20-30 thousand units that ended up being scrapped because of their shitty prints. Millions of dollars set on fire, basically.

        They became such a huge pain in the ass to work with we eliminated every single SKU they ordered from us.

        • lud@lemm.ee · 7 days ago

          Ordering components with unnecessarily small tolerances is stupid and a waste of money but of course they will complain if you can’t make the parts to the specifications.

          Why did you even take the order in the first place if you can’t manage to produce them to spec?

          • Revan343@lemmy.ca · 7 days ago

            of course they will complain if you can’t make the parts to the specifications.

            Why did you even take the order in the first place if you can’t manage to produce them to spec?

            Where did they say anything about not being able to make the parts to spec?

          • ubergeek@lemmy.today · 7 days ago

            Why did you even take the order in the first place if you can’t manage to produce them to spec?

            They were made to spec, but the specs were wrong.

    • Krauerking@lemy.lol · 7 days ago

      Yeah we managed to just put the slave workers behind a further layer of obfuscation. Not just relegated to their own quarters or part of town but to a different city altogether or even continent.

      Tech dreams have become about a complete lack of humanity.

      • Curious Canid@lemmy.ca · 7 days ago

        I saw an article recently, I should remember where, about how modern “tech” seems to be focused on how to insert a profit-taking element between two existing components of a system that already works just fine without it.

    • Yoga@lemmy.ca · 7 days ago

      The system will only scale if there are enough desperate poor people. Otherwise it quickly become too expensive.

      You can also get MMORPG players to do it for pennies per hour for in-game currency or membership. RuneScape players would gladly control 5 ‘autonomous’ cars if it meant that they could level up their farming level for free.

      The game is basically designed to be an incredibly time consuming skinner box that takes minimal skill and effort in order to maximize membership fees.

      • Gonzako@lemmy.world · 7 days ago

        “Damn, I’m sorry my car killed your kids. The Carscape person didn’t get their drop”

        • Yoga@lemmy.ca · 7 days ago

          The human operators are there for when the AI gets softlocked in a situation where it doesn’t know what to do and just sits there, not for regular driving.

      • Usernameblankface@lemmy.world · 7 days ago

        Packaging the job as a video-game side quest is genius. Make it so the gamer has to do several simulated runs before they connect to an actual car, and give expensive in-game consequences for messing it up.

        • Yoga@lemmy.ca · 7 days ago

          It doesn’t even need to be a side quest, just a second screen activity lol

          They’ll do it for pennies an hour for 12 hours a day.

    • Domi@lemmy.secnd.me · 7 days ago

      I thought the human operators only step in when the emergency button is pressed or when the car gets stuck?

      Do they actually get driven by people in normal operation?

      • Curious Canid@lemmy.ca · 7 days ago

        The claim is that the remote operators do not actually drive the cars. However, they do routinely “assist” the system, not just step in when there’s an emergency.

        • xthexder@l.sw0.com · 7 days ago

          I think they’ve got one person watching dozens of cars, though; it’s not one per car like it would be with human drivers.

  • Buffalox@lemmy.world · 7 days ago

    Because they are driving under near-ideal conditions, in areas that are completely mapped out, guided away from roadworks, and steered clear of “confusing” crossings and other traffic situations, like unmarked roads, that humans deal with routinely without problem.
    And in a situation they can’t handle, they just stop, call for help, and wait for a human driver to get them going again, regardless of whether they are blocking traffic.

    I’m not blaming Waymo for doing it as safely as they can; that’s great IMO.
    But don’t make it sound like they drive better than humans yet. There is still some way to go.

    What’s really obnoxious is that Elon Musk claimed this would be 100% ready by 2017. Full self driving, across America, day and night, safer than a human. I have zero expectation that Tesla RoboTaxi will arrive this summer as promised.

    • notsoshaihulud@lemmy.world · 7 days ago

      I have zero expectation that Tesla RoboTaxi will arrive this summer as promised.

      RoboTaxis will also have to “navigate” the Fashla hate. Not many will be eager to risk their lives with them

    • scratchee@feddit.uk · 7 days ago

      You’re not wrong, but arguably that doesn’t invalidate the point, they do drive better than humans because they’re so much better at judging their own limitations.

      If human drivers refused to enter dangerous intersections, stopped every time things started to look dangerous, and handed off to a specialist to handle problems, driving might not produce the mountain of corpses it does today.

      That said, you’re of course correct that they still have a long way to go in technical driving ability and handling of adverse conditions, but it’s interesting to consider that simple policy effectively enforced is enough to cancel out all the advantages that human drivers currently still have.

      • Buffalox@lemmy.world · 7 days ago

        You are completely ignoring the under ideal circumstances part.
        They can’t drive at night AFAIK, they can’t drive outside the area that is meticulously mapped out.
        And even then, they often require human intervention.

        If you asked a professional driver to do the exact same thing, I’m pretty sure that driver would have way better accident record than average humans too.

        It seems to me you are missing the point I tried to make, and drawing a false conclusion by comparing apples to oranges.

        • DesertCreosote@lemm.ee · 7 days ago

          Waymo can absolutely drive at night, I’ve seen them do it. They rely heavily on LIDAR, so the time of day makes no difference to them.

          And apparently they only disengage and need human assistance every 17,000 miles, on average. Contrast that to something like Tesla’s “Full Self Driving” (ignoring the controversy over whether it counts or not), where the most generous numbers I could find for it are a disengagement every 71 city miles, on average, or every 245 city miles for a “critical disengagement.”

          You are correct in that Waymo is heavily geofenced, and that’s pretty annoying sometimes. I tried to ride one in Phoenix last year, but couldn’t get it to pick me up from the park I was visiting because I was just on the edge of their area. I suspect they would likely do fine if they went outside of their zones, but they really want to make sure they’re going to be successful so they’re deliberately slow-rolling where the service is available.

          • Buffalox@lemmy.world · 7 days ago

            Waymo can absolutely drive at night

            True, I just checked it; my information was outdated.

        • scratchee@feddit.uk · 6 days ago

          I specifically didn’t ignore that. My entire point was that a driver that refuses to drive under anything except “ideal circumstances” is still a safer driver.

          I am aware that if we banned driving at night to get the same benefit for everyone, it wouldn’t go very well, but that doesn’t really change the safety, only the practicality.

      • Fredthefishlord@lemmy.blahaj.zone · 7 days ago

        driving might not produce the mountain of corpses it does today.

        And people wouldn’t be able to drive anywhere. Which could very well be a good thing, but still

    • LovableSidekick@lemmy.world · 7 days ago

      I think “near ideal conditions” is a huge exaggeration. The situations Waymo avoids are a small fraction of the total mileage driven by Waymo vehicles or the humans they’re being compared with. It’s like you’re saying a football team’s stats are grossly wrong if they don’t include punt returns.

  • theluddite@lemmy.ml · 7 days ago

    I am once again begging journalists to be more critical of tech companies.

    But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.

    […] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.
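    As a quick sanity check on the “roughly 70 lifetimes” figure in the passage above, here is the arithmetic with typical US estimates plugged in (the annual mileage and career length are my assumptions, not numbers from the article):

```python
# Back-of-the-envelope check of "50 million miles ≈ 70 lifetimes
# behind the wheel". Both inputs are assumed typical-US values.
MILES_PER_YEAR = 13_500   # rough average annual mileage per driver
DRIVING_YEARS = 55        # rough length of a driving career

lifetime_miles = MILES_PER_YEAR * DRIVING_YEARS   # 742,500 miles
lifetimes = 50_000_000 / lifetime_miles
print(f"~{lifetimes:.0f} driving lifetimes in 50M miles")
```

    With these assumptions the result lands in the high 60s, which is consistent with the article’s “roughly 70 lifetimes.”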

    This is the wrong comparison. These are taxis, which means they’re driving taxi miles. They should be compared to taxis, not to normal people, who drive almost exclusively during their commutes (probably the most dangerous time to drive, since it’s precisely when everyone else is also driving).

    We also need to know how often humans intervene in the supposedly autonomous operations. The latest data we have on this, which was leaked a while back, is that Cruise (a different company) cars are actually less autonomous than taxis, requiring more than one employee per car.

    edit: The leaked data on human interventions was from Cruise, not Waymo. I’m open to self-driving cars being safer than humans, but I don’t believe a fucking word from tech companies until there’s been an independent audit with full access to their facilities and data. So long as we rely on Waymo’s own publishing without knowing how the sausage is made, they can spin their data however they want.

    edit2: Updated to say that journalists should be more critical in general, not just about tech companies.

    • nondescripthandle@lemmy.dbzer0.com · 7 days ago

      Journalists aren’t even critical of police press releases anymore; most simply print whatever they’re told, verbatim. It may as well just be advertising.

      • theluddite@lemmy.ml · 7 days ago

        I agree with you so strongly that I went ahead and updated my comment. The problem is general and out of control. Orwell said it best: “Journalism is printing something that someone does not want printed. Everything else is public relations.”

      • Komodo Rodeo@lemmy.world · 7 days ago

        The meat of the true issue right here. Journalism and investigative journalism aren’t just dead; their corpses have been feeding a palm tree like a pod of beached whales for decades. It’s a bizarre state of affairs to read news coverage and come out the other side less informed, without reading literal disinformation. It somehow seems so much worse that they’re not just off-target, but that they don’t even understand why or how they’re fucking it up.

    • William@lemmy.world · 7 days ago

      I was going to say they should only be comparing them under the same driving areas, since I know they aren’t allowed in many areas.

      But you’re right, it’s even tighter than that.

      • theluddite@lemmy.ml · 7 days ago

        These articles frustrate the shit out of me. They accept both the company’s own framing and its selectively-released data at face value. If you get to pick your own framing and selectively release the data that suits you, you can justify anything.

    • Anthony@buc.ci · 7 days ago
      7 days ago

      @theluddite@lemmy.ml @vegeta@lemmy.world
      to amplify the previous point, taps the sign as Joseph Weizenbaum turns over in his grave

      A computer can never be held accountable

      Therefore a computer must never make a management decision

      tl;dr A driverless car cannot possibly be “better” at driving than a human driver. The comparison is a category error and therefore nonsensical; it’s also a distraction from important questions of morality and justice. More below.

      Numerically, it may some day be the case that driverless cars have fewer wrecks than cars driven by people.(1) Even so, it will never be the case that when a driverless car hits and kills a child the moral situation will be the same as when a human driver hits and kills a child. In the former case the liability for the death would be absorbed into a vast system of amoral actors with no individuals standing out as responsible. In effect we’d amortize and therefore minimize death with such a structure, making it sociopathic by nature and thereby adding another dimension of injustice to every community where it’s deployed.(2) Obviously we’ve continually done exactly this kind of thing since the rise of modern technological life, but it’s been sociopathic every time and we all suffer for it despite rampant narratives about “progress” etc.

      It will also never be the case that a driverless car can exercise the judgment humans have to decide whether one risk is more acceptable than another, and then be held to account for the consequences of their choice. This matters.

      Please (re-re-)read Weizenbaum’s book if you don’t understand why I can state these things with such unqualified confidence.

      Basically, we all know damn well that whenever driverless cars show some kind of numerical superiority to human drivers (3) and become widespread, every time one kills, let alone injures, a person no one will be held to account for it. Companies are angling to indemnify themselves from such liability, and even if they accept some of it no one is going to prison on a manslaughter charge if a driverless car kills a person. At that point it’s much more likely to be treated as an unavoidable act of nature no matter how hard the victim’s loved ones reject that framing. How high a body count do our capitalist systems need to register before we all internalize this basic fact of how they operate and stop apologizing for it?

      (1) Pop quiz! Which seedy robber baron has been loudly claiming for decades now that full self driving is only a few years away, and depends on people believing in that fantasy for at least part of his fortune? We should all read Wrong Way by Joanne McNeil to see the more likely trajectory of “driverless” or “self-driving” cars.
      (2) Knowing this, it is irresponsible to put these vehicles on the road, or for people with decision-making power to allow them on the road, until this new form of risk is understood and accepted by the community. Otherwise you’re forcing a community to suffer a new form of risk without consent and without even a mitigation plan, let alone a plan to compensate or otherwise make them whole for their new form of loss.
      (3) Incidentally, quantifying aspects of life and then using the numbers, instead of human judgement, to make decisions was a favorite mission of eugenicists, who stridently pushed statistics as the “right” way to reason to further their eugenic causes. Long before Zuckerberg’s hot or not experiment turned into Facebook, eugenicist Francis Galton was creeping around the neighborhoods of London with a clicker hidden in his pocket counting the “attractive” women in each, to identify “good” and “bad” breeding and inform decisions about who was “deserving” of a good life and who was not. Old habits die hard.

      • theluddite@lemmy.ml · 7 days ago

        Honestly I should just get that slide tattooed to my forehead next to a QR code to Weizenbaum’s book. It’d save me a lot of talking!

      • dogslayeggs@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        7 days ago

        So let me make sure I understand your argument. Because nobody can be held liable for one hypothetical death of a child when an accident happens with a self driving car, we should ban them so that hundreds of real children can be killed instead. Is that what you are saying?

        As far as I know, Waymo has only been involved in one fatality. The Waymo was sitting still at a red light in traffic when a speeding SUV (reportedly traveling at an extreme rate of speed) rammed it from behind into other cars. The SUV then continued into traffic, where it struck more cars, eventually killing someone. That’s the only fatal accident Waymo has been involved in after 50 million miles of driving. But instead of making it safer for children, you would prefer more kids die just so you have someone to blame?

        • Anthony@buc.ci
          link
          fedilink
          arrow-up
          2
          ·
          7 days ago

          @dogslayeggs@lemmy.world

          So let me make sure I understand your argument. Because nobody can be held liable for one hypothetical death of a child when an accident happens with a self driving car, we should ban them so that hundreds of real children can be killed instead. Is that what you are saying?

          No, this strawman is obviously not my argument. It’s curious you’re asking whether you understand, and then opining afterwards, rather than waiting for the clarification you suggest you’re seeking. When someone responds to a no-brainer suggestion, grounded in skepticism but perfectly sensible nevertheless, with a strawman seemingly crafted to discredit it, one has to wonder if that someone is writing in good faith. Are you?

          For anyone who is reading in good faith: we’re clearly not talking about one hypothetical death, since more than one real death involving driverless car technology has already occurred, and there is no doubt there will be more in the future given the nature of conducting a several-ton hunk of metal across public roads at speed.

          It should go without saying that hypothetical auto wreck fatalities occurring prior to the deployment of technology are not the fault of everyone who delayed the deployment of that technology, meaning in particular that these hypothetical deaths do not justify hastening deployment. This is a false conflation, regardless of how many times Marc Andreessen and his apostles preach variations of it.

          Finally “ban”, or any other policy prescription for that matter, appeared nowhere in my post. That’s the invention of this strawman’s author (you can judge for yourself what the purpose of such an invention might be). What I urge is honestly attending to the serious and deadly important moral and justice questions surrounding the deployment of this class of technology before it is fully unleashed on the world, not after. Unless one is so full up with the holy fervor of technoutopianism that one’s rationality has taken leave, this should read as an anodyne and reasonable suggestion.

          • dogslayeggs@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            7 days ago

            I was asking in good faith because the way you talk is not easily comprehensible. I can barely follow whatever argument you are trying to make. I think you are trying to say that we shouldn’t allow them on the road until we have fully decided who is at fault in an accident?

            Also, only one death has occurred so far involving driverless cars: a speeding SUV rammed into a stopped driverless car, then continued on and hit five other cars, killing someone. That’s it. The only death involved a driverless car sitting still, not moving, not doing anything… and it wasn’t even the car that hit the car in which the person died. So I would say that deaths which are the fault of a driverless car are still hypothetical.