I think AI Driver gives a good summary of some of the issues with Mark Rober's showcase of LIDAR:
- Possible bias from Mark Rober's friendship with the LIDAR supplier.
- Not using Tesla's Full Self-Driving (FSD).
- Being so nervous that he accidentally disabled Autopilot, twice.
- A one-sided view of LIDAR, with no examples shown of situations where LIDAR has issues.
- Suggesting a way forward: redo the tests with FSD.
- Mark Rober not giving the test his full attention (fun vs. scientific process).
The friendship with the LIDAR supplier aside (always a problem with these kinds of things),
Yup, LIDAR isn’t a silver bullet for every sensing situation. But it’s a damn sight better than pure cameras. And Musk would have known this if he were a good engineer. But he’s not. He’s a spoiled, rich, apartheid-loving, racist asshole that thinks he’s a good programmer and engineer.
Sadly, I know Tesla has lost a lot of respect due to Elon, especially over the last 4 years.
Engineering is in large part about balancing cost vs. features. Yes, there will be cases where LIDAR is better, but it comes at a high cost. Think about it: what is the break-even point where the 0.1% of edge cases that LIDAR handles better justifies the cost?
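The break-even argument can be made concrete with a back-of-the-envelope calculation. Every number below is hypothetical (the hardware cost, edge-case rate, and trip count are made up for illustration, not actual Tesla or LIDAR-vendor figures):

```python
# Hypothetical break-even sketch: all numbers are illustrative assumptions.
LIDAR_COST_PER_CAR = 1000.0        # assumed added hardware cost (USD)
EDGE_CASE_RATE = 0.001             # the "0.1%" of trips where LIDAR helps
TRIPS_PER_CAR_LIFETIME = 10_000    # assumed trips over the vehicle's life

# The cost of one edge-case failure at which the LIDAR hardware
# pays for itself over the car's lifetime:
break_even_cost_per_incident = LIDAR_COST_PER_CAR / (
    EDGE_CASE_RATE * TRIPS_PER_CAR_LIFETIME
)
print(break_even_cost_per_incident)  # prints 100.0
```

Under these made-up numbers, LIDAR pays for itself as soon as an avoided incident is worth more than $100 — which is why the argument hinges entirely on what you assume an incident costs.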
If the tests Mark showed were run on unsuspecting human drivers, I think many of them would fail. Spotting a dark stationary object in fog or heavy rain is very difficult.
https://youtube.com/watch?v=9KyIWpAevNs
I think you’re underestimating how many situations there are where a LIDAR system will be better than an all-camera system. It’s also a tradeoff in human lives. I’d rather it be slightly more expensive up front and not have kids die than cheap and kill kids.
And yes, self-driving should be better than humans, because humans suck as drivers. We have two cameras in one location in the car. Self-driving must be better and make up for the limitations of humans. Cameras don’t make up for them, and thus are a terrible replacement for humans.
Why does no one mention the reason they gave for dropping RADAR: who do you trust when vision conflicts with RADAR? There were constant problems on this point. LIDAR is just another redundant and possibly conflicting input.
Yup, but that’s going to be true in every environment. Conflicting or noisy signals will always be there when you have multiple sensors. There will be conflicts within pure camera systems too - what if a camera sensor goes buggy and starts putting out data that says there’s always a thing to the left?
More systems giving data to establish ground truth is better. Don’t Boeing yourself into thinking that one sensor is good enough - that’s how you kill people.
Edit: you also know how they’re doing depth detection with cameras? With AI. You know, the thing that keeps hallucinating data. So the data coming from the depth subsystem isn’t ground truth; it’s significantly worse and could be completely wrong.
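The "more sensors establish ground truth" point is basically a redundancy vote. Here's a toy sketch of cross-checking redundant range readings; the function name, the spread threshold, and the sample values are all hypothetical, and this is obviously not any carmaker's actual fusion logic:

```python
# Toy redundancy check: fuse several distance readings to the obstacle
# ahead and flag the set if the sensors disagree too much.
from statistics import median

def fused_range(readings_m, max_spread_m=2.0):
    """Return (fused distance, disagreement flag) for a list of readings in metres."""
    fused = median(readings_m)                             # robust to one faulty sensor
    disagrees = max(readings_m) - min(readings_m) > max_spread_m
    return fused, disagrees

# e.g. camera, radar, and lidar each report a distance:
print(fused_range([30.1, 29.8, 30.3]))   # sensors agree -> (30.1, False)
print(fused_range([30.1, 85.0, 30.3]))   # one buggy sensor -> (30.3, True)
```

With three independent sources, the median simply outvotes a single buggy sensor; with only one sensor type, there is nothing to vote against it.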
No, the camera data is combined into one visual “truth”, I would bet. There was a fatal accident where the radar didn’t see a truck because the beam shot under it. At the time, the radar was trusted more than the camera. The only way they saw to solve that was to have only one sensor type, and radar could never be the sole input.
You clearly have no clue what you’re talking about.
I think when you’ve gone ad hominem, I’ve won.