• CamilleMellom@mander.xyz

    The thing is, working well enough most of the time is not enough. I haven’t driven a Tesla so I’m not speaking for their cars, but I work in SLAM, and while cameras are great for it, cameras on a fast car need to process quickly and get good images. That’s a difficult requirement for camera-only, so you will not be able to guarantee safety the way other sensors would. In most scenarios the situation is simple: e.g. a highway where you can track lines and cars and everything is predictable. The problem is the outliers, when it’s suddenly not predictable: a lack of features in crowded environments, a recognition pipeline that fails because the model detects something that is not there or fails to detect something that is… then you have no safeguards.

    Camera-only is not authorized in most logistics operations in factories; I’m not sure what changes for a car.

    It’s OK to build a system that is good « most of the time » if you don’t advertise it as a fully autonomous system, so people stay focused.

    • SirEDCaLot@lemmy.fmhy.ml

      My point stands: drive the car.
      You’re 100% right in everything you say. It has to work 100% of the time; good enough most of the time won’t get to L3-5 self-driving.

      Camera-only is not authorized in most logistics operations in factories; I’m not sure what changes for a car.

      The question is not the camera, it’s what you do with the data that comes off the camera.
      The first few versions of camera-based autopilot sucked. They were notably inferior to their radar-based equivalents, and that’s because the system was running neural-network image recognition on each camera separately. It would take a picture from one camera, say ‘that looks like a car and it looks like it’s about 20’ away’, and repeat this for each frame from each camera. That sorta worked okay most of the time, but it got confused a lot. It would also ignore anything it couldn’t classify, which of course was no good because lots of ‘odd’ things can threaten the car. This setup would never get to L3 quality or reliability. It did tons of stupid shit all the time.
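
      Roughly, that older per-camera approach boils down to something like the sketch below. It’s purely illustrative (the names, the confidence threshold, and the crude size-based distance guess are my own assumptions, not Tesla’s code), but it shows why unclassified objects simply vanish from the picture:

      ```python
      # Hypothetical sketch of a per-camera, per-frame recognition pipeline.
      # All names and numbers are made up for illustration.
      from dataclasses import dataclass

      @dataclass
      class Detection:
          label: str          # e.g. "car", "pedestrian"
          confidence: float   # detector score in [0, 1]
          box_height_px: int  # apparent height of the bounding box

      def estimate_distance(det: Detection, focal_px: float = 1000.0,
                            assumed_height_m: float = 1.5) -> float:
          """Crude monocular range guess from apparent size: d = f * H / h."""
          return focal_px * assumed_height_m / max(det.box_height_px, 1)

      def process_frame(detections: list[Detection]) -> list[tuple[str, float]]:
          tracked = []
          for det in detections:
              if det.confidence < 0.5:
                  continue  # anything the model can't classify is simply ignored
              tracked.append((det.label, estimate_distance(det)))
          return tracked

      # Each camera and each frame is handled in isolation: no shared 3D model,
      # and nothing catches the objects the classifier missed.
      frame = [Detection("car", 0.93, 120), Detection("unknown blob", 0.31, 40)]
      print(process_frame(frame))  # [('car', 12.5)] -- the unclassified object is dropped
      ```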

      What they do now is called occupancy networks. That is, video from ALL cameras is fed into one neural network that understands the geometry of the car and where the cameras are. Using multiple frames of video from multiple cameras at once, it generates a 3D model of the world around the car and identifies objects in it: what is road, what is curb and sidewalk, where other vehicles and pedestrians are (and where they are moving and likely to move to). That data is fed to a planner AI that decides things like when the car should accelerate, brake, or turn.
      Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.
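
      As a rough structural sketch (heavily simplified and hypothetical; the grid size, the stand-in functions, and the toy planner rule are my assumptions, not the real network), the data flow is: all cameras in, one shared 3D occupancy grid out, planner on top:

      ```python
      # Toy illustration of the occupancy-network data flow described above:
      # every camera feeds one model, which outputs a single 3D occupancy grid
      # that a planner consumes. Not Tesla's implementation.
      import numpy as np

      GRID = (200, 200, 16)  # x, y, z voxels around the car (assumed resolution)

      def fake_backproject(cam: str, frame: np.ndarray) -> np.ndarray:
          # Placeholder: in the real system a learned network maps pixels to voxels.
          rng = np.random.default_rng(abs(hash(cam)) % 2**32)
          return rng.random(GRID, dtype=np.float32) * 0.01

      def occupancy_network(frames: dict) -> np.ndarray:
          """Stand-in for the learned model: fuse all cameras into one grid of
          occupancy probabilities in [0, 1], whether or not objects are classified."""
          grid = np.zeros(GRID, dtype=np.float32)
          for cam, frame in frames.items():
              grid += fake_backproject(cam, frame)
          return np.clip(grid, 0.0, 1.0)

      def planner(grid: np.ndarray) -> str:
          # The planner only cares that space is occupied, not what the object is.
          corridor_ahead = grid[90:110, 100:140, :]
          return "brake" if corridor_ahead.max() > 0.7 else "continue"

      frames = {name: np.zeros((480, 640, 3), np.uint8)
                for name in ("front", "left_pillar", "right_pillar", "rear")}
      print(planner(occupancy_network(frames)))  # -> 'continue' on empty input
      ```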

      I drive a Tesla. And I’m telling you from experience: it DOES work. The latest betas of the Full Self-Driving software are very, very good. On the highway, the computer is a better driver than me in most situations. And on local roads it navigates near-perfectly; the only thing it sometimes has trouble with is figuring out when it’s its turn at an intersection (you have to push the gas pedal to force it to go).

      I’d say it’s easily at L3+ state for highway driving. Not there yet for local roads. But it gets better with every release.

      • CamilleMellom@mander.xyz

        It’s an interesting discussion, thanks!

        I know that it can be done :). It’s my direct field of research (localization and mapping of autonomous robots, with a focus on building 3D models from camera images, e.g. NeRF-related methods). What I was trying to say is that you cannot have high safety using just cameras. But I think we agree there :)

        I’ll be curious to know how they handle environments with a clear lack of depth information (highway roads), how they optimized the processing power (estimating depth is one thing, but building a continuous 3D model is different), and the image blur when moving at high speed :). Sensor fusion between visual SLAM and LiDAR is not complex (since the LiDAR provides what you estimate with your neural occupancy grid anyway, what you get is a more accurate measurement), so on the technological side they don’t really gain much by dropping it; the gain is mainly cost.
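
        For what it’s worth, the kind of fusion I mean is essentially just weighting two estimates of the same depth by their uncertainty, as in this minimal sketch (all numbers are made up for illustration):

        ```python
        # Minimal sketch of fusing a camera depth estimate with a LiDAR return for
        # the same point, using inverse-variance weighting. Numbers are illustrative.
        def fuse_depth(d_cam: float, var_cam: float,
                       d_lidar: float, var_lidar: float) -> tuple:
            w_cam, w_lidar = 1.0 / var_cam, 1.0 / var_lidar
            depth = (w_cam * d_cam + w_lidar * d_lidar) / (w_cam + w_lidar)
            var = 1.0 / (w_cam + w_lidar)  # fused estimate is never less certain
            return depth, var

        # Camera guesses 21 m with metre-level noise; LiDAR reads 20.2 m with cm-level noise.
        print(fuse_depth(21.0, 4.0, 20.2, 0.01))  # ~(20.2, 0.01): the LiDAR dominates
        ```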

        My guess is that they probably still do a lot of feature detection (lines and stuff) in the background, and a lot of what you experience when you drive is improvement in depth estimation and feature detection on RGB images? But maybe not; I’ll be really interested to read more about it :). Do you have the research paper that the Tesla algo relies on?

        Just to be clear, I have no doubt it works :). I have used similar systems for mobile robots and I don’t see why it would not. But I’m also worried that it will lull people into a false sense of safety when the driver should stay alert.

        • SirEDCaLot@lemmy.fmhy.ml

          Don’t have the paper; my info comes mainly from various interviews with people involved in the thing. Elon of course, and Andrej Karpathy (he was in charge of their AI program for some time).

          They apparently used to use feature detection and object recognition in RGB images, then gave up on that (as generating coherent RGB images just adds latency and object recognition was too inflexible) and they’re now just going by raw photon count data from the sensor fed directly into the neural nets that generate the 3d model. Once trained this apparently can do some insane stuff like pull edge data out from below the noise floor.

          This may be of interest– This is also from 2 years ago, before Tesla switched to occupancy networks everywhere. I’d say that’s a pretty good equivalent of a LiDAR scan…

      • tony@l.bxy.sh

        Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.

        That’s my problem: it is approximating LiDAR, but it isn’t the same. I would say multiple sensor types are necessary for exactly the reason you suggested they aren’t: to get multiple forms of input and reach consensus, or, failing consensus, fail safe.
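
        To make the consensus idea concrete, I mean something like the toy sketch below (the sensor names and the voting rule are illustrative, obviously not how any shipping stack is written):

        ```python
        # Toy sketch of cross-sensor consensus with a fail-safe default.
        def obstacle_consensus(camera_sees: bool, radar_sees: bool, lidar_sees: bool) -> str:
            votes = [camera_sees, radar_sees, lidar_sees]
            if all(votes):
                return "obstacle"         # full agreement: treat it as real
            if any(votes):
                return "slow_and_verify"  # disagreement: fail safe, don't trust one sensor alone
            return "clear"

        print(obstacle_consensus(camera_sees=False, radar_sees=True, lidar_sees=True))
        # -> 'slow_and_verify': a camera miss alone can't wave the car through
        ```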

        I don’t doubt Tesla autopilot works well and it certainly seems to be an impressive feat of engineering, but can it be better?

        In our town a Tesla shot through red traffic lights near our local school, barely missing a child crossing the road. The driver was looking at their lap (presumably at their phone). I looked online, and apparently autopilot doesn’t handle traffic lights, but FSD does?

        It’s not specific to Tesla, but leaving people unaware of the limitations of Level 2, particularly when brands like Tesla give them the impression the car “drives itself”, is unethical.

        My opinion is that if that Tesla had had extra sensors, even with the car only in Level 2 mode, it should have been able to pick up that something was there and slow down or stop. I want the extra sensors to cover the edge cases and give more confidence in the system.

        Would you still feel the same about Tesla if your car injured/killed someone or if someone you care about was injured/killed by a Tesla?

        IMHO these are not systems that we should be compromising to cut costs or because the CEO is too stubborn. If we can put extra sensors in and it objectively makes the system safer, why don’t we? Self-driving cars are a luxury.

        Crazy hypothetical: I wonder how Tesla would cope with someone/something covered in Vantablack?