• Hart@beehaw.org · 70 points · 1 year ago

    Engineers, raise your hand if you’ve tried to do good work despite your management’s ‘support.’ Oh, look at all the hands going up!

    • kitonthenet@kbin.social · 27 points · 1 year ago

      This is true, but when safety is on the line it actually goes further than that. As an engineer you have an ethical duty to say no to making a product unsafe for end users or the general public.

      It doesn’t matter if you get fired, if your boss goes to the media to bitch about you, or if your boss threatens to sue you: as an engineer, you hold a position of public trust to keep the people that use your product safe. If you don’t respect that and take it seriously, well, we saw where OceanGate ended up.

      • EthicalAI@beehaw.org · 13 points · 1 year ago

        Yeah, my boss has been going back and forth with me on this for months, wanting to release unsecured products to the general public. I’m getting exhausted with him. I hold the keys, and frequently I’ve told him no and threatened to quit. Each time they just retreat and hold a meeting about how it will “stay on dev for now”. The features aren’t even feasible to release in the near future, but I know they will force the issue. My resignation letter is on the table.

      • dark_stang@beehaw.org · 8 points · 1 year ago

        The number of times I’ve rejected something because of security flaws (usually SQL injection), only to see other engineers later approve and merge the pull request, is infuriating. There always seems to be an engineer who is willing to make an unsafe product.
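
        To illustrate the kind of flaw I mean, here’s a minimal hypothetical sketch in Python (the table and variable names are made up): the first query splices user input straight into the SQL text, the second uses a parameterized query, which is the fix reviewers usually ask for.

        import sqlite3

        conn = sqlite3.connect("app.db")
        user_input = "alice' OR '1'='1"  # attacker-controlled string

        # Unsafe: user input is concatenated into the SQL text, so the attacker
        # can rewrite the query (classic SQL injection).
        conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

        # Safe: the value is passed as a bound parameter and never parsed as SQL.
        conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))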

        • kitonthenet@kbin.social · 2 points · 1 year ago

          Yep, it’s a damn shame, but we’re gonna let them do that, because we don’t want to be responsible for deaths or security flaws, and ultimately there are organizations and people out there who value that, even if our current jobs don’t.

      • chrisn@beehaw.org · 2 points · 1 year ago

        That value is instilled in many types of engineering, but not as much in software engineering.

    • bfg9k@kbin.social · 20 points · 1 year ago

      Tale as old as time.

      Engineers: “This is possible but we will need to equip every car with an expensive sensor suite”

      Management: “So you’re saying we can just remove the sensors and figure it out with your engineering magic? You guys are really good at that, you got my iPhone connected to iCloud so you must be reeeally good with technology.”

      Engineers: “…”

      Management: “Also, anyone not up to this task is fired.”

    • !ozoned@lemmy.world@beehaw.org · 4 points · 1 year ago

      Management ALWAYS knows what’s best! Obviously!

      Hence why they constantly come running to us to fix it when shit goes exactly the way we said it would.

  • Nougat@kbin.social · 28 points · 1 year ago

    tl;dr: Autonomous driving uses a whole host of multiple and different kinds of sensors. Musk said “NO, WE WILL ONLY USE VISION CAMERA SENSORS.” And that doesn’t work.

    Guess what? I have eyes; I can see. You know what I want an autonomous vehicle to be able to do? Receive sensory input that I can’t.

    • bfg9k@kbin.social · 19 points · 1 year ago

      We also use way more than just our eyes to navigate. We have accelerometers (the inner ear), pressure sensors (touch), and Doppler sensors (ears) to augment how we get around. It was a fool’s errand to try and figure everything out with just cameras.

    • EthicalAI@beehaw.org · 10 points · 1 year ago

      What’s worse is that it will be hard to reverse this decision. Tesla is a data and AI company, compiling vision and driving data from drivers around the world. If you change the sensor format or layout dramatically, all the old data and all the new data becomes hard to hybridize. You basically start from scratch, at least for the new sensors, and you fail to deliver on a promise to old customers.

      • Metacortechs@lemmy.stellarvortex.com · 3 points · 1 year ago

        Sounds to me like they should go full steam ahead with new sensors; they will never deliver on what they’ve promised with the tech they are using today.

        Old customers’ situation won’t change, and it would only get better going forward.

      • Barry Zuckerkorn@beehaw.org · 1 point · 1 year ago

        If you change the sensor format or layout dramatically, all the old data and all the new data becomes hard to hybridize.

        I don’t see why that would have to be the case if the new data is a complete superset of the old data. If all the same cameras are there, then the additional sensors and the data those sensors collect can actually help train the processing of the visual-only data, right?

    • kestrel7@kbin.social · 3 points · 1 year ago

      How do we prove we’re not robots? Fucking select the pictures with traffic lights or buses, right? How was this allowed?

    • Canadian Nomad@beehaw.org · 1 point · 1 year ago

      This news is months old. Honestly, I agree with Musk on this one. We are able to drive with 2 (sometimes only 1) low-resolution (sometimes out-of-focus, sometimes closed) cameras on a pivot inside the vehicle, with further blind spots all around. Much of our rear situational awareness comes from two or three small warped mirrors strategically placed to enhance those 2 low-resolution cameras on a pivot. Tesla has already reverted to add some radar back in… The lidar option sounds like a dystopia waiting to happen (just imagine all streets filled with aftermarket invisible lasers from third-world countries; any one of them could blind you under unlucky circumstances). The best way forward is visual, and if you watch up-to-date test drives on YouTube you can see they are doing quite well with what they have.

  • Toxic_Tiger@beehaw.org · 23 points · 1 year ago

    But even if these consequences don’t come to pass, this information still paints Musk’s attitude towards public health and how he views his responsibility to his customers as far from golden.

    Why am I not surprised in the least?

  • ironsoap@lemmy.one · 23 points · 1 year ago

    According to the report, Musk overruled a significant number of Tesla engineers who warned him that switching to a visual-only system would be problematic and possibly unsafe due to its high risk of increasing the rate of accidents. His own team knew their systems weren’t up to the task, but Musk believed he knew better than the industry experts who helped propel Tesla to the forefront of autonomous technology and ploughed on with this egocentric, counterproductive plan. He even disabled sensors in older models so that pretty much the entire Tesla fleet went visual-only.

    Amazing, just amazing.

    • fear@kbin.social · 2 points · 1 year ago

      He’s starting to sound like Elizabeth Holmes.

      Every expert in the field insists that my idea is impossible? They’re backing their assertions up with cold, hard facts?! I’ll show ’em!

      • keeb420@kbin.social · 1 point · 1 year ago

        If you look back at the history of Tesla, there’s lots of that, where Musk/Tesla engineers actually succeeded. It sounds like that thinking finally bit him in the ass.

        • roofuskit@kbin.social · 2 points · 1 year ago

          These people all start to fail horribly when they stop listening. They somehow convince themselves that they are the genius, when their skills never went beyond having money and listening to groups of actual experts.

  • Chup@feddit.de · 19 points · 1 year ago

    It’s the radar/lidar vs. cameras-only story that has been coming up every few months for the last few years. A few years ago Tesla went cameras-only to save money, assuming it would be good enough. Other manufacturers/cars have a higher certification for autonomous driving, but they are also using more sensors than just cameras.

  • EthicalAI@beehaw.org · 15 points · 1 year ago

    Capitalism. Nothing worse than a CEO for a product, to be honest. Being able to overrule engineers and workers is literally the problem with capitalism. A guy with ungodly money vs. actual boots on the ground. Disgusting.

    • !ozoned@lemmy.world@beehaw.org · 4 points · 1 year ago

      You mean one man with a sapphire spoon shoved up his ass from birth doesn’t know more than an army of folks that have studied their entire lives, experienced worlds of issues around it, and are living and breathing this stuff every day for this exact challenge? HUH! Well, today I learned! /s

      And when the layoffs come, who does it affect more? The billionaire douchebag? Or the people that warned him?

    • keeb420@kbin.social · 2 points · 1 year ago

      It doesn’t end at CEOs. I can think of one prominent, and fairly recent, incident where a different automaker knew of a defect before the product launched and overruled fixing it because it was cheaper to leave it be. And that directly led to people dying. Yet GM cars are still sold around the world, and most people have forgotten about the ignition incidents. AFAIK the CEO was never involved in that decision.

  • Ronno@kbin.social · 9 points · 1 year ago

    Everyone already knew at the time that this decision was doomed to fail. They now even doubled down to actively remove sensors from older models, to avoid the inputs interfering with the new updates. When it comes to automation, and especially autonomous driving in combination with safety, one should want as much input as possible. I doubt visual can compute faster than radar/lidar; I think it was just a cost-saving effort. Thankfully, Mercedes and BMW are showing the way to autonomous driving and are allowed to actually start using the first versions on European highways.

    • CedarMadness@midwest.social · 1 point · 1 year ago

      They now even doubled down to actively remove sensors from older models, to avoid the inputs interfering with the new updates.

      Yes, I bought FSD a long time ago, and even though I’m owed a hardware 3 upgrade, I’ve yet to get it. If I stay on hardware 2.5, my radar will stay active and they can’t do something even dumber like disable my parking sensors. I’ve driven vision-only cars and it’s really worse, at least for the roads around here. The FSD alpha is still too nerve-wracking for me to even consider installing it.

  • magnetosphere @beehaw.org · 8 points · 1 year ago

    If it weren’t for all the deaths and other negative impacts on consumers and the general public, I’d be glad this is happening to such an arrogant prick. I hope the DoJ throws the book at him.

  • Ertebolle@kbin.social · 6 points · 1 year ago

    Honestly, Tesla should lean into their recent successes with charger standards and shift to being a company that sells/licenses EV tech to other companies, much as Intel is transitioning from making their own chips to making other people’s. Let GM and Ford and Hyundai and VW whack each other over the head until they haven’t got any margins left, and focus on the aspects of the business that are more profitable than simply making cars.

  • burningmatches@feddit.uk · 3 points · 1 year ago

    Musk Overruled Tesla Engineers, And Now They Are In Serious Trouble

    The engineers are in serious trouble? Or Tesla?

    This headline would be clearer if it followed the convention of companies being singular:

    Musk overruled Tesla engineers, and now it’s in serious trouble

    • h_adl_ss@feddit.de · 4 points · 1 year ago

      Dude! This is the news, you can’t just write a clear and understandable headline, no one will click on the article! Amateurs… (/s if it wasn’t clear)

  • SirEDCaLot@lemmy.fmhy.ml · 2 points · 1 year ago

    I’m not sure what kind of serious trouble they are actually in. I have spent most of today being driven around by my Tesla, and aside from the occasional badly handled intersection and unnecessary slowdown, it’s doing fucking great. So I would tell anyone who says Tesla is in serious trouble: just go drive the car. Actually use the FSD beta before you say that it’s useless. Because it’s not. It is already far better than anyone expected vision-only driving to be, and every release brings more improvements. I’m not saying that as a Tesla fanboy. I’m saying that as a person who actually drives the car.

    • CamilleMellom@mander.xyz · 9 points · 1 year ago

      The thing is, working well enough most of the time is not enough. I haven’t driven a Tesla, so I’m not speaking for their cars, but I work in SLAM, and while cameras are great for it, cameras on a fast car need to process fast and get good images. That’s a difficult requirement for camera-only, so you will not be able to guarantee safety the way other sensors would. In most scenarios the situation is simple: e.g. a highway where you can track lines and cars and everything is predictable. The problem is the outliers, when it’s suddenly not predictable: a lack of features in crowded environments, a recognition pipeline that fails because the model detects something that is not there or fails to detect something that is… then you have no safeguards.

      Camera-only is not authorized in most logistics operations in factories; I’m not sure what changes for a car.

      It’s OK to build a system that is good “most of the time” if you don’t advertise it as a fully autonomous system, so people stay focused.

      • SirEDCaLot@lemmy.fmhy.ml · 0 points · 1 year ago

        My point stands: drive the car.
        You’re 100% right in everything you say. It has to work 100% of the time. Good enough most of the time won’t get to L3-L5 self-driving.

        Camera-only is not authorized in most logistics operations in factories; I’m not sure what changes for a car.

        The question is not the camera, it’s what you do with the data that comes off the camera.
        The first few versions of camera-based Autopilot sucked. They were notably inferior to their radar-based equivalents; that’s because the cameras were using neural-network-based image recognition on each camera. So it’d take a picture from one camera, say “that looks like a car and it looks like it’s about 20 feet away”, and repeat this for each frame from each camera. That sorta worked okay most of the time, but it got confused a lot. It would also ignore any image it couldn’t classify, which of course was no good because lots of “odd” things can threaten the car. This setup would never get to L3 quality or reliability. It did tons of stupid shit all the time.

        What they do now is called occupancy networks. That is, video from ALL cameras is fed into one neural network that understands the geometry of the car and where the cameras are. Using multiple frames of video from multiple cameras at once, it then generates a 3d model of the world around the car and identifies objects in it like what is road and what is curb and sidewalk and other vehicles and pedestrians (and where they are moving and likely to move to), and that data is fed to a planner AI that decides things like where the car should accelerate/brake/turn.
        Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.
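
        To give a rough idea of the shape of that pipeline, here’s a hypothetical PyTorch-style sketch (not Tesla’s actual code, just the general idea: per-camera features fused into one volume and decoded into an occupancy grid a planner could consume; every name and size here is made up):

        import torch
        import torch.nn as nn

        class ToyOccupancyNet(nn.Module):
            """Toy multi-camera occupancy network: camera frames in, 3D occupancy grid out."""
            def __init__(self, n_cams=8, grid=(64, 64, 8)):
                super().__init__()
                self.grid = grid
                # Shared per-camera image encoder (stand-in for a real vision backbone).
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 5, stride=4), nn.ReLU(),
                    nn.Conv2d(32, 64, 5, stride=4), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                # Fuse features from all cameras, decode to per-voxel occupancy logits.
                self.decoder = nn.Sequential(
                    nn.Linear(n_cams * 64, 1024), nn.ReLU(),
                    nn.Linear(1024, grid[0] * grid[1] * grid[2]),
                )

            def forward(self, images):              # images: (batch, n_cams, 3, H, W)
                b, n, c, h, w = images.shape
                feats = self.encoder(images.view(b * n, c, h, w)).view(b, -1)
                logits = self.decoder(feats).view(b, *self.grid)
                return torch.sigmoid(logits)        # occupancy probability per voxel

        # One pass: frames from 8 cameras -> a 64x64x8 grid of occupancy probabilities.
        occupancy = ToyOccupancyNet()(torch.rand(1, 8, 3, 128, 256))
        print(occupancy.shape)  # torch.Size([1, 64, 64, 8])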

        I drive a Tesla. And I’m telling you from experience: it DOES work. The latest betas of the Full Self-Driving software are very, very good. On the highway, the computer is a better driver than I am in most situations. And on local roads it navigates near-perfectly; the only thing it sometimes has trouble with is figuring out when it’s its turn at an intersection (you have to push the gas pedal to force it to go).

        I’d say it’s easily at an L3+ state for highway driving. Not there yet for local roads. But it gets better with every release.

        • CamilleMellom@mander.xyz · 2 points · 1 year ago

          It’s an interesting discussion, thanks!

          I know that it can be done :). It’s my direct field of research (localization and mapping of autonomous robots, with a focus on building 3D models from camera images, e.g. NeRF-related methods). What I was trying to say is that you cannot have high safety using just cameras. But I think we agree there :)

          I’ll be curious to know how they handle environments with a clear lack of depth information (highway roads), how they optimized the processing power (estimating depth is one thing, but building a continuous 3D model is different), and the image blur when moving at high speed :). Sensor fusion between visual SLAM and LiDAR is not complex (since the LiDAR provides what you estimate with your neural occupancy grid anyway, what you get is a more accurate measurement), so on the technological side they don’t really gain much; it’s mainly a gain on cost.
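
          (To make the “LiDAR provides what you estimate anyway” point concrete, a tiny hypothetical sketch: where a LiDAR return exists for a cell, take the measurement; elsewhere keep the camera-estimated depth. A real system would weight the two by their uncertainties, e.g. a Kalman-style update, but the idea is simply that LiDAR replaces an estimate with a measurement.)

          import numpy as np

          def fuse_depth(cam_depth, lidar_depth, lidar_valid):
              # Prefer the LiDAR measurement where one exists, otherwise keep
              # the camera estimate (all values in metres).
              return np.where(lidar_valid, lidar_depth, cam_depth)

          cam = np.array([10.2, 25.7, 40.1])     # depths estimated from images
          lidar = np.array([10.0, 0.0, 39.5])    # direct LiDAR returns
          valid = np.array([True, False, True])  # the middle cell has no return
          print(fuse_depth(cam, lidar, valid))   # [10.  25.7 39.5]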

          My guess is that they probably still do a lot of feature detection (lines and stuff) in the background, and a lot of what you experience when you drive is improvement in depth estimation and feature detection on RGB images? But maybe not; I’d be really interested to read more about it :). Do you have the research paper that the Tesla algo relies on?

          Just to be clear, I have no doubt it works :). I have used similar systems for mobile robots and I don’t see why it would not. But I’m also worried that it will lull people into a false sense of safety when the driver should stay alert.

          • SirEDCaLot@lemmy.fmhy.ml · 1 point · 1 year ago

            I don’t have the paper; my info comes mainly from various interviews with people involved in the thing: Elon of course, and Andrej Karpathy (he was in charge of their AI program for some time).

            They apparently used to use feature detection and object recognition on RGB images, then gave up on that (as generating coherent RGB images just adds latency, and object recognition was too inflexible), and they’re now just going by raw photon-count data from the sensor, fed directly into the neural nets that generate the 3D model. Once trained, this apparently can do some insane stuff, like pull edge data out from below the noise floor.

            This may be of interest– This is also from 2 years ago, before Tesla switched to occupancy networks everywhere. I’d say that’s a pretty good equivalent of a LiDAR scan…

        • tony@l.bxy.sh · 1 point · 1 year ago

          Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.

          That’s my problem: it is approximating LiDAR, but it isn’t the same. I would say multiple sensor types are necessary for exactly the reason you suggested they aren’t: to get multiple forms of input and reach consensus, or, failing consensus, fail safe.
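
          To illustrate what I mean by consensus or fail-safe, a toy hypothetical sketch (made-up numbers and threshold): compare the distance each sensor reports for the same object, and if they disagree by more than some margin, fall back to the most conservative action instead of trusting either one.

          def plan_with_consensus(camera_dist_m, radar_dist_m, tolerance_m=2.0):
              # Toy consensus check between two distance estimates (metres):
              # agreement -> act on the fused value; disagreement -> fail safe.
              if abs(camera_dist_m - radar_dist_m) <= tolerance_m:
                  fused = (camera_dist_m + radar_dist_m) / 2
                  return f"consensus: obstacle at {fused:.1f} m, plan normally"
              worst = min(camera_dist_m, radar_dist_m)
              return f"no consensus: assume obstacle at {worst:.1f} m, brake and alert driver"

          print(plan_with_consensus(42.0, 41.2))  # sensors agree
          print(plan_with_consensus(42.0, 8.5))   # camera misses what radar sees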

          I don’t doubt Tesla autopilot works well and it certainly seems to be an impressive feat of engineering, but can it be better?

          In our town we had a Tesla shoot through red traffic lights near our local school, barely missing a child crossing the road. The driver was looking at their lap (presumably at their phone). I looked online, and apparently Autopilot doesn’t work with traffic lights, but FSD does?

          It’s not specific to Tesla, but leaving people unaware of the limitations of Level 2, particularly when brands like Tesla give people the impression the car “drives itself”, is unethical.

          My opinion is that if that Tesla had extra sensors, even with the car only in Level 2 mode, it should have been able to pick up that something was there and slow/stop. I want the extra sensors to cover the edge cases and give more confidence in the system.

          Would you still feel the same about Tesla if your car injured/killed someone or if someone you care about was injured/killed by a Tesla?

          IMHO these are not systems that we should be compromising to cut costs, or because the CEO is too stubborn. If we can put extra sensors in and it objectively makes things safer, why don’t we? Self-driving cars are a luxury.

          Crazy hypothetical: I wonder how Tesla would cope with someone/something covered in Vantablack?

    • exscape@kbin.social · 6 points · 1 year ago

      This kind of serious trouble (from the article):

      The Department of Justice is currently investigating Tesla for a series of accidents — some fatal — that occurred while their autonomous software was in use. In the DoJ’s eyes, Tesla’s marketing and communication departments sold their software as a fully autonomous system, which is far from the truth. As a result, some consumers used it as such, resulting in tragedy. The dates of many of these accidents transpired after Tesla went visual-only, meaning these cars were using the allegedly less capable software.

      Consequently, Tesla faces severe ramifications if the DoJ finds them guilty.

      And of course:

      The report even found that Musk rushed the release of FSD (Full Self-Driving) before it was ready and that, according to former Tesla employees, even today, the software isn’t safe for public road use. In fact, a former test operator went on record saying that the company is “nowhere close” to having a finished product.

      So even though it seems to work for you, the people who created it don’t seem to think it’s safe enough to use.

      • obviouspornalt@lemmynsfw.com · 5 points · 1 year ago

        My neighborhood has roundabouts. A couple of times, when there’s not any traffic around, I’ve let Autopilot attempt to navigate them. It works, mostly, but it’s quite unnerving. AP wants to go through them way faster than I would drive through them myself.

        • SirEDCaLot@lemmy.fmhy.ml · 1 point · 1 year ago

          AP or FSD?
          AP is old and frankly kinda sucks at a lot of things.
          FSD Beta, if anything, I’ve found to be too cautious about such things.