The U.S. government’s road safety agency is again investigating Tesla’s “Full Self-Driving” system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian.
The National Highway Traffic Safety Administration says in documents that it opened the probe on Thursday after the company reported four crashes in which Teslas entered areas of low visibility, including sun glare, fog and airborne dust.
In addition to the pedestrian’s death, another crash involved an injury, the agency said.
Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes.”
Eyes can’t see in low visibility.
Musk: “we drive with our eyes, cameras are eyes. We don’t need LiDAR”
FSD kills someone because of low visibility, just like with eyes.
Musk’s reaction -
What pisses me off about this is that, in conditions of low visibility, the pedestrian can’t even hear the damned thing coming.
I hear electric cars all the time; they are not much quieter than an ICE car. We don’t need to strap lawn mowers to our cars in the name of safety.
You can hear them, but manufacturers had to add external speakers to electric cars to make them louder.
https://en.wikipedia.org/wiki/Electric_vehicle_warning_sounds
I think they are a lot quieter. I’ve turned around, seen a car 5 meters away from me, and been surprised. That never happens with fuel cars.
I think if you are young, maybe there isn’t a big difference since you have perfect hearing. But middle aged people lose quite a bit of that unfortunately.
I’m relatively young and it can still be difficult to hear them, especially the ones without a fake engine sound. Add some city noise and they can be completely inaudible.
‘City noise’? You mean ICE car noise. We should be trying to reduce noise pollution, not compete with it.
It’s not safe for cars to be totally silent when moving, imo, since you’d be more likely to get run over.
if he was truthful: “the cost of adding lidar cuts into my profits”
Correction - older Teslas had radar; Musk demanded it be removed because it cut into his profits. Not a huge difference, but it does show how much of a shitbag he is.
Honestly though, I’m a fucking idiot and even I can tell that Lidar might be needed for proper, safe FSD
You’d think “we drive with our eyes, cameras are eyes” is an argument against only using cameras, but what do I know.
How Can Cameras Be Real If Our Eyes Aren’t Real?
The cars used to have RADAR. But they got rid of that and even disabled it on older models when updating because they “only need cameras.”
Cameras and RADAR would have been good enough for almost all conditions…
It’s worse than that, though. Our eyes are significantly better than cameras (with some exceptions at the high end) at adapting to varied lighting conditions, especially rapid changes.
Hard to credit without a source; modern cameras have way more dynamic range than the human eye.
Not in one exposure. Human eyes are much better at dealing with extremely high contrast.
Cameras can be much more sensitive, but at the cost of overexposing brighter regions in an image.
They’re also pretty noisy in low light and generally need long exposures (a problem for a camera at high speed) to gather enough light to see anything in the dark, especially if you aren’t spending thousands of dollars on massive sensors per camera.
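To put the single-exposure point in rough numbers (the contrast ratio below is an illustrative assumption, not a measured figure), here’s a quick sketch of how a scene’s contrast maps to photographic stops:

```python
import math

def dynamic_range_stops(contrast_ratio: float) -> float:
    """Convert a scene contrast ratio to photographic stops (log2 of the ratio)."""
    return math.log2(contrast_ratio)

# Hypothetical sun-glare scene: roughly a 1,000,000:1 contrast ratio.
# That's ~20 stops; whatever a single exposure can't hold clips to
# pure black or blown-out white, exactly the failure mode at issue.
print(round(dynamic_range_stops(1_000_000), 1))  # 19.9
```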
He really is a fucking idiot. But so few people can actually call him out… So he just never gets put in his place.
Imagine your life with unlimited redos. That’s how he lives.
The whole “we drive with our eyes” thing is such bullshit. Humans are terrible drivers. Autonomous driving should be better than humans.
That goes for OpenPilot too. They actually openly advertise that their software makes the same mistakes as humans, as if it’s some sort of advancement. Like if I could plug Lidar into my brain, I totally would.
In five years guys!!
I’m sure he’ll claim that this is all politically motivated, and I really hope that someone says “yes it is. FAFO”.
Really fucking stupid that we as a society intentionally choose to fuck around and find out rather than find out before we fuck around.
Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions”
They will have to look long and hard…
Tesla: Why would we need lidar? Just use visual cameras.
~~Tesla~~ Musk: Why would we need lidar? Just use visual cameras. FTFY
I wonder if they will now find the Emperor has no clothes.
This is why you can’t have an AI make decisions on activities that could kill someone. AI models can’t say “I don’t know”; every input is forced to be classified as something they’ve seen before, effectively hallucinating when the input is unknown.
I’m not very well versed in this but isn’t there a confidence value that some of these models are able to output?
All probabilistic models output a confidence value, and it’s very common and basic practice to gate downstream processes around that value. This person just doesn’t know what they’re talking about. Though, that puts them on about the same footing as Elono when it comes to AI/ML.
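For anyone wondering what “gating” looks like in practice, here’s a minimal sketch of the generic pattern, assuming a softmax classifier; the threshold and names are hypothetical, not anything from Tesla’s stack:

```python
import numpy as np

def classify_with_gate(logits: np.ndarray, threshold: float = 0.9):
    """Return a class label only if the top softmax score clears a threshold.

    Below the threshold we return None and let a downstream fallback
    policy decide what to do instead of trusting a shaky label.
    """
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else None
```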
Right, which is why that marvelous confidence value got somebody run over.
Are you under the impression that I think Tesla’s approach to AI and computer vision is anything but fucking dumb? The person said a stupid and patently incorrect thing. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.
Yes, but confidence values are not magic. They are calculated from how similar the current input is to previously observed inputs. If the input is unfamiliar to the model, what do you think happens? Usually some category still gets a high enough confidence score to be chosen as the answer, while being wrong. Now, suppose you somehow don’t get a favorable confidence score for any decision. What happens then? I’ve never encountered this, but there are only three possible paths: 1) choose a random value: not good; 2) do nothing: not good; 3) rerun the model with slightly newer data: maybe that helps, but when driving a car, slightly newer data might be too late.
There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, and you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert path, and I’m sure plenty more that an actual domain and subject matter expert, or a whole team of them, might come up with. But while we’re on the topic, it’s not really right to even label these outputs as confidence values. We’ve sort of decided they vaguely match up to something approximating confidence, but they aren’t based on a ground truth like I understand your comment to imply; they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
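A toy version of that escalation idea, to make it concrete (the thresholds and the escalation order are pure assumptions on my part, not anything a real AV stack is known to use):

```python
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()     # confident label: carry on
    SLOW_DOWN = auto()   # buy time and more frames at lower speed
    SAFE_STOP = auto()   # persistent uncertainty: stop the vehicle safely

def fallback_policy(label, consecutive_low_conf_frames: int) -> Action:
    """Escalate as low-confidence frames pile up."""
    if label is not None:
        return Action.PROCEED
    if consecutive_low_conf_frames < 5:  # a few bad frames: slow and re-sample
        return Action.SLOW_DOWN
    return Action.SAFE_STOP
```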
I purchased FSD when it was $8k. What a crock of shit. When I sold the car after 110k miles, FSD was the only thing that still gave it any value, and that was $1,500 at most.
If anyone was somehow still thinking RoboTaxi is ever going to be a thing: no, it’s not, because of reasons like this.
It doesn’t have to not hit pedestrians. It just has to hit fewer pedestrians than the average human driver.
It needs to be way, way better than ‘better than average’ if it’s ever going to be accepted by regulators and the public. Without better sensors I don’t believe it will ever make it. Waymo had the right idea here, if you ask me.
But why is that the standard? Shouldn’t “equivalent to average” be the standard? Because if self-driving cars can be at least as safe as a human, they can be improved to be much safer, whereas humans won’t improve.
Humans absolutely improve.
I’d accept that if the makers of the self-driving cars can be tried for vehicular manslaughter the same way a human would be. Humans carry civil and criminal liability, and at the moment, the companies that produce these things only have nominal civil liability. If Musk can go to prison for his self-driving cars killing people the same way a regular driver would, I’d be willing to lower the standard.
Sure, but humans are only criminally liable if they fail the “reasonable person” standard (i.e. a “reasonable person” would have swerved out of the way, but you were distracted, therefore criminal negligence). So the court would need to prove that the makers of the self-driving system failed the “reasonable person” standard (i.e. a “reasonable person” would have done more testing in more scenarios before selling this product).
So yeah, I agree that we should make certain positions within companies criminally liable for criminal actions, including negligence.
I think the threshold for proving the “reasonable person” standard for companies should be extremely low. They are a complex organization that is supposed to have internal checks and reviews, so it should be very difficult for them to squirm out of liability. The C-suite should be first on the list for criminal liability so that they have a vested interest in ensuring that their products are actually safe.
Sure, the “reasonable person” would be a competitor who generally follows standard operating procedures. If they’re lagging behind the industry in safety or something, that’s evidence of criminal negligence.
And yes, the C-suite should absolutely be the first to look at, but the problem could very well come from someone in the middle trying to make their department look better than it is and lying to the C-suites. C-suites have a fiduciary responsibility to the shareholders, whereas their reports don’t, so they can have very different motivations.
The average human driver is tried and held accountable.
That is the minimum outcome for an automated safety feature to be an improvement over human drivers.
But if everyone else is using something you refused to use that would likely have avoided someone’s death, while misnaming your feature to mislead customers, then you are in legal trouble.
When it comes to automation you need to be far better than humans because there will be a higher level of scrutiny. Kind of like how planes are massively safer than driving on average, but any incident where someone could have died gets a massive amount of attention.
Exactly. The current rate is 80 deaths per day in the US alone. Even if we had self-driving cars proven to be 10 times safer than human drivers, we’d still see 8 news articles a day about people dying because of them. Taking this as ‘proof’ that they’re not safe is setting an impossible standard and effectively advocating for 30,000 yearly deaths, as if it’s somehow better to be killed by a human than by a robot.
But they aren’t and likely never will be.
And how are we to correct for lack of safety then? With human drivers you obviously discourage dangerous driving through punishment. Who do you punish with a self-driving car?
The problem with this way of thinking is that there are solutions to eliminate accidents even without eliminating self-driving cars. By dismissing the concern you are saying nothing more than that it isn’t worth exploring the kinds of improvements that will save lives.
If you get killed by a robot, it simply lacks the human touch.
If you get killed by a robot, you can at least die knowing your death was the logical option and not a result of drunk driving, road rage, poor vehicle maintenance, panic, or any other of the dozens of ways humans are bad at decision-making.
Or the result of cost cutting…
It doesn’t even need to be logical, just statistically reasonable. You’re literally a statistic anytime you interact w/ any form of AI.
or a flipped comparison operator, or a “//TODO test code please remove”
“10 times safer than human drivers” (except during specific, visually difficult conditions which we knowingly could prevent but won’t, because it’s 10 times safer than human drivers). In software, if we have replicable conditions that cause the program to fail, we fix them, even though the bug probably won’t kill anyone.
It does, actually. That’s why robotaxis and self-driving cars in general will never be a thing.
Society accepts that humans make mistakes, regardless of how careless they’re being at the time. Autonomous vehicles are not allowed the same latitude. A single pedestrian gets killed and we have to get them all off the road.
It’s a bit reductive to put it in terms of a binary choice between an average human driver and a full AI driver. I’d argue it has to hit fewer pedestrians than a human driver with the full suite of driver assists currently available to be viable.
Self-driving is purely a convenience factor for personal vehicles and purely an economic factor for taxis and other commercial use. If a human driver assisted by all of the sensing and AI tools available is the safest option, that should be the de facto standard.
Full Self Driving shipping:
2025 → 2026 → 2027 → 3098 → 4484 → 1e+156
^ You are here
To be fair, it’s marketed as full self driving, not full self no crashing.
It sure crashed its full self
The worst way to die would be getting hit by a shitbox Tesla. RIP.
I mean I’ll take it over being burned alive or brutally eaten alive by a pack of ravenous wolves.
Neither of those are necessarily quicker or less painful than getting hit by the car.
For some real fun, try for all three at once!
I’ll take the wolves
Humans know to drive more carefully in low visibility, and/or to take actions to improve visibility. Muskboxes don’t.
Muskboxes
I like that
I’m not so sure. Whenever there are crappy weather conditions, I see a ton of accidents because so many people just assume they can drive at the posted speed limit safely. In fact, I tend to avoid the highway altogether for the first week or two of snow in my area because so many people get into accidents (the rest of the winter is generally fine).
So this is likely closer to what a human would do than not.
low visibility, including sun glare, fog and airborne dust
I also see a ton of accidents when the sun is in the sky or if it is dusty out. /s
Yup, especially at daylight saving time, when the sun abruptly changes position in the sky relative to your commute.
Cameras are probably worse here, but they may be able to make up for it by processing the poor data they get in parallel.
The question is, is Tesla FSD’s record better, worse, or about the same on average as a human driver under the same conditions? If it’s worse than the average human, it needs to be taken off the road. There are some accident statistics available, but you have to practically use a decoder ring to make sure you’re comparing like to like even when whoever’s providing the numbers has no incentive to fudge them. And I trust Tesla about as far as I could throw a Model 3.
On the other hand, the average human driver sucks too.
Yeah, I honestly don’t know. My point is merely that we should have the same standards for FSD vs human driving, at least initially, because they have a lot more potential for improvement than human drivers. If we set the bar too high, we’ll just delay safer transportation.
You can’t measure this, because it has drivers behind the wheel. Even if it made three “pedestrian-killing” mistakes every 10 miles, chances are the supervising driver would catch every mistake and not let it crash.
But on the other hand, if we were to measure every time the driver takes over the number would be artificially high - because we can’t predict the future and drivers are likely to be overcautious and take over even in circumstances that would have turned out OK.
The only way to do this IMO (rough sketch below) is by
- measuring every driver intervention
- only letting it be driverless and marketable as self-driving when it achieves a very low number of interventions (< 1 per 10,000 miles?)
- in the meantime, marketing it as “driver assist”, having the responsibility fall on the driver, and treating it like the “somewhat advanced” cruise control that it is.
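A back-of-the-envelope version of that last gating rule; the threshold is the one floated above, and the function and names are hypothetical, not any regulator’s actual test:

```python
# "< 1 intervention per 10,000 miles" from the list above
MAX_INTERVENTIONS_PER_MILE = 1 / 10_000

def may_market_as_self_driving(interventions: int, fleet_miles: float) -> bool:
    """Allow the 'self-driving' label only once the observed
    intervention rate clears the bar; until then it's driver assist."""
    if fleet_miles <= 0:
        return False
    return interventions / fleet_miles < MAX_INTERVENTIONS_PER_MILE

# Example: 12 interventions over 90,000 fleet miles -> 1 per 7,500 miles
print(may_market_as_self_driving(12, 90_000))  # False: still driver assist
```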
They also decided to only use cameras and visual cues for driving instead of also using radar, heat cameras, or something like that.
It’s designed to be launched asap, not to be safe
I mean, that’s just good economics. I’m willing to bet someone at Tesla has done the calcs on how many people they can kill before it becomes unprofitable
The median driver, sure, but the bottom couple percent never miss their exit and tend to do boneheaded shit like swerving into the next lane when there’s a stopped car at a crosswalk. >40,000 US fatalities in 2023. There are probably half a dozen fatalities in the US on any given day by the time the clock strikes 12:01 AM on the West Coast.
Humans know to drive more carefully in low visibility… Muskboxes don’t.
They do, actually. It even displays a message on the screen about low visibility.
Does anyone else find this enraging?
It’s a decade too late.