While many social media users are blaming the pedestrian for reportedly crossing against the light, the incident highlights the challenge autonomous driving faces in complex situations.
Whether or not to run over the pedestrian is a pretty complex situation.
what was the social credit score of the pedestrian?
To be fair, you have to have a pretty high IQ to run over a pedestrian
Right?
I saw “in a complex situation” and thought “what’s complex? Person in road = stop”
I recommend trying https://www.moralmachine.net/ and answering the 13 questions to get a bigger picture. It will take you some 20 minutes or so.
You may find that the problem is not as simple as a four-word soundbite.
In this week’s Science magazine, a group of computer scientists and psychologists explain how they conducted six online surveys of United States residents last year between June and November that asked people how they believed autonomous vehicles should behave. The researchers found that respondents generally thought self-driving cars should be programmed to make decisions for the greatest good.
Sort of. Through a series of quizzes that present unpalatable options that amount to saving or sacrificing yourself — and the lives of fellow passengers who may be family members — to spare others, the researchers, not surprisingly, found that people would rather stay alive.
https://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html
same link: https://archive.is/osWB7
Is every scenario on that site a case of brake failure? As a presumably electric vehicle, it should be able to use regenerative braking to stop or slow, or even rub against the guardrails at the side, in every instance I saw.
There’s also no accounting for probabilities or magnitude of harm, any attempt to warn anyone, or the plethora of bad decisions required to put a car travelling at what must be highway speeds down a city stroad with a sudden, undetectable, complete brake-system failure.
This “experiment” is pure, unadulterated propaganda.
Oh, and that’s not even accounting for the intersection of this concept and negative externalities. If you’re picking an “AI” driving system for your car, do you pick the socially responsible one, or the one that prioritizes your well-being as the owner? What choice do you think most people pick in this instance?
Interesting link, thanks. I find this example pretty dumb, though. There is a pedestrian crossing the street on a zebra crossing. The car should, oh I don’t know, stop?
Never mind, I read the description: the car has a brake problem. In that case, try to cause the least damage, like any normal driver would.
Can you swerve without hitting a person? Then swerve, else stay. This means that the car will act predictably, and in the long run that is safer for everyone.
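In code that rule really is this small; a minimal sketch, assuming perception has already answered the two yes/no questions (everything here is hypothetical, not any real vendor API):

    # Minimal sketch of the "swerve only if it endangers nobody" rule;
    # the inputs are hypothetical booleans, not a real perception API.
    def choose_maneuver(current_lane_clear: bool, escape_path_clear: bool) -> str:
        """Brake hard either way; leave the lane only when the escape path is clear."""
        if not current_lane_clear and escape_path_clear:
            return "swerve_and_brake"   # swerving hits no one
        return "brake_in_lane"          # otherwise stay predictable and brake

    print(choose_maneuver(current_lane_clear=False, escape_path_clear=True))   # swerve_and_brake
    print(choose_maneuver(current_lane_clear=False, escape_path_clear=False))  # brake_in_lane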
Can you not enter the road in front of an oncoming vehicle while ignoring the red light? If you can, then don’t. That means that pedestrians will act predictably, and in the long run it will be safer for everyone.
The car should be programmed to always self-destruct or take out the passengers. This is the only way it can counter its self-serving bias or conflict of interests. The bonus is that there are fewer deadly machines on the face of the planet and fewer people interested in collateral damage.
Teaching robots to do “collateral damage” would be an excellent path to the Terminator universe.
Make this upfront and clear for all users of these “robotaxis”.
Now the moral conflict becomes very clear: profit vs life. Choose.
Interesting idea. Do you think there is a big market for such a product? 😆
Well yes and no.
First off, ignoring the pitfalls of AI:
There is the issue at the core of the Trolley problem. Do you preserve the life of a loved one or several strangers?
This translates to: if you know the options when you’re driving are:
Drive over a cliff / into a semi / other guaranteed lethal thing for you and everyone in the car.
Hit a stranger, but you won’t die.
What do you choose as a person?
Then we have the issue of how to program a self-driving car on that same problem. Does it value all life equally, or is it weighted to save the life of the immediate customer over all others?
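Concretely, that choice is just a weight in a cost function. A toy sketch, with invented weights and probabilities (nobody’s real policy):

    # Hypothetical harm-weighted cost for one candidate maneuver.
    # "Value all life equally" keeps both weights at 1.0; an owner-protective policy does not.
    def expected_harm(p_occupant_injury, p_bystander_injury,
                      occupant_weight=1.0, bystander_weight=1.0):
        return occupant_weight * p_occupant_injury + bystander_weight * p_bystander_injury

    # Two maneuvers for the same situation: swerve into the barrier vs. stay on course.
    swerve = dict(p_occupant_injury=0.8, p_bystander_injury=0.0)
    stay   = dict(p_occupant_injury=0.0, p_bystander_injury=0.9)

    print(expected_harm(**swerve) < expected_harm(**stay))   # True: equal weights favour swerving
    print(expected_harm(**swerve, occupant_weight=5.0)
          < expected_harm(**stay, occupant_weight=5.0))      # False: owner-first weighting favours hitting the stranger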
Lastly, and really the likely core problem: modern AI isn’t capable of full self-driving, and the current core architecture will always have a knowledge gap, regardless of the size of the model. 99% of the time, these systems can only do things that are represented in their training data. So if they don’t recognize a human or an obstacle, in all the myriad forms we can take and ways we can move, they will ignore it. The remaining 1% is hallucinations that happen to be randomly beneficial. But, particularly for driving, if it’s not in the model, they can’t do it.
We are not talking about a “what if” situation where it has to make a moral choice. We aren’t talking about a car that decided to hit a person instead of a crowd. Unless this vehicle had no brakes, it doesn’t matter.
It’s a simple “if person, then stop”, not “if person, stop unless the light is green”.
A normal, rational human doesn’t need a complex algorithm to decide to stop if little Stacy runs into the road after a ball at a zebra/crosswalk/intersection.
The ONLY consideration is “did they have enough time/space to avoid hitting the person”
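As (hypothetical) code, the point is simply that the correct rule has no light-colour branch in it at all; whoever supplies is_person_ahead is a separate problem:

    # The rule the comment argues for vs. the rule it warns against.
    # is_person_ahead / light_is_green are hypothetical inputs from perception.
    def should_emergency_brake(is_person_ahead: bool, light_is_green: bool) -> bool:
        return is_person_ahead                         # "if person, then stop" -- the light is irrelevant

    def broken_rule(is_person_ahead: bool, light_is_green: bool) -> bool:
        return is_person_ahead and not light_is_green  # "stop unless the light is green"

    print(should_emergency_brake(True, True))   # True: brakes even on a green light
    print(broken_rule(True, True))              # False: drives through the pedestrian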
The problem is:
Define person.
A normal, rational person does have a complex algorithm for stopping in that situation. The trick is that the calculation is subconscious, so we don’t think of it as complex.
Hell, even just recognizing a human is so complex that we have problems with it. It’s why we can see faces in inanimate objects, and also why the uncanny valley is a thing.
I agree that stopping for people is of the utmost importance. Cars exist for transportation, and roads exist to move people, not cars. The problem is that, from a software POV, ensuring you can define a person 100% of the time is still a post-doctoral, research-level issue. Self-driving cars are not ready for open use yet, and anyone saying they are is either delusional or lying.
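A rough illustration of why (all numbers and names below are made up): in practice you end up thresholding a detector’s confidence, and no threshold removes the missed-person case:

    # Illustrative only: a confidence-thresholded "person" check and its failure mode.
    PERSON_CONFIDENCE_THRESHOLD = 0.3   # set low on purpose: a missed person is worse than a needless stop

    def brake_for(confidences):
        """Brake if anything person-like clears the threshold."""
        return any(c >= PERSON_CONFIDENCE_THRESHOLD for c in confidences)

    print(brake_for([0.92]))        # obvious pedestrian -> True
    print(brake_for([0.45, 0.10]))  # ambiguous shape, still brakes -> True
    print(brake_for([0.12, 0.08]))  # person the model never learned to see -> False, the dangerous case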