Guest essay by Eric Worrall
Citizen reporter Joel Johnson documents some of his hair-raising experiences in San Francisco robot taxicabs. Includes video.
Waymo self-driving robotaxi goes rogue with passenger inside, escapes support staff
We speak to man who experienced and recorded wild ride first hand
Thomas Claburn in San Francisco Mon 17 May 2021 // 20:51 UTC
A Waymo self-driving car got stuck several times, held up traffic intermittently, and departed unexpectedly when assistance arrived. The wayward autonomous vehicle was finally commandeered by a support driver.
Joel Johnson has recorded several dozen videos documenting his rides in Waymo robotaxis which he posts to his website and YouTube Channel.
…
Johnson is advised to remain seated with his seat belt fastened in case the car starts moving again, which it does: about four minutes later, the car decides to turn into the unblocked left-hand southbound lane, only to swerve back into the right-hand lane between two traffic cones after passing the “Keep Left” sign that directs drivers not to be in that lane.
“Oh, I don’t think it was supposed to do that,” Johnson said to the Waymo operator, still on the line. “…Oh now, it’s blocking the entire road.”
A few minutes later, the car reverses into the open left-hand lane.
“Okay, so we’re backing out,” said Johnson. “Very interesting.”
“So it backed out…,” the operator said.
“And then now it’s blocking the whole lane instead of half of it,” Johnson replied.
…
Read more: https://www.theregister.com/2021/05/17/waymo_robotaxi_malfunction/
The video:
Note: the video contains a section of annoying corporate voiceover; you can skip forward a bit if you get tired of listening to Waymo’s excuses, er, explanation.
This incident brings back memories of many years ago, when my mum tried to teach me to drive. She turned up unexpectedly to give me a lesson the day after a big night. It didn’t work out.
Let’s just say that if my impression that autonomous cabs struggle to match my driving skills on the day of my first driving lesson is correct, I’m going to wait a few years before trusting my life to a robot driver.
“my impression that autonomous cabs struggle to match my driving skills on the day of my first driving lesson is correct”
Maybe there’s a point of diminishing returns? Carnegie Mellon had systems driving on Pennsylvania country roads thirty years ago. Today’s commercial efforts leverage CMU’s development work, better sensors and processors, GPS and other mapping technologies, and network communications, and yet still break down in ways that CMU’s labs experienced years ago.
I suspect autonomous AI needs a lot more work, computing power and theoretical advances before it can be safe.
The human brain’s processing power is estimated at around 10^15 logical operations per second; a good desktop computer manages maybe 10^10 operations per second.
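Using those ballpark figures (both loose estimates, not measurements), the gap works out to roughly five orders of magnitude:

```python
# Rough comparison using the essay's ballpark figures. Both numbers are
# loose estimates, not measured values.
brain_ops = 1e15      # estimated logical operations per second, human brain
desktop_ops = 1e10    # estimated operations per second, good desktop computer

ratio = brain_ops / desktop_ops
print(f"Brain vs desktop: ~{ratio:,.0f}x")  # ~100,000x
```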
A sizeable fraction of the human brain’s computing power is used to make sense of what our eyes see.
Why does the brain need all that fantastic computing power to make sense of a picture? Because it conducts a lot of trial runs. Ever see a coiled rope and, just for a moment, think it is a snake? Or see a leaf skitter and think it is a spider or rat or attacking dog? That is our brain fitting an incorrect solution to the visual information. If a solution appears which suggests danger, our brain flags it as a priority possibility, and we become conscious of it – the brain trades certainty for safety. It doesn’t matter if you jump out of the way of a coiled rope. It matters a great deal if you don’t leap back when a snake attempts to strike.
The point is our brain tries out all these possibilities, all the time. Every moment of our lives our brain uses its colossal computing power to fit millions of possibilities to sensory information, and sort through which is the most likely.
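The "trades certainty for safety" idea above can be sketched as a toy scoring rule (purely illustrative; the hypotheses, likelihoods, and danger weight are all made-up numbers): each interpretation of the sensory input gets a likelihood, but dangerous interpretations are boosted so they can win even when they are less probable.

```python
# Toy sketch of "trading certainty for safety": the more dangerous
# reading of an ambiguous image gets flagged first, even though it is
# the less likely one. All values here are invented for illustration.

hypotheses = {
    "coiled rope": {"likelihood": 0.90, "danger": 0.0},
    "snake":       {"likelihood": 0.10, "danger": 1.0},
}

DANGER_WEIGHT = 20  # assumed: how strongly threat inflates priority

def priority(h):
    """Score an interpretation: likelihood, inflated by danger."""
    return h["likelihood"] * (1 + DANGER_WEIGHT * h["danger"])

flagged = max(hypotheses, key=lambda name: priority(hypotheses[name]))
print(flagged)  # "snake" wins despite being the less likely reading
```

The rope scores 0.9, the snake 0.1 × 21 = 2.1, so the snake reading reaches awareness first, which is exactly the jump-first-check-later behaviour described above.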
Autonomous cars simply don’t have the raw computer power to sort through all these possibilities. The programmers are very clever, they’ve pulled off all sorts of amazing software and hardware tricks to try to dumb down the world to the point the autonomous driver can make some sense of what they are seeing, but the problem with simplifying is sometimes you oversimplify, and when a human life hangs in the balance, in my opinion that just isn’t good enough.
Precisely analyzed, and I agree. The brain of a dog or cat can analyze in less than a second whether it is safe to jump from the couch to the floor or scratch the furniture if the critter owner is right there. No matter how big an AI system is, or how much “computing power” it has, it can’t replace the brain and memory power of a flock of geese, which can memorize a first-time winter migration route south in one trip.
Magpies have more ganglia in their brains than we mere mortal humans do. Even so, we can quickly memorize something, or recall instantly something that happened 50 years ago. Out of that, we create stories and histories and, if you will, eventually legends and mythos.
There is not one computer on the planet that can match that. Despite the cuteness of C3PO, R2D2 and DATA, those are nothing but mobile computers.
That feature of the human brain which may never be replicated is the ability to instantaneously match a pattern and make a decision without computation – what you referred to as “recall instantly something that happened 50 years ago”. Brains store many things that may rarely be used, except for that moment when the experience matches a situation and causes a reaction. Regardless of the increases in CPU speed, essentially all software – AI included – is deterministic and crunches through data to find a match. Human brains are non-deterministic, and intuition is the prime example of that.
You mean that whenever you see those videos of aeroplanes landing in severe weather conditions it is the human pilot that is in control not the automated landing system that would crash the plane. 737 Max or early demonstration flight of the fly-by-wire Airbus anyone?
I used to travel by airplane a fair bit. I never once saw a runway with a line of traffic cones, or a paving machine, or a flagger, or a road crash with police directing traffic.
No, modern airliners can land themselves. A human needs to set the right switches and push the right buttons to get it into the proper configuration, but once set, it can land itself.
However, an airplane landing is following two very precise radio signals for localizer and glideslope down a pre-determined path with presumably no other aircraft or terrain obstacles to a piece of concrete with nothing in the way.
A car is presented with a constantly changing environment with new obstacles appearing at any second.
Given a straight road and a guarantee of no traffic, engineers could certainly build a car that could travel right down the center of the lane with outstanding precision and extremely low failure rate. But real driving is an entirely different proposition.
Flying is predictable until it isn’t. It’s the exceptions which test the brain power of the pilot.
And despite all that “colossal computing power” the human brain makes deadly mistakes and has “bugs”. So where does that leave these autonomous driving systems?
It leaves them in the hands of AI programmers whose human brains make deadly mistakes and have “bugs”.
I will never ride in an autonomous car, cab or bus. If it isn’t on rails, I’m driving myself.
I would hazard a guess they were not reliant on an internet, WiFi or bluetooth connectivity as these vehicles are today.
Patrick MJD – quite true. You are going kind of where I was, thinking about diminishing returns. But I agree with Eric, who as usual summed it all up quite well.
Then there are the cases of Teslas using their autodrive function. The likely possibility is that liability lawyers will shut down “autonomous” vehicles with damage suits.
Generally when a human driven car has an accident, the blame falls on a human. There aren’t that many lawsuits against car companies (compared with the total number of accidents). For self-driving cars, there is the possibility that all accidents will involve a lawsuit against whoever programmed the self-driving software.
The car companies’ insurance bill would be huge and would raise the cost of the cars to the point where they would be unaffordable. I have visions of having to pay 15 years insurance up front included in the price of the car. Presumably the car owners’ annual premium would go down but that cost is spread out over the years.
My thought, too. The chance to sue a very deep-pocketed vehicle manufacturer for faults, or claimed faults, in software or equipment would make most of the plaintiff’s bar drool.
Isn’t that sort of the way they look at gun manufacturers as opposed to the “operators” of the guns?
Oh, they tried that in the Clinton Administration, and it was eventually banned by federal law.
Big difference there is that the guns don’t fire themselves.
That’s pretty much why light planes cost 20 times more now than they did circa 1970. About 70% of the price is the baked-in liability insurance, and this is with the 12-year limit on product liability.
Self-driving cars still aren’t as safe as human drivers. Since they’ve been seriously working on this for a while, don’t expect that to change any time soon.
I agree. I think they need to work more on self-driving cars before roads get worse.
It’s like fusion power or flying cars in your garage. We’re only about 30 years away.
LoL. But gee, sci fi has had these things forever: where’s KITT when you need him?
Yesterday a Tesla (2015 S) on auto-pilot smashed into a Washington State deputy’s car parked with its emergency lights on. Maybe the US Capitol building is ringed with fencing to keep auto-piloted cars from in-sur-recting our dear leaders.
For some reason first time I read “resurrecting”, not sure why… 😉
Have you looked at China Joe, lately?
Gropin’ Joe ain’t dead yet. Don’t rush things, Eric..
😜
Hey I saw the movie… 🙂
Same here, Eric! We could sure use some way to ‘resurrect’ some real leaders!
Where are all the AI’s that we keep getting promised?
Maybe we can brute force a chess program but anything that’s not specifically predictable is, so far, not autonomous.
The most powerful computers are starting to approach reasonable estimates for the human brain’s compute capability, so the main obstacle to general AI is holes in the theory.
For example, current AIs tend to suffer from catastrophic forgetting. If you teach an AI a skill, then teach the AI a second skill, in the process of learning the second skill it forgets the first skill. Not ideal for a candidate human level AI.
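A minimal sketch of the effect (a toy model, not a real AI system): train one shared parameter on task A, then keep training it on task B, and its error on task A climbs right back up because the second round of learning overwrites the first.

```python
# Toy illustration of catastrophic forgetting: a single shared weight
# is fit to task A, then fit to task B, and the task-B training
# destroys the task-A solution. Purely illustrative numbers.

def mse(w, data):
    """Mean squared error of the model y = w * x on (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, steps=200, lr=0.01):
    """Plain gradient descent on the squared error."""
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x, 2 * x) for x in range(1, 6)]    # task A: y = 2x
task_b = [(x, -3 * x) for x in range(1, 6)]   # task B: y = -3x

w = 0.0
w = train(w, task_a)
err_a_before = mse(w, task_a)   # near zero: task A learned

w = train(w, task_b)
err_a_after = mse(w, task_a)    # large again: task A forgotten

print(f"task A error after learning A:     {err_a_before:.4f}")
print(f"task A error after learning B too: {err_a_after:.4f}")
```

Real networks have many parameters rather than one, but the failure mode is the same in kind: the weights that encoded skill one get repurposed for skill two.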
At the root, no matter how the logic chain is created, a computer only does “If A, then B, else C.” “B” and “C” can be more “ifs” – but the chain eventually must end (or no output happens). Binary all the way down.
The human brain is analog. “Something like A, might be B, but the signal isn’t all that strong, so it could still be C. Or D, or E, or F…”
Reminds me of Mr. Toad’s Wild Ride.
“A motor car? A motor car? A motor car!!”
Ratty and Mole were not amused.
…lol, I was thinking more Metrokab, Blade Runner.
Road Runner?
Hey, I loved that stuff. I was 7. What did I know? The closest I got to self-driven stuff was the comic books at the local drugstore.
Not ready for prime time and the only way to get there is experiencing prime time.
Sure. I’ve got no problem with San Francisco allowing AI experiments, and people trying out those experiments of their own free will. Their self-sacrifice will be the foundation of our progress.
Society has advanced only because some dumb@$$ was willing to try that new berry or mushroom they found.
Thus, humans learned what to eat and what not to eat.
Science advanced by standing on the shoulders of giants. Modern civilization has advanced by standing on the shoulders of dumb@$$es.
“I triple-dog dare you to eat that.”
No one, absolutely no one can back out of a triple-dog dare.
No one under the mental age of 13.
Ah… liberals…
If you can just wish hard enough, it will happen. It’s magic! Tinkerbelle will live if we all clap our hands in front of the T.V.
“That’s it. Come on. She can hear you, kids. Don’t let Tinkerbelle die.”
And if you keep clapping your hands, in 30 years, we’ll have self-driving cars. Tinkerbelle said so.
Autonomous flying cars that run on batteries charged by a fusion reactor.
I’m running out of 30-year periods to wait for this, alas.
More progress was made, I believe, by the slightly less dumb@$$es who observed the deer eating those new berries before trying one themselves. Not always a successful survival strategy, but better than 50/50.
Yup. If you see a bird or animal eating something new to you, ya gotta figure it’s worth a shot.
And yes, sometimes the critter eating that new thing has adapted an immunity to the toxins that other critters don’t have. You’re going to win some and lose some.
So the really smart ones triple-dog dared a dumb@ss to try it “because, see? that bird is eating it.”
I’ve got no problem with San Francisco people…trying out those experiments of their own free will.
I am a recovering pedestrian, and I object. Should we all be downrange of these unguided missiles? Think of the children…!
Although big car companies are braying brave words about autonomous vehicles (and venturing significant capital in the process) I stand with those who think this product will not survive the first wave of personal injury lawsuits.
I’m dumping my Ford stock. (There may be other reasons to do this….) :>)
“…I’ve got no problem with San Francisco allowing AI experiments…”
Eric, FYI, this Waymo taxi ride occurred in Phoenix, not San Francisco. Note that the ride started near the Pheasant Ridge Safeway onto McClintock, Warner Road, Price Road, W. Park etc. The palm trees should be a clue that we are not in SF.
[Actually I have never been to Phoenix, but was able to find these landmarks easily using Google Maps :]
This occurred in Chandler Arizona, not San Francisco, Calif. I recognize the streets as I live in that area. I wasn’t aware that full autonomous driving was sanctioned yet by the city. I have only seen the Waymos with drivers…
“This occurred in Chandler Arizona, “
The starting point (Safeway Market and McClintock Rd) appears to be in Tempe, all part of greater metropolitan Phoenix. Apparently all of these little suburbs of Phoenix are separately incorporated and have their own mayors and city councils.
HAL 9000 has come to life!
“I don’t think you should turn left, Dave.”
😜
A little chill just ran up and down my spine. Just think if a voting machine did that.
As long as it doesn’t let you vote left, I guess that’s not all bad?
SIRI is the mother of the HAL9000. Just want to let you all know that. If you turn on SIRI on your phone, you are doomed.
If SIRI is the mother of HAL, is Alexa the mother of Skynet?
Computers have produced many improvements but also resulted in over-engineered cars to the extent that people no longer know how to drive.
I’m a relic of the analogue age, I enjoy the visceral experience of driving a manual direct-steering car without any newfangled driver assistance doodads.
Reliable robot cars are thankfully beyond my life expectancy.
I watched most of the video. That Waymo self-driving taxi is a sure cure for constipation and low blood pressure. As a matter of fact, my blood pressure is up, just from watching the video.
The AI is called the Joe Biden-mode. It can’t take questions, only knows how to turn Left, and heads for the nearest underground garage when reporters show up.
… and tries to run UP stairs when there is no need.
Why bother with AI cars? We need taxi driver jobs for all the people who used to work in industry but couldn’t “learn to code”.
I like the idea of taxis just driving off on their own…
I sometimes wonder how – if – these things will cope with the UK road system, which in some places was verifiably laid out during the Bronze Age.
Last weekend I came across a set of temporary traffic lights where both ends had turned red and was able to use judgement to drive through… can’t see autonomous vehicle doing that.
Good point Griff.
There’s a long list of problems which involve value judgements, like if the autonomous vehicle is in a horrible situation, where the choice is to run into a group of school children or run into a bridge, should it save the kids, or should it save the occupant of the vehicle?
Would someone buy an autonomous vehicle which under some circumstances was programmed to let them die, even if in the horrible situation I described, a lot of humans would choose their own death over killing lots of kids?
Or take the famous Miracle on the Hudson, in which the pilot Sully decided that the only possible landing place for his big airliner was the Hudson River?
Big list of unresolved issues, quite apart from the issue that they seem kind of defective.
But miracles DO happen, Eric. I actually had to give Griff an upvote!
I am sure there was a story of two people in a Tesla on auto-pilot that crashed and burst into flames, in California IIRC. Apparently, when the fire was put out and the wreckage was examined, there was no body in the driver’s seat. There was a body in the front passenger seat and one in the rear passenger seat.
Why is the passenger wearing a mask on his own in the taxi?
That thought occurred to me also. Of course, here in the UK you have to wear a mask when using any public transport, maybe it’s the same in the US. The rules probably don’t make an exception for driverless taxis.
I’m also pretty amazed that they have driverless taxis in the US. One thing that probably helps is the almost complete lack of pedestrians walking the pavements. It would be quite different in the UK, where many people still actually walk to get from A to B. I think it will be many years before the AI can safely cope with all the messy stuff you’d expect on UK minor roads – if ever.
Chris
If you go to the youtube comments, you get your answer – apparently Waymo requires masks even if you’re riding in the vehicle alone.
Not the best thing to be in on the way to a maternity hospital.
Are you locked in to these things?
saveenergy: “Are you locked in to these things” Assuming that you are talking about the car (and not some strange device used on the poor woman in labor), it may not matter if it isn’t stopping. Plan on picking up the OB on the way, or even first. That way, they’ll be in the car with you.
Oh come on. It’s Waymo’ fun.
Future self-driving public cars will be told “Go park yourself” as soon as the passenger gets out, whether or not it was the right destination. Autonomous self-driven commuter rail is next, and a good reason to stay away from it.
Yes, people, this IS the wave off of the future, pun intended. If I am ever offered a self-driven vehicle, my response will be “Where is the OFF button for that?”
When it comes to autonomous ANYTHING, my first question is “how do you turn it off”.
Once we lose that ability – well, there’s movies about that.
This sounds like my last ride on a big roller coaster. It was a lot of fun but I never want to do it again. 😉
“departed unexpectedly when assistance arrived”
I feel for the poor guy stuck in this thing, but I couldn’t help but laugh at that.
Notice that the passenger is wearing a mask while totally isolated from others, but places himself in grave danger by riding in an autonomous taxi that clearly isn’t ready for prime time. He then laughs about the taxi’s incompetent driving. The taxi isn’t the only CLUELESS thing in the video.
in fairness to the passenger, he’s wearing a mask because the taxi company requires it to ride in their vehicles, even if alone.
But yeah, he’s so focused on the taxi that he doesn’t seem to realize how much the vehicle is possibly endangering others on the road. It’s a joke to him.
This car got stuck for over 4 minutes trying to make a right turn at a stop sign, then another 6 minutes because someone left a stray construction cone in the road. All on a bright sunny day with good visibility on wide, flat, mostly straight roads with little traffic.
Most human drivers, faced with a traffic cone in the road, would glance in the rear-view mirror and check for oncoming traffic, then simply drive around the cone when the coast was clear. If construction workers or equipment were present, the driver would naturally avoid them.
What would such a car do in more difficult driving conditions, such as rush hour in a congested city, or on curvy or hilly roads in rain, snow, or fog? Would it know that braking distances increase on slippery roads or going downhill? What would it do if it started skidding or spinning? Would it recognize a stop sign that might be hidden behind tree leaves during the warm season? Would it recognize a pedestrian crossing, and realize that people might step into the crosswalk? Would it recognize the color of a traffic light if there was blazing sunlight behind it? What would it do if it encountered a sign it was not programmed to read? What would it do for an unexpected obstacle in the road, such as an animal crossing or an object that may have fallen out of a truck? What if it encountered a situation that wasn’t programmed into its GPS, such as a detour, construction zone, flooded road, high crosswinds, or other temporary hazard?
This video clearly shows that self-driving cars are no match for even the most inexperienced or reckless human drivers. They can be easily stalled by relatively benign situations, such as a construction cone in an otherwise clear road, and fail to recognize dangerous situations (such as the self-driving car that ran over a pedestrian in Phoenix). They are much too dangerous to anything or anyone in their path to be allowed on public roads.
What to expect – it is likely running on bio-ethanol – what a party that was!
Time for the cops to breathalyze those autonomous cars!
And I can hardly wait for tonight’s midnight grand unveiling of the F-150 “Lightning”, which is estimated to retail for about $70k for the base model (about 2.4x the regular gas powered one). The battery alone could weigh close to a ton. When the “Lightning” strikes your wallet, there won’t be much left. Oh yeah, tradespeople will be lining up to buy that thing. Not.
City driving is one of the most complex things that humans do. In this case it sounds like a bewildering array of red cones. I would stick to automatic driving on rural Interstates for a while.
But I recall the first DARPA 100 mile automatic car desert race. Nobody got 12 miles. We have come a long, long way.
I will stick to driving my stick shift wherever and whenever I want to, and will use an automatic driving car only once, for my journey into eternity.
These cars will cause total gridlock. Main reason is that they are programmed to be very cautious, so default behavior is to stop or not proceed unless absolutely sure.
I do not think that AI can deal with matters of courtesy in a traffic situation, as this requires access to a means of communication outside the ability of AI to comprehend, such as a flashing light, a waved or pointing hand, or even a smile in agreement and a thank you.
Traffic in the U.K. relies on this courtesy to keep things moving in many situations.
What worries me amongst other things, is meeting up with an AI vehicle with right of way firmly built into its system which prevents it doing the sensible thing to get things moving. The more there are the worse it will get.
I wish someone would look at the advances in lidar, video, and other sensors and apply them to a much simpler system: stoplights. Could we stop making cars idle at red lights when there are no cars visible on the crossing road? There are all kinds of traffic control optimizations possible while maintaining some minimum timing constraints. When traffic is backing up on a cross street, increase that street’s green light time. This is possible without reducing safety. Imagine a system that is always trying to optimize traffic flow and synchronize the lights to minimize idling time.
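The rule being proposed could be sketched like this (hypothetical thresholds, just to make the idea concrete): scale each approach’s green time by its sensed queue, but clamp it between a safety floor and ceiling so cross traffic and pedestrians are never starved.

```python
# Hypothetical sketch of demand-responsive signal timing. Green time for
# each approach is proportional to its sensed queue length, clamped
# between a minimum and maximum so no direction is ever starved.
# All constants are assumed values for illustration.

MIN_GREEN = 10   # seconds, safety floor
MAX_GREEN = 60   # seconds, safety ceiling

def green_time(queue_len, sec_per_car=2.5):
    """Seconds of green for an approach with queue_len waiting cars."""
    wanted = queue_len * sec_per_car
    return max(MIN_GREEN, min(MAX_GREEN, wanted))

# An empty approach gets only the minimum; a long backup gets more
# green, but never beyond the ceiling.
print(green_time(0))    # 10
print(green_time(12))   # 30.0
print(green_time(50))   # 60
```

The clamping is the safety argument in the comment above: the optimization only redistributes green time within fixed bounds, so the worst case for any road user is no worse than a dumb fixed-cycle light.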