City officials in San Francisco aren’t big fans of GM’s Cruise self-driving taxis, which roam areas of the city late at night. They recently filed a request with the California Public Utilities Commission to hold back Cruise’s requests to expand its service. The issues around that letter will be discussed in an upcoming article, but one particular issue stands out: complaints from the fire department about events that took place as Cruise cars drove past two fire scenes.
At the first scene, in June of 2022, the SFFD claims a Cruise car didn’t stop at the fire scene and drove over an operating fire hose, a violation of the California Vehicle Code and, according to officials, a risk to firefighters. That set the stage for a more dramatic incident on Jan 21 of this year. The city states, “The Cruise AV entered the area of active firefighting and approached fire hoses on the ground. Firefighters on the scene made efforts to prevent the Cruise AV from driving over their hoses and were not able to do so until they shattered a front window of the Cruise AV.”
Sources at Cruise claim the vehicle on Jan 21 did detect the active fire scene and followed its standard procedure, which is to try to pull over. However, it could not find a place to pull over within a short distance, and so moved to its next plan, a “safe stop,” entering what the regulations call a “minimal risk condition.” This can and does mean just stopping dead in the road, which is not an ideal situation. Cruise and other companies in this state then dispatch a field staff member to go to the car and drive it out manually, which of course takes time. The exact timeline of when the window was broken is not clear. One report says the car was “inching forward” even as firefighters stood in front of it, and that this caused them to break the window. Cruise says its logs dispute that report, and that the vehicle was stationary when the window was broken.
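The fallback procedure described above can be sketched as a small state machine. This is a hypothetical illustration of the logic, not Cruise’s actual software; the names, distance limit, and states are all assumptions:

```python
from enum import Enum, auto

class AVState(Enum):
    DRIVING = auto()
    PULLING_OVER = auto()
    SAFE_STOP = auto()  # "minimal risk condition": stopped dead in the lane

def respond_to_emergency_scene(pullover_spot_found: bool,
                               distance_searched_m: float,
                               search_limit_m: float = 50.0) -> AVState:
    """Hypothetical fallback logic: try to pull over; if no spot is
    found within a short distance, stop in the lane and wait for a
    field staff member to drive the car out manually."""
    if pullover_spot_found:
        return AVState.PULLING_OVER
    if distance_searched_m >= search_limit_m:
        return AVState.SAFE_STOP  # blocks the road until rescue arrives
    return AVState.DRIVING        # keep searching for a spot
```

The key point is that the “safe stop” is a last resort that trades blocking the road for the certainty of not hitting anything.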
How should this go down?
The testing of uncrewed robotaxis on city streets is a necessary step to their deployment. And this testing is going to mean problems occur, and they are fixed — that is the point. If the testing creates significant risk, it may not be time for it, but absent an effort to greatly delay or ban the technology, this is going to happen.
Based on Cruise’s report, the SFFD acted rashly, breaking the window for no reason; the car was stopped and not going anywhere. On the other hand, a prior incident had given firefighters reason to worry, and it may be that the car decided to stop only moments after they broke its window, since it might have been just about to give up trying to pull over. Because that all happened in a short time, they may have been too eager, though the earlier incident of running over the hose may explain their haste. The Cruise vehicle does detect any impacts on the vehicle, including the breaking of a window.
We can play out the following possible scenarios:
- In spite of Cruise’s assertions, the car moved forward with pedestrians directly ahead of it and needed to be stopped
- The car was on a different path, looking for a place to pull over, giving the impression that it was moving towards the firefighters and hose, and they broke its window just shortly after it stopped
- The car had already clearly stopped but firefighters were mistaken and concerned due to the past incident, and broke its window due to that concern
Minimum Risk Condition
Much of this conflict stems from the policy of going into a “minimum risk condition” which means stopping dead and waiting for rescue. Cruise, Waymo and other companies all employ remote operations teams for these situations. They can remotely look through the cameras and sensors of the car and give it strategic directions on what to do, including plotting a path for it to drive to get out. They don’t remotely drive it with a wheel — the car does the driving and the avoiding of obstacles along the path that the remote operator approves.
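The division of authority described above, where the operator plots a strategic path but the vehicle keeps doing the driving and obstacle avoidance, can be illustrated with a short sketch. This is an assumed model, not any company’s actual interface; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Waypoint:
    x: float
    y: float

def follow_approved_path(path, segment_is_clear):
    """Hypothetical split of authority: a remote operator supplies an
    approved list of Waypoints, but the vehicle itself checks each
    segment with its own perception (`segment_is_clear`) and halts the
    moment one is blocked, overriding the remote plan."""
    traveled = []
    for wp in path:
        if not segment_is_clear(wp):
            break  # the car, not the operator, makes the safety call
        traveled.append(wp)
    return traveled
```

The design choice is that a laggy or mistaken remote operator can never steer the car into an obstacle, because the car retains the final veto at each step.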
The concern is why this system didn’t work here, or in some other recent situations. Companies have not been entirely forthcoming as to why. One answer they have given is that in these early stages, they want to take the most conservative choice, and that choice is to freeze. You aren’t going to hurt anybody if you freeze, but you might block things. That is definitely not a good long term solution.
It seems that, ideally, a remote operator should have connected and been talking to the fire crew over a loudspeaker as soon as the car detected a fire scene with flashing lights, fire trucks and hoses.
Even more ideally, 911 and the SFFD should have digitally published to the vehicle companies that it was responding to a fire at that location, and Cruise and other companies should have received this signal long before the trucks got to the location of the fire, and already routed all vehicles away from that area.
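Such a digital feed would let a fleet’s router reject any route that passes near an active incident. Here is a minimal sketch of that check, using flat (x, y) coordinates in meters for simplicity; a real system would use geographic coordinates and geodesic distances, and the radius is an assumption:

```python
import math

def route_avoids_incidents(route, incidents, radius_m=150.0):
    """Hypothetical routing check: given waypoints of a candidate route
    and published incident locations, return False if any waypoint
    comes within `radius_m` of an incident, meaning a reroute is
    needed."""
    for (rx, ry) in route:
        for (ix, iy) in incidents:
            if math.hypot(rx - ix, ry - iy) < radius_m:
                return False  # too close to an emergency scene
    return True
```

With a feed like this, the car would never have to detect the fire scene visually at all; it would simply never be routed there.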
We are not in that world yet, but we will be. In fact, it seems we are not even in a world where other clear strategies apply, such as backing up to get out of the situation, or bringing in remote assistance sooner. It seems we are not in that world because in these early deployments, the companies feel it is safer to be very conservative and just resolve problems with a physical rescue team. That seems to also be what happened earlier in the month when a Waymo vehicle got stuck in heavy traffic due to a road closure on 19th Avenue, also needing a physical rescue crew. Waymo has had far fewer of these incidents than Cruise but is not immune.
Are the companies wrong to be conservative in their early tests? Should testing wait until they have a better strategy than to play it super safe? It’s clear that sometimes there will be situations where the right choice is to play it that safe, but do we have a sense of how frequent that should be, such that we would slow down testing until it clears that bar?
In the next article, we’ll consider just what sort of teething pains should be expected and tolerated, and which should not. There are not always obvious answers. The most important question is just how much risk is occurring and whether anybody is getting hurt. All driving involves risk, and all testing involves risk. Zero risk is not happening and should not be waited for. If the testing incurs similar risk to regular driving, or even modestly more, that seems reasonable. After all, we let student drivers out on the road with their erratic ways, and they slow traffic and cause problems all the time. We let newly minted drivers with their terrible safety records out on the roads as well, all in the hope of them learning how to be better drivers. This is a risk decision we have already taken.
These themes will be covered in that upcoming article. In the meantime, some recommendations:
- First responders should work together with transportation providers to build a system where they can publish locations of emergency activity, such as fires, disturbances and anything that might cause a street closure — though secret operations like police searches might be excluded. All services (navigation apps, taxis, TNCs and of course robocars) should immediately move to route around those areas, and even around the expected routes of the emergency vehicles if provided.
- Robotaxis should bring in remote operations staff more quickly whenever this system fails and the car encounters active emergency crews dealing with a situation. Those staff should be able both to speak over a loudspeaker to workers and to give advice to the car to resolve the situation, including backing up, where safe to do so.
- The city and its departments should define how frequently minor issues can be tolerated, and grant developers some wiggle room unless the risks are too high and/or too frequent. Cars stalling or double-parking and blocking roads are super-frequent events with human drivers, and should be understood as such.
- If the fire crews broke the window of the Cruise well after it had stopped, and it wasn’t interfering with fire operations, the SFFD should apologize and pay for the repair. If the Cruise was interfering with fire operations, it should be cited for any violation of the law and pay appropriate fines. If it was just an issue of bad timing and misunderstanding, everybody should chill.
As it would be slow to deploy a system for emergency dispatchers to publish the locations of emergencies in digital form (they are not early adopters, even though this would also be highly useful to them), I expect that one of the many capable AI companies in this game, the most obvious being Google, could build a tool that listens to the existing feeds (which start as audio), understands them, and extracts calls and addresses. They could start with only fire and ambulance calls to avoid any confidential police calls, at least until police reliably learn a magic word to mark dispatch orders as confidential. These feeds would not go to the public, but only to routing software, though a movie villain could watch for changes in routes to determine when the police are on to him, if other clues are not enough. None of this information is strictly secret after the fact; anything with sirens and lights is very public, though agencies might like to imagine it isn’t. If need be, the feeds could be given only to trusted companies and not made available in public navigation apps, though frankly, having Waze redirect most of the cars on the road away from emergency congestion scenes seems well worth any data exposure risk.
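The downstream step of such a tool, pulling the call type and address out of a transcribed feed, is the easy part. Here is a toy sketch of that extraction step; the pattern, call types and address grammar are all illustrative assumptions, not a real dispatch format, and the hard speech-to-text and geocoding stages are omitted:

```python
import re

# Hypothetical extraction applied after speech-to-text has transcribed
# the dispatch audio. Only fire/medical call types are matched, per the
# idea of excluding police traffic entirely at first.
CALL_RE = re.compile(
    r"(?P<type>structure fire|vehicle fire|medical)\s+at\s+"
    r"(?P<address>\d+\s+[A-Za-z0-9 ]+?(?:Street|Avenue|Boulevard))",
    re.IGNORECASE,
)

def extract_calls(transcript: str):
    """Return (call_type, address) pairs from a transcribed feed,
    ignoring anything that isn't a recognized fire/medical call."""
    return [(m.group("type").lower(), m.group("address"))
            for m in CALL_RE.finditer(transcript)]
```

Each extracted address would then be geocoded and fed to the routing-avoidance check, so fleets steer clear of the block before the trucks even arrive.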