We all know that you are supposed to make sure that you don’t end up with the proverbial hot potato.
The hot potato gambit appears to trace its roots to at least the late 1800s, when a parlor game involving lit candles got the ball rolling. Players would typically sit in a row of adjacent wooden chairs and play what was, in that era, an altogether rousing game. A lit candle would be handed from person to person, representing what we later opted to phrase as handing over a hot potato.
It was customary that each person had to recite a popular rhyme aloud before they could pass along the steadily burning candle. The rhyme apparently went like this:
“Jack’s alive and likely to live;
If he dies in your hand, you’ve a forfeit to give.”
This rhyming recital would presumably give the candle a wee bit of time to continue burning down. Whoever was stuck holding the candle when it finally went out was the person that lost the match (pun intended!).
Per the words of the rhyme, the loser had to pay the “forfeit” and thus usually had to exit any further rounds of the game. This might then be combined with what we today consider everyday musical chairs, such that the person who lost the round would no longer participate in subsequent rounds (as though the music stopped and they were unable to garner an available seat). Ultimately, just two people would be left passing the lit candle and saying the rhyme, until a final winner was determined by the final extinguishment.
You might be wondering why we no longer play this game with a lit candle, and why we instead typically refer to this as a hot potato rather than depicting this as a “lit candle” scheme. Researchers have come up with lots of theories on how this gradually transpired. History seems cloudy and undecided about how things evolved in this matter. I suppose we can be relieved that lit candles aren’t commonly being used like this since the chances of something going palpably awry would seem abundantly worrisome (someone drops the candle and starts a fire, or someone gets burned by the candle when being handed it from another player, etc.).
In terms of the hot potato as a potential substitute for the lit candle, you could generally argue that the potato is going to be somewhat safer overall. No open flame. No melting wax. The potential basis for using potatoes in this context is that they are known to readily retain heat once they have been warmed up. You can pass the potato around and it will remain hot for a while. One supposes that deciding when the hot potato is no longer hot and instead is rated as cold would be a heatedly debatable proposition.
Of course, the notion of a proverbial hot potato is more of a standalone consideration these days. Anything that is rated or ranked as a hot potato is usually of an earnestly get-rid-of quality. You don’t want to be holding a hot potato. You want to make sure it goes someplace else. To some degree, you might not be overly bothered about where it goes, simply that it is no longer in your possession.
You would seem quite callous to potentially hand a hot potato to a dear friend or similar acquaintance. This would seem entirely out of sorts. Perhaps find someone else or someplace else to put that hot potato, if you can do so. A desperate move might be to force the hot potato onto an affable colleague, but this hopefully is only done as a last resort.
The other side of that coin is that you might delight in handing a hot potato to someone you don’t like or whom you are seeking revenge upon. Sure, a hot potato can be nearly gloriously handy if you are aiming to undercut a person that has treated you poorly. Let them figure out what to do about the hot potato. Good riddance to the potato and worst of luck to the person you’ve tagged it with.
In a hot potato scenario involving just two people, there is the possibility of a rapid back-and-forth contention regarding which person is holding the unsavory and unwelcomed item. For example, I hand over the hot potato to you, and you hurriedly hand it back to me. Assuming that we don’t need to announce a nursery rhyme between each handoff, we can pretty much just pass along the potato as fast as our arms allow us to do so.
You might be curious as to why I have opted to do a deep dive into the revered and oft-cited hot potato.
Turns out that the hot potato guise is increasingly being used in the field of Artificial Intelligence (AI).
Most people know nothing about it and have never heard of it. Even many AI developers aren’t cognizant of the matter. Nonetheless, it exists and seems to be getting used in really questionable settings, especially instances involving life-or-death circumstances.
I refer to this as the AI Hot Potato Syndrome.
There are lots of serious repercussions underlying this syndrome and we need to make sure that we put on our AI Ethics thinking caps and consider what ought to be done. There are sobering ethical considerations. There are bound to be notable legal implications too (which haven’t yet reached societal visibility, though I predict they soon will). For my ongoing and extensive coverage of AI Ethics, Ethical AI, and Legal AI issues, see the link here and the link here, just to name a few.
Let’s unpack the AI Hot Potato Syndrome.
Imagine an AI system that is working jointly with a human. The AI and the human are passing control of some underway activity such that at times the human is in control while at other times the AI is in control. This might at first be done in a, shall we say, well-mannered or reasonable way. For various reasons, which we’ll get into momentarily, the AI might computationally ascertain that control needs to be hurriedly passed over to the human.
This is how the hot potato comes to endanger life in the real world rather than merely serving as an instructive children’s game.
The problem with a hurried passing of control from the AI to the human is that this can be done in a reasonable fashion or can be accomplished in a rather unreasonable way. If the human is not particularly expecting the handover, this is likely a problem. If the human is generally okay with the passing of control, the circumstances underlying the handover can be daunting when the human is given insufficient time or insufficient awareness of why the control is being force-fed into their human hands.
We will explore examples of how this can produce life-or-death peril for the human and possibly other nearby humans. It is serious stuff. Period, full stop.
Before getting into some more of the meaty facets of the wild and woolly considerations underlying the AI Hot Potato Syndrome, let’s lay out some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).
You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.
One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to right the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.
On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that, by a form of reasoned convergence, we are finding our way toward a general commonality of what AI Ethics consists of.
First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.
For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users.
As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including a set devised by researchers who examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), which my coverage explores at the link here and which led to this keystone list:
- Justice & Fairness
- Freedom & Autonomy
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Let’s also make sure we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
Let’s keep things more down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat invoke the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
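To make the biases-in point concrete, here is a minimal sketch of computational pattern matching on historical decision data. The data and the naive majority-vote “matcher” are wholly hypothetical and far simpler than real ML/DL, but they illustrate how a biased history gets mathematically mimicked when applied to new cases:

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, qualified, approved).
# The human decision-makers approved qualified applicants in group "A"
# far more often than equally qualified applicants in group "B".
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Training": tally the historical approval rate per (group, qualified) pattern.
counts = defaultdict(lambda: [0, 0])  # pattern -> [approvals, total]
for group, qualified, approved in history:
    counts[(group, qualified)][0] += int(approved)
    counts[(group, qualified)][1] += 1

def predict(group, qualified):
    # Majority-vote pattern match against the historical data.
    approvals, total = counts[(group, qualified)]
    return approvals / total >= 0.5

# Two equally qualified new applicants get different outcomes,
# purely because the historical data was biased.
print(predict("A", True))  # True  -> approved
print(predict("B", True))  # False -> denied
```

The model is doing exactly what it was asked to do, faithfully reproducing the patterns in the data, which is precisely how the submerged bias persists.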
Let’s return to our focus on the hot potato and its potentially disastrous use in AI. There is also a fiendishness that can lurk within the hot potato ploy too.
As a quick recap about the AI manifestation of the hot potato gambit:
- AI and a human-in-the-loop are working jointly on a given task
- AI has control some of the time
- The human-in-the-loop has control some of the time
- There is some form of handoff protocol between the AI and the human
- The handoff might be highly visible, or it might be subtle and almost hidden
- This is all usually within a real-time context (something actively is underway)
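The handoff protocol items above can be sketched as a tiny state machine. This is purely an illustration of my own devising (the class, field names, and notice value are assumptions, not any deployed system’s design):

```python
import time

class ControlHandoff:
    """Toy model of control passing between an AI and a human-in-the-loop."""

    def __init__(self):
        self.controller = "AI"       # who currently has control
        self.handoff_time = None     # when control last changed hands
        self.notice_seconds = None   # advance warning given to the human

    def handoff_to_human(self, notice_seconds):
        # Record the handover moment and how much advance notice the human
        # received; a real system would also audibly/visually alert the human.
        self.handoff_time = time.monotonic()
        self.notice_seconds = notice_seconds
        self.controller = "HUMAN"

handoff = ControlHandoff()
handoff.handoff_to_human(notice_seconds=3e-9)  # a few nanoseconds of "notice"
print(handoff.controller)  # HUMAN -- technically true, practically hollow
```

Notice that nothing in the protocol itself prevents a sham handover: the `controller` field flips to the human regardless of whether the notice given was remotely sufficient.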
The primary focus herein is when the handoff is essentially a hot potato and the AI opts to suddenly hand control over to the human. Please note that I will also later on herein cover the other facet, namely the human handing control over to the AI as a hot potato.
First, consider what can happen when the AI does a hot potato handoff to a human-in-the-loop.
I am going to refer to the human as the human-in-the-loop because the human is already part and parcel of the underway activity. We could have other scenarios whereby a human that wasn’t especially involved in the activity, perhaps a stranger to the whole matter, gets handed the hot potato by the AI, so do keep in mind that other flavors of this milieu exist.
If I was handing you a hot potato and wanted to do so in a reasonable manner, perhaps I would alert you that I am going to hand things over to you. Furthermore, I would try to do this if I genuinely believed that you possessing the hot potato was better overall than my having it. I would mentally calculate whether you should have it or whether I should continue with it.
Envision a basketball game. You and I are on the same team. We are hopefully working together to try and win the game. There are just a few seconds left on the clock and we need desperately to score otherwise we will lose the game. I get into position to take the last shot. Should I do so, or should I pass the ball to you and have you take the last shot?
If I am a better basketball player and have a greater chance of sinking the shot, I probably should keep the basketball and try to make the shot. If you are a better basketball player than me, I probably should pass the ball to you and let you take the shot. Other considerations come to the fore, such as which of us is in a better position on the court to take the shot, plus whether one of us is exhausted since the game is nearly over and thus might not be up-to-par on their shooting. Etc.
With all those factors in the midst of the harried moment, I need to decide whether to keep the ball or pass it along to you.
Keenly realize that in this scenario the clock is crucial. You and I are both confronted with an extremely timely response. The whole game is now on the line. Once the clock runs out, we either have won because one of us made the shot, or we have lost since we didn’t sink it. I could maybe be the hero if I sink the basket. Or you could be the hero if I pass the ball to you and you sink it.
There is a goat side, or downside, to this too. If I keep the ball and miss the shot, everyone might accuse me of being the goat or letting the entire team down. On the other hand, if I pass the ball to you and you miss the shot, well, you become the goat. This might be entirely unfair to you in that I forced you into being the one to take the last shot.
You would definitely know that I put you into that off-putting position. And though everyone could see me do this, they are bound to only concentrate on the last person that had the ball. I would possibly skate free. No one would remember that I passed you the ball at the last moment. They would only remember that you had the ball and lost the game because you didn’t make the shot.
Okay, so I pass the ball over to you.
Why did I do so?
There is no easy way to determine this.
My true intentions might be that I didn’t want to get stuck being the goat, and so I opted to put all the pressure onto you. When asked why I passed the ball, I could claim that I did so because I thought you were a better shooter than me (but, let’s pretend that I don’t believe that at all). Or that I thought you were in a better position than I was (let’s pretend that I didn’t think this either). Nobody would ever know that I was actually just trying to avoid getting stuck with the hot potato.
From the outside view of things, no one could readily discern my true rationale for passing the ball to you. Maybe I innocently did so because I believed you were the better player. That’s one angle. Perhaps I did so because I didn’t want everyone to call me a loser for possibly missing the shot, thus I got the ball to you and figured it was a huge relief for me. Whether I genuinely cared about you is an entirely different matter.
We are now able to add some further details to the AI-related hot potato:
- The AI opts to give control to the human-in-the-loop at the last moment
- The last moment might already be far beyond any human-viable action
- The human-in-the-loop has control but somewhat falsely so due to the handover timing
Mull this over for a moment.
Suppose an AI system and a human-in-the-loop are working together on a real-time task that involves running a large-scale machine in a factory. The AI detects that the machinery is going haywire. Rather than the AI continuing to retain control, the AI abruptly hands control over to the human. The machinery in this factory is speedily going toward pure mayhem and there is no time left for the human to take corrective action.
The AI has handed the hot potato over to the human-in-the-loop, jamming the human up with the veritable hot potato such that the circumstances are no longer humanly possible to cope with. Tag, you’re it, goes the old line from playing tag games as a child. The human is, shall we say, tagged with the mess.
Just like my example about the basketball game.
Why did the AI do the handover?
Well, unlike when a human abruptly hands over a basketball and then does some wild handwaving about why they did so, we can usually examine the AI programming and figure out what led to the AI doing this kind of hot potato handover.
An AI developer might have decided beforehand that when the AI gets into a really bad predicament, the AI should proceed to give control to the human-in-the-loop. This seems perfectly sensible and reasonable. The human might be “the better player” on the field. Humans can use their cognitive capabilities to potentially solve whatever problem is at hand. The AI has possibly reached the limits of its programming and there is nothing else constructive that it can do in the situation.
If the AI had done the handover with a minute left before the machinery went kablam, perhaps a minute’s heads-up is long enough for the human-in-the-loop to rectify things. Suppose though that the AI did the handover with three seconds left. Do you think a human could react in that time frame? Unlikely. In any case, to put the matter beyond quibbling, suppose that the handoff to the human-in-the-loop occurred with a few nanoseconds left to go (a nanosecond is one billionth of a second; by comparison, a fast blink of the eye is a sluggish 300 milliseconds).
Could a human-in-the-loop sufficiently react if the AI has handed the hot potato with mere teensy-weensy split seconds left to take any overt action?
The handoff is more of a falsehood than it might otherwise appear to be.
In reality, the handoff is not going to do any good when it comes to the dire predicament. The AI has pinched the human into becoming the goat.
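One way to see the hollowness is a back-of-the-envelope feasibility check comparing the handover lead time to a minimal human response time. The threshold below is an illustrative assumption of mine for the sake of argument, not an established standard; real response times depend on the task, the operator’s training, and how they are alerted:

```python
# Assumed minimum time for a human to perceive, decide, and act on a handover.
# This figure is a hypothetical placeholder, not a published standard.
MIN_HUMAN_RESPONSE_SECONDS = 10.0

def handover_is_actionable(seconds_until_disaster):
    # A handover only meaningfully transfers control if the human has
    # enough remaining time to do something constructive with it.
    return seconds_until_disaster >= MIN_HUMAN_RESPONSE_SECONDS

print(handover_is_actionable(60.0))   # True: a minute's heads-up might suffice
print(handover_is_actionable(3.0))    # False: three seconds is likely too little
print(handover_is_actionable(3e-9))   # False: nanoseconds are a sham handover
```

A check of roughly this sort, however the threshold is calibrated, is exactly what a hot potato handoff design omits.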
Some AI developers do not think about this when they devise their AI. They blissfully (and wrongly) fail to take into account that time is a crucial factor. All they do is program a handover for when things get tough. When there is nothing left for the AI to do constructively, toss the ball to the human player.
AI developers might fail to give any devoted thought to this at the time of coding the AI, and they then often compound the failure by not doing the testing that would bring it to light. All that their testing shows is that the AI “dutifully” did a handoff when the limits of the AI were reached. Voila, the AI is presumed to be good and ready to go. The testing didn’t include an actual human placed into that unenviable and impossible position. There wasn’t a proper human-in-the-loop testing process that might have protested that this blink-of-an-eye handoff at the last moment, or indeed past the last moment, did them little or no good.
Of course, some AI developers will have astutely considered this type of predicament, wisely so.
After mulling over the conundrum, they will proceed to program the AI to act this way, anyway.
Because there is nothing else to do, at least in their mind. When all else fails, hand control to the human. Maybe a miracle will occur. The gist though is that this isn’t of concern to the AI developer, and they are giving the human the last chance to cope with the mess at hand. The AI developer washes their hands of whatever happens thereafter.
I want to clarify that AI developers are not the sole devisers of these hot potato designs. There is a slew of other stakeholders that come to the table for this. Perhaps a systems analyst that did the specifications and requirements analysis had stated that this is what the AI is supposed to do. The AI developers involved crafted the AI accordingly. The AI project manager might have devised this. The executives and management overseeing the AI development might have devised this.
Everyone throughout the entirety of the AI development life cycle might have carried forward this same hot potato design. Whether anyone noticed it, we can’t say for sure. Those who did notice might have been labeled naysayers and shunted aside. Others might have had the matter brought to their attention but didn’t comprehend the repercussions. They felt it was a technical bit of minutia that was not within their scope.
I will add to this informal list of “reasons” a much more nefarious possibility.
The AI Hot Potato Syndrome is sometimes intentionally employed because those making the AI want to have their impassioned claim to plausible deniability.
Get yourself ready for this part of the tale.
In the case of the factory machinery that goes haywire, there is bound to be a lot of finger-pointing about who is responsible for what happened. In terms of operating the machinery, we had an AI system doing so and we had a human-in-the-loop doing so. These are our two basketball players, metaphorically.
The clock was running down and the machinery was on the verge of going kaboom. Let’s say that you and I know that the AI did a handover to the human-in-the-loop, doing so with insufficient time left for the human to take any sufficient action to rectify or avoid the disaster. Nobody else realizes this is what took place.
The firm that makes the AI can in any case immediately declare that they aren’t at fault because the human had control. According to their impeccable records, the AI was not in control at the time of the kaboom. The human was. Therefore, clearly, it is patently obvious that a human is at fault.
Is the AI company basically lying when making this outspoken assertion?
No, they seem to be telling the truth.
When asked whether they are sure that the AI wasn’t in control, the company would loudly and proudly proclaim that the AI was not at all in control. They have documented proof of this assertion (assuming that the AI kept a log of the incident). In fact, the AI company executives might raise their eyebrows in disgust that anyone would challenge their integrity on this point. They would be willing to swear on their sacred oath that the AI was not in control. The human-in-the-loop had control.
I trust that you see how misleading this can be.
Yes, the human was handed the control. In theory, the human was in control. The AI was no longer in control. But the lack of available timing and notification pretty much makes this an exceedingly hollow claim.
The beauty of this, from the AI maker’s perspective, would be that few could challenge the claims being proffered. The AI maker might not release the logs of the incident. Doing so could give away the rigged situation. The logs are argued as being Intellectual Property (IP) or otherwise of a proprietary and confidential nature. The firm would likely contend that if the logs were shown, this would showcase the secret sauce of their AI and deplete their prized IP.
Imagine the plight of the poor human-in-the-loop. They are baffled that everyone is blaming them for letting things get out of hand. The AI “did the right thing” and handed control over to the human. This might have been what the specifications said to do (again, though, the specs were remiss in not taking into account the timing and feasibility factors). The logs that haven’t been released but are claimed to be ironclad by the AI maker attest to the absolute fact that the human had been given control by the AI.
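To see how “the human had control” can be literally true and yet hollow, consider a hypothetical incident log. The format, fields, and timestamps here are invented for illustration; they do not reflect any actual AI maker’s records:

```python
# Hypothetical incident log entries: (timestamp_seconds, controller, event).
# Timestamps are seconds since the machinery run began.
log = [
    (0.0,           "AI",    "AI assumes control"),
    (119.999999997, "HUMAN", "AI hands control to human-in-the-loop"),
]
failure_t = 120.0  # the moment of the kaboom

# The maker's (technically true) claim: who had control at the failure?
last_before_failure = max(e for e in log if e[0] <= failure_t)
controller_at_failure = last_before_failure[1]

# The omitted detail: how long did the human actually hold control?
human_control_duration = failure_t - last_before_failure[0]

print(controller_at_failure)           # HUMAN
print(human_control_duration < 0.001)  # True: mere nanoseconds of "control"
```

The log honestly answers the question “who had control?” while entirely obscuring the question that matters, namely whether the control was humanly usable.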
You could declare this as a pitiful slam-dunk on the baffled human that is almost certainly going to take the fall.
The odds are that only if this goes to court would the reality of what took place end up being revealed. If the shrewd legal beagles are aware of this type of gig, they would try to legally obtain the logs. They would need to get an expert witness (something I’ve done from time to time) to decipher the logs. The logs alone might not be enough. The logs could be doctored or altered, or purposely devised to not showcase the details clearly. As such, the AI code might need to be delved into too.
Meanwhile, all throughout this agonizing and lengthy process of legal discovery, the human-in-the-loop would look really bad. The media would paint the person as irresponsible, as having lost their head, as having failed to be diligent, and as someone who ought to be held fully accountable. Possibly for months or years, during this process, that person would still be the one that everyone pointed an accusing finger at. The stench might never be removed.
Keep in mind too that this same circumstance could easily happen again. And again. Assuming that the AI maker didn’t change up the AI, whenever a similar last-minute situation arises, the AI is going to do that no-time-left handoff. One would hope that these situations aren’t happening frequently. On the rare occasions where it occurs, the human-in-the-loop is still the convenient fall guy.
It is a devilish trick.
You might want to insist that the AI maker has done nothing wrong. They are telling the truth. The AI gave up control. The human was then considered in control. Those are the facts. No sense in disputing them.
Rarely does anyone wise up and ask the tough questions, and it seems rarer still that the AI maker answers those questions in any straightforward way:
- When did the AI do the handover to the human-in-the-loop?
- On what programmed basis did the AI do the handover?
- Was the human-in-the-loop given sufficient time to take over control?
- How was the AI designed and devised for these quandaries?
- And so on.
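Answering those questions hinges on telemetry that captures the timing facts. As a purely illustrative sketch (every field and name here is a hypothetical assumption, not drawn from any actual AI system or vendor), a handover audit record might look like this:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class HandoverRecord:
    """Hypothetical audit record for an AI-to-human control handover."""
    handover_time_s: float           # when the AI relinquished control (epoch seconds)
    predicted_impact_time_s: float   # AI's estimated time of the adverse event
    trigger_reason: str              # the programmed basis for the handover
    human_ack_time_s: Optional[float]  # when (if ever) the human acknowledged control

    def lead_time_s(self) -> float:
        """Seconds the human actually had between handover and predicted impact."""
        return self.predicted_impact_time_s - self.handover_time_s

# Example: a handover issued only 0.2 seconds before the predicted event.
rec = HandoverRecord(
    handover_time_s=1000.0,
    predicted_impact_time_s=1000.2,
    trigger_reason="imminent-collision",
    human_ack_time_s=None,
)
print(json.dumps(asdict(rec)))
print(round(rec.lead_time_s(), 3))  # prints 0.2 -- far below any plausible human reaction time
```

A record of this kind, if honestly kept and released, would directly answer when the handover happened, on what programmed basis, and whether the human had any realistic chance to act.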
To some degree, that is why AI Ethics and Ethical AI are such crucial topics. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset, and applying it integrally to AI development and fielding, is vital for producing appropriate AI, including (perhaps surprisingly or ironically) the assessment of how AI Ethics gets adopted by firms.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.
At this juncture of this weighty discussion, I’d bet that you are desirous of some additional illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the AI Hot Potato Syndrome, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
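The level distinctions described above come from the SAE J3016 standard and can be summarized as a simple lookup, with the article's "true self-driving" cutoff made explicit (the table below is a plain restatement of the standard's level names, not any vendor's classification code):

```python
# SAE J3016 driving automation levels, summarized as a lookup table.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation (human co-shares the driving task)",
    3: "Conditional Driving Automation (human must be ready to take over)",
    4: "High Driving Automation (no human driver needed, within a bounded domain)",
    5: "Full Driving Automation (no human driver needed, anywhere)",
}

def is_true_self_driving(level: int) -> bool:
    """Per the usage herein, 'true self-driving' means Level 4 or Level 5."""
    return level >= 4

print(is_true_self_driving(3))  # False -- a Level 3 car still needs a human driver
print(is_true_self_driving(4))  # True
```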
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
Self-Driving Cars And AI Hot Potato Syndrome
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can later on be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I hope that provides a sufficient litany of caveats to underlie what I am about to relate.
For fully autonomous vehicles, there might not be any chance of a handoff occurring between the AI and a human, because there might not be any human-in-the-loop to start with. The aspiration for many of today’s self-driving car makers is to remove the human driver completely from the driving task. The vehicle will not even contain human-accessible driving controls. In that case, a human driver, if present, won’t be able to partake in the driving task since they lack access to any driving controls.
For some fully autonomous vehicles, some designs still allow for a human to be in-the-loop, though the human does not have to be available or partake in the driving process at all. Thus, a human can participate in driving, if the person wishes to do so. At no point though is the AI reliant upon the human to perform any of the driving tasks.
In the case of semi-autonomous vehicles, there is a hand-in-hand relationship between the human driver and the AI. For some designs, the human driver can take over the driving controls entirely and essentially stop the AI from partaking in the driving. If the human driver wishes to reinstate the AI into the driving role, they can do so, though this then sometimes forces the human to relinquish the driving controls.
Another form of semi-autonomous operation would entail the human driver and the AI working together in a teaming manner. The AI is driving and the human is driving. They are driving together. The AI might defer to the human. The human might defer to the AI.
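One simplistic way to picture such a teaming arrangement is as an arbitration function over the two parties' control inputs. This is an entirely hypothetical sketch under assumed rules (defer-to-human, defer-to-AI, or crude averaging), not any actual vendor's control design:

```python
def arbitrate_steering(human_cmd: float, ai_cmd: float,
                       human_engaged: bool, ai_defer: bool) -> float:
    """Toy arbitration between human and AI steering commands (radians).

    Assumed rules for this sketch:
    - If the AI has chosen to defer and the human is engaged, the human wins.
    - If the human is not engaged, the AI command stands.
    - Otherwise, average the two as a crude form of shared control.
    """
    if ai_defer and human_engaged:
        return human_cmd
    if not human_engaged:
        return ai_cmd
    return (human_cmd + ai_cmd) / 2.0

print(arbitrate_steering(0.10, 0.30, human_engaged=True, ai_defer=True))    # 0.1 (human wins)
print(arbitrate_steering(0.10, 0.30, human_engaged=False, ai_defer=False))  # 0.3 (AI wins)
print(arbitrate_steering(0.25, 0.75, human_engaged=True, ai_defer=False))   # 0.5 (shared)
```

Even this toy version makes plain why the teaming mode is fraught: which rule was in force at the moment of a crash determines who was effectively "driving."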
At some juncture, the AI driving system might computationally ascertain that the self-driving car is heading into an untenable situation and that the autonomous vehicle is going to crash.
As an aside, some pundits are going around claiming that self-driving cars will be uncrashable, which is pure nonsense and an outrageous and wrongheaded thing to say, see my coverage at the link here.
Continuing the scenario of a self-driving car heading toward a collision or car crash, the AI driving system might be programmed to summarily hand over the driving controls to the human driver. If there is sufficient time available for the human driver to take evasive action, this indeed might be a sensible and proper thing for the AI to do.
But suppose the AI does the handover with a fraction of a second left to go. The reaction time of the human driver is nowhere near fast enough to respond adequately. Plus, even if the human were miraculously fast enough, the odds are that there are no viable evasive actions that could be undertaken in the limited time remaining before the crash. This is a twofer: (1) insufficient time for the human driver to react at all, and (2) even if a reaction were somehow possible, insufficient time for any evasive action to be carried out.
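The feasibility point can be sketched as a simple timing check that the AI arguably ought to perform before handing over control. The constants here are illustrative assumptions for the sake of the sketch, not measured values from any study or system:

```python
# Assumed figures, for illustration only.
HUMAN_REACTION_S = 1.5   # assumed typical driver reaction time, in seconds
MANEUVER_S = 1.0         # assumed minimum time to execute an evasive maneuver

def handoff_is_feasible(time_to_collision_s: float) -> bool:
    """True only if the human could both react AND complete a maneuver.

    This captures the 'twofer': the handoff is pointless unless the remaining
    time covers reaction time plus the maneuver itself.
    """
    return time_to_collision_s >= HUMAN_REACTION_S + MANEUVER_S

print(handoff_is_feasible(5.0))   # True: ample time for the human to take over
print(handoff_is_feasible(0.25))  # False: the fraction-of-a-second hot potato
```

An AI that tosses control whenever this check would return False is performing the handover in name only.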
All in all, this is akin to my earlier discussion about the basketball buzzer situation and the factory machinery that went berserk scenario.
Let’s add the nefarious ingredient to this.
An automaker or self-driving tech firm doesn’t want to get tagged with various car crashes that have been happening in their fleet. The AI driving system is programmed to always toss control over to the human driver, regardless of whether there is sufficient time for the human driver to do anything about the predicament. Whenever a car crash occurs of this kind, the automaker or self-driving tech firm is able to vocally insist that the human driver was at the controls, while the AI was not.
Their track record for AI driving systems seems to be stellar.
Not once is the AI driving system “at fault” for these car crashes. It is always those darned human drivers who don’t seem to keep their eyes on the road. We might tend to swallow this blarney and believe that the all-precise AI is likely never wrong. We might tend to believe (since we know it by experience) that human drivers are sloppy and make tons of driving mistakes. The logical conclusion is that the human drivers must be the culprits, and the AI driving system is wholly innocent.
Before some self-driving advocates get upset about this characterization, let’s absolutely acknowledge that the human driver might very well be at fault and that they should have taken action sooner, such as taking over the driving controls from the AI. There is also the chance that the human driver could have done something substantive when the AI handed over the driving controls. Etc.
The focus here has been on the circumstances wherein the AI was considered the driver of the vehicle and then abruptly, with little attention to what a human driver might be able to do, tossed the hot potato to the human driver. This is also why so many are concerned about the dual driving role of semi-autonomous vehicles. You might say that there are too many drivers at the wheel. The aim, it seems, would be to settle the matter by having fully autonomous vehicles that have no need for a human at the wheel, with the AI always driving the vehicle.
This brings up the allied question about what or who is responsible when the AI is driving, which I’ve addressed many times in my columns, such as the link here and the link here.
We need to be careful when hearing or reading about car crashes involving semi-autonomous vehicles. Be wary of those that try to fool us by proclaiming that their AI driving system has an unblemished record. The conniving ploy of the AI Hot Potato Syndrome might be in the mix.
For companies that try to be tricky on these matters, perhaps we can keep near to our hearts the famous line attributed to Abraham Lincoln: “You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.”
I’ve tried to reveal herein the AI magic hidden behind the screen and at times placed under the hood, which I’ve elucidated so that more people won’t be fooled more of the time.