They say that there is an exception to every rule.
The problem, though, is that oftentimes the standing rule prevails and there is little or no allowance for an exception to be acknowledged or entertained. The average-case is applied despite the distinct possibility that an exception is at the fore. An exception doesn’t get any airtime. It doesn’t get a chance to be duly considered.
I’m sure you must know what I am talking about.
Have you ever attempted to obtain some kind of individualized customer service, only to be treated mindlessly, without any distinction for your particular case and your specific needs?
This has undoubtedly happened to you, likely countless times.
I am going to take you through a disturbing trend that is arising, namely how Artificial Intelligence (AI) is being relentlessly devised to force-fit everything into a one-size-fits-all paradigm.
Exceptions are either not detected or are bent out of shape as though they were not exceptions at all. The impetus for this is partially due to the advent of Machine Learning (ML) and Deep Learning (DL). As you will shortly see, ML/DL is a form of computational pattern matching, the likes of which is “easier” to develop and deploy if you are willing to ignore or skirt around exceptions. This is highly problematic and raises keenly notable AI Ethics concerns. For my overall ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Things don’t have to be that way. Please know that this trend is being stoked by those that are making and deploying AI while choosing to ignore or downplay the handling of exceptions within their AI concoctions.
When Exceptions Rule
Let’s first unpack the nature of the average-case versus the realization of exceptions.
My favorite example of this kind of myopically average-case, no-exceptions approach is vividly illuminated by nearly any episode of the acclaimed and still rather immensely popular TV series known as House, M.D. (usually just expressed as House, which ran from 2004 to 2012 and can be viewed today on streaming and other media outlets). The show entailed a fictional character named Dr. Gregory House who was gruff, insufferable, and quite unconventional, yet he was portrayed as a medical genius who could ferret out the most obscure of diseases and ailments. Other doctors and even patients might not have necessarily liked him, but he got the job done.
Here’s how a typical episode played out (generic spoiler alert!).
A patient shows up at the hospital where Dr. House is on staff. The patient is initially presenting somewhat common symptoms and various other medical doctors take their turns trying to diagnose and cure the patient. The odd thing is that the attempts to aid the patient either fail to improve the adverse conditions or worse still tend to backfire. The patient gets worse and worse.
Because the patient is now seen as a kind of medical curiosity, and since nobody else can figure out what the patient is suffering from, Dr. House is brought into the case. This is at times done purposely so as to tap into his medical prowess, while in other instances he hears about the case and his innate instincts draw him toward the unusual circumstances.
We gradually find out that the patient has some extremely rare malady. Only Dr. House and his team of medical interns are able to figure this out.
Now that I’ve shared with you the mainstay plotline of the episodes, let’s dive into lessons learned that illustrate the nature of the average-case versus exceptions.
The fictional stories are designed to showcase how thinking inside-the-box can at times sorely miss the mark. All the other doctors that are at first attempting to aid the patient are clouded in their thinking processes. They want to force the symptoms and presented facets into a conventional medical diagnosis. The patient is merely one of many that they have presumably seen before. Examine the patient and then prescribe the same treatments and medical solutions that they have repeatedly used throughout their medical careers.
Wash, rinse, repeat.
In one sense, you can justify this approach. The odds are that most patients will have the most common ailments. Day after day, these medical doctors encounter the same medical issues. You could suggest that the patients entering the hospital are veritably on a medical assembly line. Each one flows along the hospital’s standardized protocols as though they are parts of a manufacturing facility or assembly plant.
The average-case prevails. Not only is this generally suitable, but it also allows the hospital and the medical staff to optimize their medical services accordingly. Costs can be lowered when you devise the medical processes to handle the average-case. There is a quite famous piece of advice often drummed into the minds of medical students, namely that if you hear hoof sounds coming from the street, the odds are that you should be thinking of a horse rather than a zebra.
Efficient, productive, effective.
Until an exception sneaks into the midst.
Maybe a zebra from the zoo has escaped and has wandered down your street.
Does this mean that exceptions ought to be the rule and that we should set aside the average-case in favor of focusing exclusively on exceptions?
You would be hard-pressed to assert that all of our everyday encounters and services should be focused on exceptions rather than the average-case.
Note that I am not making such a suggestion. What I am claiming is that we ought to ensure that exceptions are allowed to occur and that we need to recognize when exceptions arise. I mention this because some pundits are apt to loudly proclaim that if you are a proponent of recognizing exceptions you must ergo be opposed to devising for the average-case.
That’s a false dichotomy.
Don’t fall for it.
We can have our cake and eat it too.
Making The Case For A Right To Be An Exception
I’ll next perhaps provide a bit of a shock that relates all of this to the burgeoning use of AI.
AI systems are increasingly being crafted to concentrate on the average-case, often to the exclusion or detriment of recognizing exceptions.
You might be surprised to know that this is happening. Most of us would assume that since AI is a form of computer automation, the beauty of automating things is that you can usually incorporate exceptions, and usually at a lower cost than if you were using human labor to perform a like service. With human labor, it might be costly or prohibitive to have on hand all manner of labor that can deal with exceptions. Things are a lot easier to manage and put into place if you can assume that your customers or clients are all of an average-case caliber. The use of computerized systems, though, is supposed to accommodate exceptions, readily so. In that way of thinking, we ought to be cheering uproariously for more computerized capabilities coming to the forefront.
Consider this mind-bending conundrum and take a moment to reflect on this vexing question: How can AI, otherwise assumed to be the best of automation, seemingly be marching inexorably down a routinized and exceptionless path, when ironically we imagined it would be going in the exact opposite direction?
Answer: Machine Learning and Deep Learning are taking us to an exceptionless existence, though not because we have to compulsorily take that path (we can do better).
Let’s unpack this.
Suppose that we decide to use Machine Learning to devise AI that will be used to figure out medical diagnoses. We collect a bunch of historical data about patients and their medical circumstances. The ML/DL that we set up tries to undertake a computational pattern matching that will examine symptoms of patients and render an expected ailment associated with those symptoms.
Based on the fed-in data, the ML/DL mathematically ascertains symptoms such as a runny nose, sore throat, headaches, and achiness are all strongly associated with the common cold. A hospital opts to use this AI to do pre-screening of patients. Sure enough, patients reporting those symptoms upon first coming to the hospital are “diagnosed” as likely having a common cold.
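To make the pattern-matching idea concrete, here is a minimal sketch in Python. The symptom records and the overlap-scoring scheme are entirely invented for illustration; real ML/DL is vastly more elaborate, but the core behavior is the same: the system can only echo patterns present in its historical data.

```python
from collections import Counter

# Toy historical records: (symptom set, diagnosed ailment).
# Purely illustrative data, not real medical records.
RECORDS = [
    ({"runny nose", "sore throat", "headache"}, "common cold"),
    ({"runny nose", "sore throat", "achiness"}, "common cold"),
    ({"runny nose", "headache", "achiness"}, "common cold"),
    ({"fever", "cough", "achiness"}, "flu"),
    ({"fever", "cough", "headache"}, "flu"),
]

def predict(symptoms: set[str]) -> str:
    """Score each historical record by symptom overlap and return
    the ailment of the best-matching records (majority vote)."""
    scored = [(len(symptoms & s), ailment) for s, ailment in RECORDS]
    best = max(score for score, _ in scored)
    votes = Counter(a for score, a in scored if score == best)
    return votes.most_common(1)[0][0]

print(predict({"runny nose", "sore throat", "headache"}))  # → common cold
```

Note that a patient with a rare ailment presenting these very same symptoms would get the very same "common cold" answer, because nothing in the historical data says otherwise.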
Shifting gears, let’s add a Dr. House kind of twist to all of this.
A patient comes to the hospital and is diagnosed by the AI. The AI indicates that the patient appears to have a common cold based on the symptoms of runny nose, sore throat, and headaches. The patient is given seemingly suitable prescriptions and medical advice for dealing with a common cold. This is all part and parcel of the average-case approach used when devising AI.
Turns out that the patient ends up having these symptoms for several months. An expert in rare diseases and ailments realizes that these same symptoms could be reflective of a cerebrospinal fluid (CSF) leak. The expert treats the patient with various surgical procedures related to such leaks. The patient recovers (by the way, this remarkable story about a patient with a CSF leak that was initially diagnosed as a common cold is loosely based on a real medical case).
We now will retrace our steps in this medical saga.
Why wasn’t the AI that was doing the intake pre-screening able to assess that the patient might have this rare ailment?
One answer is that if the training data used for crafting the ML/DL did not contain any such instances, there would be nothing therein for the computational pattern matching to match onto. Given an absence of data covering exceptions to the rule, the general rule or average-case itself will be considered as seemingly unblemished and applied without any hesitation.
Another possibility is that there was, say, an instance of this rare CSF leak in the historical data, but it was only one particular instance and in that sense an outlier. The rest of the data was all mathematically close to the ascertained average-case. The question then arises as to what to do about the so-called outlier.
Please be aware that how these outliers are dealt with differs wildly depending upon how AI developers decide to contend with the appearance of something outside of the determined average-case. There is no required approach that AI developers are compelled to take. It is a bit of a Wild West as to what any given AI developer might do in any given exception-raising instance of their ML/DL development efforts.
Here is my list of the ways that these exceptions are often inappropriately handled:
- Exception assumed as an error
- Exception assumed as unworthy
- Exception assumed as adjustable into the “norm”
- Exception not noticed at all
- Exception noticed but summarily ignored
- Exception noticed and then later forgotten
- Exception noticed and hidden from view
An AI developer might decide that the rarity is nothing more than an error in the data. This might seem odd that anyone would think this way, especially if you try to humanize it by, for example, imagining that the patient with the CSF leak is that one instance. There is a powerful temptation, though: if all of your amassed data says basically one thing, perhaps consisting of thousands upon thousands of records that are all converging to an average-case, the occurrence of one oddball piece of data can readily (lazily!) be construed as an outright error. The “error” might then be discarded by the AI developer and not considered within the realm of what the ML/DL is being trained on.
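A toy sketch of how that kind of lazy “error” discarding plays out in practice. The readings and the two-standard-deviation cutoff are invented for illustration; the point is that a routine data-cleaning step can silently erase the one genuine rarity.

```python
import statistics

# Hypothetical readings; the last value is a genuine rarity, not an error.
readings = [98.2, 98.6, 98.4, 98.7, 98.5, 98.3, 104.9]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# A common (and here, hazardous) cleaning step: drop anything more than
# two standard deviations from the mean as a presumed data "error".
cleaned = [x for x in readings if abs(x - mean) <= 2 * stdev]

print(cleaned)  # the rare 104.9 reading is gone before training ever begins
```

Once the rarity is gone from the training data, the downstream ML/DL has no chance of ever recognizing it.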
Another means of coping with an exception would be to decide that it is an unworthy matter. Why bother with one rarity when you are perhaps rushing to get an ML/DL up and running? Toss out the outlier and move on. No thought goes necessarily towards the repercussions down the road.
Yet another approach involves folding the exception into the rest of the average-case milieu. The AI developer modifies the data to fit within the rest of the norm. There is also the chance that the AI developer might not notice that the exception exists.
The ML/DL might report that the exception was detected, which then the AI developer is supposed to instruct the ML/DL about how the outlier is to be dealt with mathematically. The AI developer might put this on a To-Do list and later forget about coping with it or might just opt to ignore it, and so on.
All in all, there is no specifically stipulated or compellingly balanced and reasoned approach to detecting and resolving exceptions when it comes to AI. Exceptions are often treated like unworthy outcasts and the average-case is the prevailing winner. Dealing with exceptions is hard, can be time-consuming, requires a semblance of adroit AI development skills, and is otherwise a hassle in comparison to lumping things into a tidy one-size-fits-all package.
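One modest remedy, sketched here with invented names, data, and thresholds, is for the AI to refuse to auto-apply the average-case when an input sits too far from anything seen in training, routing it instead to a human reviewer rather than quietly forcing it into the norm:

```python
# Toy "training" examples (invented for illustration).
TRAINING = [
    {"runny nose", "sore throat", "headache"},
    {"fever", "cough", "achiness"},
]

def triage(symptoms: set[str], min_overlap: float = 0.6) -> str:
    """Return 'auto' if the input closely resembles some known case
    (Jaccard similarity >= min_overlap), otherwise 'escalate' the
    case for individualized human review."""
    best = max(len(symptoms & t) / len(symptoms | t) for t in TRAINING)
    return "auto" if best >= min_overlap else "escalate"

print(triage({"runny nose", "sore throat", "headache"}))   # → auto
print(triage({"positional headache", "clear rhinorrhea"}))  # → escalate
```

The design choice here is simply to make “I don’t recognize this” a first-class outcome instead of silently defaulting to the average-case answer.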
To some degree, that is why AI Ethics and Ethical AI are such a crucial topic. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one.
Into this particular discussion about the role of exceptions comes a provocative viewpoint that perhaps there ought to be a legal right associated with being an exception. It could be that the only viable means of getting bona fide recognition for someone possibly being an exception entails utilizing the long arm of the law.
Put in place a new kind of human right.
The right to be considered an exception.
Consider this proposal: “The right to be an exception does not imply that every individual is an exception but that, when a decision may inflict harm on the decision subject, the decision maker should consider the possibility that the subject may be an exception. The right to be an exception involves three ingredients: harm, individualization, and uncertainty. The decision maker must choose to inflict harm only when they have considered whether the decision is appropriately individualized and, crucially, the uncertainty that accompanies the decision’s data-driven component. The greater the risk of harm, the more serious the consideration” (by Sarah Cen, in a research paper entitled The Right To Be An Exception In Data-Driven Decision Making, MIT, April 12, 2022).
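One schematic way to read those three ingredients (harm, individualization, uncertainty) is as a gating rule on automated decisions. The function and thresholds below are my own hypothetical rendering, not anything prescribed by the paper:

```python
def may_decide(harm_risk: float, individualized: bool, uncertainty: float) -> bool:
    """Hypothetical gate on an automated decision, echoing the idea that
    the greater the risk of harm, the more serious the consideration.
    All inputs are on an invented 0.0-1.0 scale."""
    if harm_risk == 0.0:
        return True           # no harm at stake: proceed
    if not individualized:
        return False          # harmful yet not individualized: stop
    # Harmful decisions tolerate less uncertainty as the risk grows.
    return uncertainty <= 1.0 - harm_risk
```

Under this sketch, a high-harm decision is only permitted when it is both individualized and backed by low uncertainty, which is one plausible operationalization of the quoted proposal.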
You might be tempted to assume that we already have such a right.
Not necessarily. Per the research paper, the closest akin internationally recognized human right is likely that of individual dignity. In theory, the notion that dignity ought to be recognized, such that an individual and their specific uniqueness is encompassed, does get you within the ballpark of a potential human right of exception. One qualm is that the existing laws governing the dignity realm are said to be somewhat nebulous and overly malleable, thus not well-tuned to the specific legal construct of a right of exception.
Those that favor a new right that consists of a human right to be an exception would argue that:
- Such a right would pretty much legally force AI developers into explicitly coping with exceptions
- Firms making AI would be more legally on-the-hook for not dealing with exceptions
- AI would likely be better balanced and more robust overall
- Those using AI or subject to AI would be better off
- When AI doesn’t accommodate exceptions, legal recourse would be readily feasible
- Makers of AI are bound to be better off too (their AI would cover a wider range of users)
Those that are opposed to a new right labeled as a human right to be an exception tend to say:
- Existing human rights and legal rights sufficiently cover this and no need to complicate matters
- An undue burden would be placed on the shoulders of AI makers
- Efforts to craft AI would become costlier and tend to slow down AI progress
- False expectations would arise that everyone would demand they be an exception
- The right itself would undoubtedly be subject to differing interpretations
- Those that gain the most will be the legal profession when legal cases skyrocket
In short, the opposition to such a new right is usually arguing that this is a zero-sum game and that a legal right to be an exception is going to cost more than it beneficially derives. Those that believe such a new right is sensibly required are apt to emphasize that this is not a zero-sum game and that in the end everyone benefits, including those that make AI and those that use AI.
You can be sure that this debate encompassing legal, ethical, and societal implications associated with AI and exceptions is going to be loud and persistent.
Self-Driving Cars And The Importance Of Exceptions
Consider how this applies in the context of autonomous systems such as autonomous vehicles and self-driving cars. There have already been various criticisms about the average-case mindset of AI development for self-driving cars and autonomous vehicles.
For example, at first, very few self-driving car designs accommodated those that have some form of physical disability or impairment. There was not much thought being given to more widely encompassing a full range of rider needs. By and large, this awareness has increased, though concerns are still expressed about whether this is far enough along and as extensively embraced as it should be.
Another example of the average-case versus an exception has to do with something that might catch you off-guard.
Are you ready?
The design and deployment of many of the AI driving systems and self-driving cars of today tend to make a silent or unspoken assumption that adults will be riding in the self-driving car. We know that when a human driver is at the wheel there is of course an adult in the vehicle, by definition, since getting a license to drive usually requires being an adult (or nearly one). For self-driving cars that have AI doing all of the driving, there is no need for an adult to be present.
The point is that we can have children riding in cars by themselves without any adult present, at least this is possible in the case of fully autonomous AI-driven self-driving cars. You can send your kids to school in the morning by making use of a self-driving car. Rather than you having to give your kids a lift, or having to make use of a human driver of a ridesharing service, you can simply have your kids pile into a self-driving car and be whisked over to the school.
All is not rosy when it comes to having kids in self-driving cars by themselves.
Since there is no longer a need to have an adult in the vehicle, this implies that kids will also no longer feel influenced or shall we say controlled by the presence of an adult. Will kids go nuts and tear up the interior of self-driving cars? Will kids try to climb or reach outside the windows of the self-driving car? What other types of antics might they do, leading to potential injury and severe harm?
I’ve covered the heated debate about the idea of kids riding alone in self-driving cars, see the link here. Some say this should never be allowed. Some say it is inevitable and we need to figure out how to best make it work out.
Let’s return to the overarching theme of the average-case versus the exception.
We all seem to agree that there is always going to be some exception to the rule. Once a rule has been formed or identified, we ought to be looking for exceptions. When we encounter exceptions, we should be thinking about which rule this exception likely applies to.
Much of the AI being devised today is shaped around formulating the rule, while the challenges associated with exceptions tend to be forsaken and shrugged off.
For those that like to be smarmy and say that there are no exceptions to the rule that there are always exceptions to the rule, I would acknowledge that this witticism seems to be a mental puzzler. Namely, how can we have a rule that there are always exceptions, when that very rule would then need an exception of its own?
Makes your head spin.
Fortunately, there is no need to excessively complicate these sobering matters. We can hopefully live with the handy and vital rule-of-thumb that we should be looking out for and accommodating the exceptions to every rule.
That settles things, so now let’s get to work on it.