One of the most neglected and altogether forgotten types of discrimination is ageism.
According to the American Psychological Association (APA): “The number of Americans 60 and older is growing, but society isn’t embracing the aging population” (APA, Monitor on Psychology, Volume 34, Number 5). Furthermore, the APA says this: “Whether battling ‘old geezer’ stereotypes or trying to obtain equal standing in the workplace, those who are 60 or older may all too often find themselves the victims of ageism” (ibid).
The World Health Organization (WHO) offers this official definition of ageism:
- “Ageism refers to the stereotypes (how we think), prejudice (how we feel) and discrimination (how we act) toward others or oneself based on age” (WHO, Ageing: Ageism, March 2021).
Notice that the WHO definition allows that ageism can be directed at any age, whether young or old. For example, you can likely recall situations involving ageism toward younger people, such as a workplace remark that a newbie worker is too young to proffer work-related insights, or a seasoned manager treating a young colleague in a patronizing way on the undue assumption that someone youthful has marginal workplace acumen.
For today’s discussion, I am going to focus on ageism as directed toward older people.
Do though keep in mind that ageism can be applied to younger people and essentially anyone of any age. That being said, many tend to construe ageism as aimed at older people, and ergo I’ll cover that particular form of age discrimination herein (yikes, is that willingness to emphasize elder-oriented ageism in and of itself yet another instance of ageism?).
Research scholars tend to suggest that the recognition of ageism as a phenomenon worthy of distinctive study and analysis can be traced to the coining of the term by Dr. Robert Butler in his 1969 article entitled “Age-Ism: Another Form of Bigotry” (published in The Gerontologist, Volume 9, Number 4). His Pulitzer Prize for General Non-Fiction came later, arising from his landmark 1975 book Why Survive? Being Old In America, which garnered great attention in its time.
In Butler’s pioneering article on ageism, he recounts the numerous forms of discrimination, such as those based on race, gender, and the like, and then brings up this (at the time) startling point:
- “However, we may soon have to consider very seriously a form of bigotry we now tend to overlook: age discrimination or age-ism, prejudice by one age group toward other age groups. If such bias exists, might it not be especially evident in America; a society that has traditionally valued pragmatism, action, power, and the vigor of youth over contemplation, reflection, experience, and the wisdom of age?” (quoted from his 1969 article).
To help emphasize the powerful and damaging effects of ageism, Butler mentioned this rather harsh but perhaps accurate reflection about the topic: “Age-ism is manifested in the taunting remarks about ‘old fogeys,’ in the special vulnerability of the elderly to muggings and robberies, in age discrimination in employment independent of individual competence, and in the probable inequities in the allocation of research funds” (ibid).
Ageism still flourishes to this day.
I say this lest you might somehow assume that after Butler’s remarks in the late 1960s and into the 1970s and beyond, we as a society magically expunged ageism. Not so. You can claim with a reasonable argument that we have as much ageism now as we did in the past, perhaps more so as a result of the larger populations of today and the scaled-up proportion that fits into the “older” aged classifications.
Of course, you can also counter-argue that we are more aware of ageism than we were in the past, plus there are more laws and ethical guidelines associated with detecting and overcoming ageism discrimination. The gist is that though ageism is nowadays a realized factor of concern, there is still nonetheless a tendency to not have ageism at top of mind or to believe that ageism isn’t as serious or worthy of attention as might be given to other forms of discrimination.
You might say that ageism is the neglected and forgotten form of discrimination.
What can we do about ageism, you might be wondering.
Generally, according to WHO, we need to do three things to combat ageism (this is my take):
1) Enact suitable policies and laws to overcome ageism
2) Undertake appropriate educational and informational efforts about ageism
3) Foster intergenerational interventions that counteract ageism
As a quick quotable version of the WHO formalized position statement, this is what they recommend: “Policy and law can address discrimination and inequality on the basis of age and protect the human rights of everyone, everywhere. Educational activities can enhance empathy, dispel misconceptions about different age groups and reduce prejudice by providing accurate information and counter-stereotypical examples. Intergenerational interventions which bring together people of different generations, can help reduce intergroup prejudice and stereotypes” (per WHO as cited above).
I’d like to add yet another factor to the considerations about dealing with ageism.
Are you ready?
You might need to sit down for this pronouncement.
Artificial Intelligence (AI).
Yes, AI can be used to contribute toward ageism and discrimination based on age.
This might come as a surprise. Wouldn’t AI be something that will heroically aid in eliminating discriminatory practices such as those involving ageism? It sure would seem that AI ought to be a helper rather than a hindrance in reducing ageism. Well, hang onto your hat, since it turns out that AI has the potential for making ageism even worse.
AI can be devised not only to embody ageism but also to ratchet up the onslaught, promoting and spreading ageism. In a kind of dual-use mode, AI, with the vastness and pervasiveness of computing, can scale up ageism in ways that we never before would have imagined. I’ve discussed at length the dual-use possibilities of AI, whereby AI can be used for good or it can be used for bad; see my coverage at the link here.
All in all, the specter of AI-based or AI-empowered ageism raises a slew of AI Ethics and AI Law considerations. For my ongoing and extensive analyses of AI Ethics and AI Laws, see the link here and the link here, just to name a few.
Let’s go ahead and unpack the AI ageism discrimination conception and see what we can make of it.
I’d like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the topic of AI ageism will be contextually sensible.
The Rising Awareness Of Ethical AI And Also AI Law
The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and makes computational choices imbuing undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.
I want to make abundantly sure that we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
Be very careful of anthropomorphizing today’s AI.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
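To make this tangible, consider a deliberately tiny sketch of how pattern matching can mimic a bias lurking in historical decisions. Everything here is hypothetical (the group labels, the data, and the simple frequency-based “model” all stand in for far more elaborate ML/DL machinery):

```python
# Hypothetical, simplified illustration: a "model" that learns decision
# patterns from historical records and replays them on new cases.
from collections import defaultdict

def train(records):
    """Tally historical approve/reject outcomes per feature value."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [approvals, totals]
    for feature, approved in records:
        counts[feature][1] += 1
        if approved:
            counts[feature][0] += 1
    return counts

def predict(counts, feature):
    """Approve when the historical approval rate for this value exceeds 50%."""
    approvals, total = counts.get(feature, (0, 0))
    return total > 0 and approvals / total > 0.5

# The historical decisions quietly encode a bias against group "B".
history = [("A", True)] * 9 + [("A", False)] + [("B", True)] + [("B", False)] * 9
model = train(history)

print(predict(model, "A"))  # True: the learned pattern favors group A
print(predict(model, "B"))  # False: it disfavors group B, mimicking the data
```

The point is that the code never mentions any bias; the skew arrives entirely via the data it was fed.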
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that, even with relatively extensive testing, there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Justice & Fairness
- Freedom & Autonomy
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized previously herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
I also recently examined the AI Bill of Rights, the common name for the official U.S. government document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” which was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.
In the AI Bill of Rights, there are five keystone categories:
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration, and fallback
I’ve carefully reviewed those precepts, see the link here.
Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of AI-related ageism.
Get yourself ready for an eye-opening informative journey.
AI Ageism Is Here And Now So Be On The Watch
A few vital caveats and comments before we leap into AI ageism with both feet.
When I refer to AI ageism, sometimes the matter is misinterpreted to seemingly suggest that people are discriminatory toward AI based on their age. That’s not what I am intending to cover herein. I will though add that there are lots of studies about how age can be a factor in whether someone opts to use AI, along with whether they have a tendency to trust or believe in the use of AI. You might want to take a look at my prior columns covering that topic.
My goal herein is to examine how AI can bring forth or stoke ageism.
We can immediately put to bed the notion of AI having “personal” biases associated with the age of people as though the AI is sentient. Per my earlier remarks that we don’t have sentient AI, we don’t need now to be noodling on whether sentient AI is going to be discriminatory toward people based on their age. If we ever do attain sentient AI, certainly there will be a possibility of such a discriminatory “mindset” – but if we achieve super-intelligent AI there is always the hope that it will be smarter than humans and reject categorically any forms of discrimination (assuming that the super savvy AI overlord doesn’t opt to wipe out all of humankind).
Let’s keep this ageism discussion focused on today’s non-sentient AI.
How could contemporary AI harbor ageism?
That’s easy-peasy to describe.
Suppose a company decides they want to devise AI that will aid in hiring. The AI developers proceed to use Machine Learning and Deep Learning. In this case, tons of data from within the existing databases of the company is used to train the AI. All of the hiring done in the last forty years of the company’s history is pumped into the ML/DL.
Voila, after tuning the ML/DL, a tool is now available for managers seeking to do the hiring. The managers feed the resume of an applicant into the AI tool. The AI tool spits out a score that says whether the candidate is worthy of consideration for getting hired. If the score is low, the manager is supposed to reject the applicant outright. No need to waste time on someone that the AI “advises” is not worthwhile.
This at first seems like a great time saver for the company. No more spinning of the wheels by exploring candidates that the AI has mathematically and computationally ascertained are not viable for working at the firm. Managers can use their precious and limited time to only scrutinize applicants that garner a sufficiently high score by the AI. The hiring process has been improved multifold and everyone is as happy as can be.
Except for candidates that turn out to be over the age of 60.
Upon an audit of the AI (for my overall coverage of the importance of AI audits, see the link here, and also for my analysis of the flawed NYC hiring law on AI biases auditing see the link here), a belated discovery is that the historical data used to train the AI did not include the hiring of those over the age of 60. As such, the computational pattern matching “found” a kind of handy factor for weeding out applicants. Anyone that was at the age of 60 or above would directly get an extremely low score. Based on the sole factor of age, the applicants were getting pre-screened by the AI.
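As an illustrative sketch of what such an audit might look for, the following hypothetical code compares average AI scores across age bands. The `score_resume` function here is a made-up stand-in for the opaque scoring that the trained tool performs; I've simply baked the learned age cutoff into it for demonstration purposes:

```python
# Hypothetical audit sketch: compare average AI scores across age bands.
# score_resume stands in for the opaque scoring of a trained ML/DL tool;
# here it bakes in the age cutoff that the pattern matching "found".

def score_resume(applicant):
    return 5 if applicant["age"] >= 60 else 70 + (applicant["years_exp"] % 25)

def audit_by_age_band(applicants, bands=((18, 39), (40, 59), (60, 120))):
    """Average score per age band; a large gap flags possible ageism."""
    results = {}
    for low, high in bands:
        scores = [score_resume(a) for a in applicants if low <= a["age"] <= high]
        results[(low, high)] = sum(scores) / len(scores) if scores else None
    return results

applicants = [
    {"age": 28, "years_exp": 5},
    {"age": 45, "years_exp": 20},
    {"age": 62, "years_exp": 35},
    {"age": 66, "years_exp": 40},
]
report = audit_by_age_band(applicants)
print(report)  # the 60+ band's average collapses relative to the others
```

Even a crude disparity check like this, run before deployment, can surface a pattern that no one ever wrote down.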
Did the company intentionally do this?
Possibly not. It could be that the company all along has had an unwritten, unspecified ageism discrimination cultural bent. This wasn’t printed in any hiring booklets. Nobody said this aloud when doing the hiring process. The historical data ended up silently capturing this bias.
I’d like to add that the other possibility, namely that an overt ageism bias was in play all along, is also notably of concern. In that sense, the AI has landed on the same bias, doing so via mathematical data analyses and not due to an outright programming purpose.
Beyond the historical-data route, it is certainly possible for an AI system to be purposefully programmed for ageism. Thus, even if you don’t perchance use historical data for training, an AI developer could write code that includes ageism aspects. Again, the AI developer might be aware they are doing so, or they might not be cognizant that the manner of their coding is bringing an element of ageism into the AI.
We have these circumstances as shaped by implicit or explicit desires:
- AI Implicit Historical: AI that is based on historical data lands on implicit ageism
- AI Explicit Historical: AI that is based on historical data lands on explicit ageism
- AI Implicit Coding: AI that is programmed by AI developers includes implicit ageism coding
- AI Explicit Coding: AI that is programmed by AI developers includes explicit ageism coding
Waiting to do AI audits until long after the fact of devising or using such AI is going to be a problematic issue for a company. Once the AI has been put into use, the presumption is that someone is going to be discriminated against as a result of the ageism in the AI. Those that are discriminated against can then proceed to come after the company for damages as a result of the ageism bias.
Firms that don’t have their act together are opening themselves to lots of risks and liabilities.
One obvious aspect is that the company will suffer reputational losses after word spreads that the company has been discriminating based on ageism. You can also bet that lawsuits will ensue. Those are bound to be costly to defend or later settle.
Existing laws can come into play, along with the newer AI-focused laws. There might be criminal charges brought against the company and its executives. The government could exercise all manner of regulatory levers to try and deal with any firm that has exhibited ageism discrimination via its use of AI. Right now, those are instances that garner a lot of public attention and are especially headline-grabbing.
I work with many executives that say they were completely shocked and unaware that the AI had been devised to contain ageism. They claim they entirely relied upon the maker of the AI software to ensure that no such discriminatory tendency existed. As busy executives, they didn’t have time to look into such details.
Sorry, but that won’t cut it as an excuse.
The traditional “I didn’t know” or the “I had no idea” is unlikely to provide you with a get-out-of-jail-free card. If the AI was put into use under your watch, you in a sense own it. If the AI was put into place before you came on board, you still own it. Your best bet is to right away get AI audits undertaken.
The other facet involves making sure at the get-go that any AI ageism biases are detected and removed. For those of you that license or purchase any kind of HR hiring-related packages, do your due diligence at the front end of things. Better to do so before the horse is already out of the barn.
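One simple form of front-end due diligence is a counterfactual check: score the very same applicant profile at two different ages and flag any large gap. This is only a sketch, and the `biased_score` function is a hypothetical stand-in for whatever scoring model is being vetted:

```python
# Hypothetical front-end check: does the score change when only age changes?

def counterfactual_age_check(score_fn, base_applicant, ages=(30, 65), tolerance=5):
    """Score the same profile at two ages; a gap beyond the tolerance
    suggests age (directly or via a proxy) is driving the score."""
    scores = [score_fn(dict(base_applicant, age=age)) for age in ages]
    passed = abs(scores[0] - scores[1]) <= tolerance
    return passed, scores

# A made-up biased scorer, standing in for the model under review.
def biased_score(applicant):
    return 20 if applicant["age"] >= 60 else 80

ok, scores = counterfactual_age_check(biased_score, {"skills": "python"})
print(ok, scores)  # prints: False [80, 20] -> the check flags age sensitivity
```

A check of this kind won’t catch every subtle bias, but it is the sort of inexpensive probe a buyer can demand before ever licensing an HR package.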
AI ageism can exist in a variety of other ways too.
My example so far was about hiring. The thing is, there are plenty of other opportunities to exercise ageism in a company. What about when doing promotions? What about deciding who gets training or other special company benefits? What about layoffs?
Any use of AI for any type of job-related functionality is vulnerable to and a hidden source of AI ageism.
Notice that I said that AI ageism can be hidden. Indeed, this is one of the most insidious aspects of the use of AI and its ageism potential. A human manager that was exhibiting ageism tendencies might get caught doing so. An AI system that is a black box might be doing so and yet no one is privy to how the AI is working. I’ve discussed at length the importance of explainable AI (known as XAI), see the link here.
Research On AI Ageism Finally Getting Devoted Interest
You can find bits and pieces of AI ageism research efforts here or there, but by and large, the topic has been lumped into studies of AI discriminatory practices overall.
A recent study devoted to the AI ageism topic provides a useful and important springboard for those interested in pursuing this needed and growing area of interest. There is little doubt that AI ageism is going to get worse and worse. I say this because so few realize it exists, and due to the prevalence of AI that continues to expand and become ubiquitous throughout our lives.
In a key study published in AI & Society entitled “AI Ageism: A Critical Roadmap For Studying Age Discrimination And Exclusion In Digitalized Societies,” researcher Justyna Stypinska says this:
- “AI ageism can be defined as practices and ideologies operating within the field of AI, which exclude, discriminate, or neglect the interests, experiences, and needs of older population and can be manifested in five interconnected forms: (1) age biases in algorithms and datasets (technical level), (2) age stereotypes, prejudices and ideologies of actors in AI (individual level), (3) invisibility of old age in discourses on AI (discourse level), (4) discriminatory effects of use of AI technology on different age groups (group level), (5) exclusion as users of AI technology, services and products (user level)” (article published online October 3, 2022).
Those five categories of interconnected forms of AI ageism are a useful framework for thinking about how to examine and ultimately contend with AI ageism in all shapes and sizes.
For example, consider how AI is being devised to suit the needs of older people and yet might be done without a proper understanding of what it means to align with their corresponding needs. This is happening right now in the transportation field such as the advent of AI-based self-driving cars that are targeting those in assisted living facilities (see my column coverage of autonomous vehicles such as the link here). Likewise, there is so-called smart housing that consists of domiciles tricked out to provide specialized assistance to older people. And so on.
Per the words of Stypinska: “I argue that the older population is one group and social category that not only is being excluded from processes of development and deployment of AI, but is also invisible in the debate on ethical, inclusive, and fair AI” (article as cited above).
For those of you that might be thinking that you aren’t currently subject to ageism (assuming we go with the emphasis on older people and that you are not yet old), you might nonchalantly conclude that this topic isn’t on your radar.
I’ll invoke again the words of Stypinska on the inevitability of ageism as a concern for us all: “Ageism, however, is the only prejudice which will inevitably affect everyone, regardless of their gender, race, or other characteristic. Despite its ubiquitous nature, it is still a type of discrimination, which is not recognized as easily as sexism or racism as it often operates in a more subtle, yet corrosive manner” (article as cited above).
Ageism is coming for you, day by day.
AI ageism will likely keep growing, day by day.
By straightforward logic, AI ageism will inevitably and indubitably be coming for you.
That’s perhaps a wake-up call for some.
I don’t want to seem as though I am trying to scare anyone into becoming aware of AI ageism. The fact is that AI ageism is generally unknown, and we need to do what we can to make this into a front-and-center topic when mulling over the myriad of ways that AI will be undertaking discriminatory actions.
The AI of today isn’t doing this in a sentient capacity.
The AI does this because we devise the AI to imbue ageism or allow the AI to self-adjust to inject ageism, mathematically and computationally. Plus, there are various other AI-related ageism avenues, as noted in the five ways of the noted framework.
Those that are into AI ought to be thinking about and contending with AI ageism. Those outside of AI should likewise be informed about and seek to contend with AI ageism. This is an “ageless” topic in that it encompasses all of us.
My hope is that you will be inspired either to ferret out AI ageism or to join in the efforts to explain what AI ageism is and why it is important, an endeavor of purposeful focus that requires determined and persistent resolve.
I’ll try to end this discourse on a somewhat lighter note.
The famous satirist Will Rogers said this about aging: “Eventually you reach a point when you stop lying about your age and start bragging about it.”
To try and cope with today’s AI ageism, you almost need to misstate or keep hidden your age, or else the AI might computationally latch onto you in adverse and biased ways. On a proxy discriminatory basis, the AI might estimate your age anyway and go after you too (see my discussion of AI proxy discrimination at the link here).
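As a hypothetical illustration of that proxy effect, even when the age field is removed entirely, something as innocuous as a college graduation year can be used to back out an approximate age and reinstate the very same cutoff (the names and numbers here are my own invented example, not any real system):

```python
# Hypothetical proxy-discrimination sketch: age is absent from the data,
# yet graduation year lets the screen reconstruct it anyway.
from datetime import date

def estimated_age(grad_year, current_year=None, typical_grad_age=22):
    """Infer an approximate age from a college graduation year."""
    if current_year is None:
        current_year = date.today().year
    return (current_year - grad_year) + typical_grad_age

def proxy_screen(applicant, cutoff=60, current_year=None):
    # No age field is consulted, but the proxy quietly reinstates the bias.
    return estimated_age(applicant["grad_year"], current_year) < cutoff

print(proxy_screen({"grad_year": 2018}, current_year=2024))  # passes the screen
print(proxy_screen({"grad_year": 1980}, current_year=2024))  # screened out via proxy age
```

In other words, hiding your age is no sure defense; correlated attributes can do the discriminatory work on its behalf.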
Time to bring AI ageism into the limelight.
Let’s aim to make sure that AI ageism gets retired long before it gets stuck in its inequitable ways.