AI Ethics And AI Law Fretting Over Worker Burnout In The Ardent Pursuit Of Responsible AI

Worker burnout.

If there is one thing that almost all of us can agree on, I dare say it might be the abundance of worker burnout.

Nary a day goes by without blazing headlines about this or that instance of worker burnout happening here or there. Some attribute burnout to anxieties over keeping their job and making a living. Others suggest that the burnout mania got especially underway when remote working became acceptable, pushing workers to potentially work nonstop without the conventional leave-the-office-at-6-o’clock cutoff for curtailing work for the day. A slew of reasons exists and is continually bandied around for worker burnout.

Let’s talk about AI.

Those that work in the realm of Artificial Intelligence (AI) are right there in the worker burnout zone too.

Yes, with all that excitement and hoopla about the present and future prospects of AI, there are humans toiling away to craft and field the AI. Software developers that specialize in making AI applications are dearly sought by companies. Once onboard, the AI programmers are bound to discover that there is a lot of AI work going on. Indeed, the odds are that a veritable fifteen pounds of AI are needed and yet the AI teams are barely able to produce five pounds given the team size and AI complexities involved.

You’ve got AI developers in a mad rush or stoked frenzy to devise AI. There are also AI operational specialists that field the AI, and they too are on this zipping-ahead conveyor belt. Team leaders and managers overseeing AI projects are also speeding along as fast as they can.

Keep that AI moving.

Don’t stop.

More AI means that a firm is presumably going to be more profitable and more productive. Therefore, AI efforts are crucial to a company’s survival and being able to thrive in a highly competitive marketplace. The AI train is out of the station and barreling full speed ahead.

Welcome to the advent of AI worker burnout. AI workers are caught in this frenzied drive of putting AI into every nook and corner of today’s digital-savvy corporations. Get AI workers in the door. Work them mightily. Maybe be cognizant of potential worker burnout, though, then again, maybe don’t worry about it and just keep pushing the boundaries to see what happens.

As sage wisdom presumably tells us, you’ve got to crack some eggs to make an omelet.

Top executives and managers often believe in such worker sacrificial inevitabilities, though as you might imagine the HR (Human Resources) or talent-management side of a firm is quick to recoil at such outdated perspectives on worker motivation and administration.

Among the various AI worker roles, burnout seems heightened for those that are in the trenches dealing with AI Ethics and AI Law.

In this discussion, I am going to focus on AI worker burnout in this particular instance of those that are aiming to bring AI Ethics realizations to what a company is doing related to their AI. I’m also going to include the AI Law role too. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

Workers Focused On AI Ethics And AI Law

There is a role in AI that is perhaps sparingly known, though it is a key position that keeps growing as more AI appears and, shall we say, AI hits the proverbial fan. I’m talking about AI Ethicists and the rising realization that AI has got to abide by some form of Ethical AI precepts.

Firms that once were shoveling any old AI out the door are getting taken to task for AI that exhibits undue biases and works in discriminatory ways. Arising from this realization is the value of having some AI Ethics experts in the mix of AI programming and the burgeoning assembly line that is producing AI inside a company.

I’d like to also add an additional element encompassing those that are versed in AI Law in addition to those focused on AI Ethics.

AI Ethics comes at the AI formulation with a kind of “soft law” approach to getting AI into proper shape. We are gradually witnessing a wave of new laws specifically targeting AI systems, and companies need to make sure they do not overstep these so-called “hard laws” about AI. The AI Law experts are likely to be part of the legal team in a company or might be engaged as outside legal counsel. In any case, besides the AI Ethics infusion, there is also a gradually increasing AI Law infusion, and the two will seemingly work hand-in-hand.

The soft law of AI Ethics and the hard law of AI Law must be coupled together to ensure that the AI side of things is meeting societal cornerstones and the enacted legal keystones about AI too.

In theory, the AI Ethics crew and the AI Law crew are fully aligned. This is not necessarily always the case. When a misalignment exists, there can be confusion among the AI teams as to what to do. The legal beagles would seem to have the upper hand since violating laws is something that can have quite harsh teeth. That being said, the dangers associated with undercutting Ethical AI principles can also have demonstrably adverse consequences such as severe reputational losses and lawsuits aplenty.

As the famous jurist Earl Warren said: “In civilized life, law floats in a sea of ethics.”

Returning to the emphasis herein on worker burnout, the humans that are tasked with the particular role of bringing AI Ethics awareness and insight to the AI pursuits of a firm are finding themselves working at times beyond their breaking points. We can expect that those tasked with AI Law facets will inevitably find themselves in that same worker burnout boat.

Now, you might be of the mind that if AI developers are experiencing burnout, we shouldn’t be surprised, nor find it noteworthy, that AI Ethics specialists are also members of the worker burnout club.

They are all in the same burnout league, as it were.

I would stridently proffer that the AI Ethics camp is in a somewhat different predicament, both in the nature of things and the outcome of these sobering matters. As you will see in a moment, worker burnout for those in AI Ethics is likely to be an especially lonely and somewhat vicious circumstance. Furthermore, some really bad trends could emerge if the AI Ethics worker burnout procession continues unabated.

Allow me to gingerly reveal how this is playing out.

When it comes to how firms consider AI Ethics amidst their AI endeavors, the range of approaches is wide and replete with problematic concerns. This in turn complicates and undoubtedly undercuts the work of an AI Ethics specialist (AI Law too).

I’ll be discussing shortly some key details about these listed ramifications upon AI Ethics work and the worker burnout of AI Ethics specialists as per how companies deal with AI:

  • AI Ethics – Don’t Know. Firms that don’t know about AI Ethics and are totally in the dark on such matters
  • AI Ethics – Don’t Care. Firms that know about AI Ethics but have decided they don’t care and ergo aren’t going to do anything about it
  • AI Ethics – Lip Service. Firms that know about AI Ethics, care about it, but then give only marginal lip service to it
  • AI Ethics – Unintentional Mistakes. Firms that know about AI Ethics, care, are serious, but make all kinds of unintentional mistakes about it
  • AI Ethics – Shoe Stringer. Firms that know about AI Ethics, care, are serious, but decide to take a shoestring approach to it
  • AI Ethics – Radicalizer. Firms that know about AI Ethics, care, are serious, but go overboard and radicalize the matter
  • Other

Each of those approaches has a grand potential to lead right to the doorstep of AI Ethics worker burnout. On a related topic, I’ve discussed in-depth the importance of companies having an AI Ethics Advisory Board, which can be handy if done well and be a regrettable misfortune if done poorly, see my analysis at the link here.

Before leaping into the AI Ethics worker burnout topic, I’d like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the discussion will be contextually sensible.

The Rising Awareness Of Ethical AI And Also AI Law

The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and makes computational choices imbuing undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.

I want to make abundantly sure that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
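To make this concrete, here’s a deliberately tiny sketch in Python (the data and the naive “model” are wholly hypothetical and illustrative, not from any real system) of how crude pattern matching faithfully reproduces whatever biases sit in the historical decisions it is fed:

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, qualified, approved).
# The two groups are equally qualified, yet group "B" was approved
# far less often -- a human bias baked into the "old" data.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

def fit_pattern_model(records):
    """Mimic ML/DL pattern matching at its crudest: learn the
    historical approval rate per group and nothing else."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, _qualified, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

model = fit_pattern_model(history)

def predict(group):
    # New applicants get "decided" purely from the learned patterns.
    return model[group] >= 0.5

print(predict("A"), predict("B"))  # prints: True False
```

Even though both groups are equally qualified in this toy data, the learned patterns approve one group and deny the other, precisely because that is what the historical decisions did.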

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
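To give a flavor of what even a basic bias screen looks like, here’s a simple illustrative check in Python (a hypothetical demographic-parity gap; the threshold is arbitrary, and real bias auditing involves far more than one coarse metric):

```python
def demographic_parity_gap(decisions):
    """Compare favorable-outcome rates across groups.

    `decisions` is a list of (group, favorable) pairs. Returns the
    largest gap in favorable rates between any two groups. This is
    just one coarse screen; passing it does not prove an ML/DL model
    is free of buried biases.
    """
    totals, favorable = {}, {}
    for group, fav in decisions:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + int(fav)
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical model outputs for two groups of applicants.
outputs = [("A", True)] * 9 + [("A", False)] + [("B", True)] * 5 + [("B", False)] * 5
gap = demographic_parity_gap(outputs)
print(f"parity gap: {gap:.2f}")  # prints: parity gap: 0.40
if gap > 0.2:  # threshold purely illustrative
    print("flag for human review")
```

Passing a screen like this emphatically does not certify the model as bias-free; subtler biases can lurk in subgroups and proxy variables that a single aggregate metric never sees.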

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

I also recently examined the AI Bill of Rights, the formal title of which is “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” an official U.S. government document that was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.

In the AI Bill of Rights, there are five keystone categories:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback

I’ve carefully reviewed those precepts, see the link here.

Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of AI Ethics and the burnout of AI Ethics workers (and AI Law workers too).

AI Ethics Workers Are Going Into Burnout Mode

Let’s start with a smiley face.

Working in AI Ethics can be a wonderful thing to do. You are potentially helping to ensure that AI is getting devised that will be reasonably fair and balanced. This is good. This is goodness being spread around. There is something exhilarating about being able to go to work and know in your heart that your efforts are aimed at bettering the world.

Not many people can readily say the same in quite the same overt manner.

It is exceedingly important work. The chances are that AI developers might proceed apace without giving determined consideration to Ethical AI precepts in the absence of someone to inform them accordingly. There are many AI developers that simply are unaware of Ethical AI aspects. Some AI developers are so focused on the heads-down joys of devising advanced AI that AI Ethics considerations are not top of mind. In fact, beyond not being top of mind, such hardcore AI devisers aren’t particularly of a mind to consider the topic at all.

Into this void step the heroic AI Ethics worker.

They try to find out what AI is being devised by the firm and then showcase where AI Ethics comes into the picture. While the AI is being crafted, the AI Ethics worker serves as an ongoing reminder of Ethical AI precepts, without which the AI effort is likely to be entirely consumed with the technical nuts-and-bolts issues and put little stock into the societal or ethical implications of the bits and bytes. Upon the AI getting fielded, the AI Ethics worker continues to valiantly hold aloft the Ethical AI flag, making sure that even once the AI is in production, the possibility of the AI going akilter on AI Ethics is recognized as still real and present.

Score a gigantic plus one for the AI Ethics worker!

We must now though shift our gaze to the sad face side of things.

There can be so many AI projects going on that the AI Ethics worker barely has time to go from one to the other to keep them apprised of Ethical AI aspects. Also, AI project managers might perceive the AI Ethics infusion as a sour roadblock to getting the AI system done on time and into production. Maybe skimp on the AI Ethics stuff right now, some project leaders say, and we’ll get to it later on. The hope is that this will keep the AI Ethics monkey off their back and they can proceed without the seeming annoyances thereof.

An AI project leader might be earnest in believing that the AI Ethics stuff can wait. I say this because there is also the chance that the AI manager is just discarding the Ethical AI as a bunch of excess blarney. You have to tread carefully to discern what such an AI leader is really all about. Do they genuinely support the Ethical AI banner or are they secretly against it? Of course, sometimes the secret isn’t much of a secret and there are AI leaders that outright badmouth the Ethical AI considerations.

I assume that you are beginning to see why AI Ethics worker burnout is likely to be somewhat more pronounced than that of other AI workers in the mix.

You see, an AI Ethics specialist is at times denigrated by others inside and outside of AI. Outsiders are bound to question what an AI Ethics specialist does. Is that a real job, they might ask? Do you do anything useful? Are you merely overhead? Etc.

Meanwhile, in a double whammy, those within AI are often on the attack against AI Ethics specialists too. You don’t know anything about AI and are just some phony philosopher or fancy sociologist, some AI diehards proclaim. Another perspective is that though having AI Ethics workers is potentially handy, turning Ethical AI ideals into practical and usable practices is often left to the imagination. Having someone urging you to code in an Ethical AI way is nice to know, but how that is done in actual programming practice can require some reasoned elbow grease. For my coverage of how to bridge this AI Ethics theory gap and land into AI Ethics practicalities, see the link here.

The odds are that AI Ethics specialists are going to confront hurdles at every turn. The moment you get brought into a firm, you might be lucky and get a welcoming reception, or you might get the cold shoulder. A perception could already exist that you are going to make life tougher on the AI teams and they already feel heavy burdens to begin with. The individual that is the AI Ethics specialist isn’t the issue, it is the very idea of having to contend with the proclaimed Ethical AI precepts that can be seemingly impossible to turn into reality.

AI Ethics worker burnout is bubbling to the surface.

In a recent article in the MIT Technology Review, it was noted that companies are increasingly realizing the need for AI Ethics workers: “Companies are under increasing pressure from regulators and activists to ensure that their AI products are developed in a way that mitigates any potential harms before they are released. In response, they have invested in teams that evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed” (“Responsible AI Has A Burnout Problem” by Melissa Heikkilä, October 28, 2022).

Furthermore, the nature of the work consists of: “The role of an AI ethicist or someone in a responsible-AI team varies widely, ranging from analyzing the societal effects of AI systems to developing responsible strategies and policies to fixing technical issues. Typically, these workers are also tasked with coming up with ways to mitigate AI harms, from algorithms that spread hate speech to systems that allocate things like housing and benefits in a discriminatory way to the spread of graphic and violent images and language” (ibid).

And the kicker is: “But there are plenty of challenges. Organizations place huge pressure on individuals to fix big, systemic problems without proper support, while they often face a near-constant barrage of aggressive criticism online” (ibid).

The gist is that the excitement of doing AI Ethics work is tempered by the urgency and the vast amount of Ethical AI work to be accomplished. Plus, toss in that the work can be undervalued by those within the firm, and unfortunately equally undervalued by those outside the firm.

AI Ethics workers have to ask themselves some pretty brutally honest questions:

  • Do I have the fortitude to withstand the internal and external undermining that can exist?
  • How will this role impact my career such that I can be on an upward path and not a dead-end?
  • Will I feel a sense of imposter syndrome among the AI techies that I work with?
  • What if everything I do at work is ignored and rejected?
  • And so on.

In consulting with firms on their AI Ethics and AI Law related endeavors, I often discover that those workers taking on these vital Ethical AI roles are typically frustrated, concerned, worried, upset, and yet still have a spark of earnest belief in what they are trying to attain. It takes a special kind of person to remain committed to Ethical AI after having been dragged through some quite nasty AI project tribulations.

As mentioned earlier herein, the nature of the firm and what it perceives about Ethical AI is a huge determiner of what will take place related to AI Ethics workers, including these use case examples:

  • AI Ethics – Don’t Know. Firms that don’t know about AI Ethics and are totally in the dark on such matters
  • AI Ethics – Don’t Care. Firms that know about AI Ethics but have decided they don’t care and ergo aren’t going to do anything about it
  • AI Ethics – Lip Service. Firms that know about AI Ethics, care about it, but then give only marginal lip service to it
  • AI Ethics – Unintentional Mistakes. Firms that know about AI Ethics, care, are serious, but make all kinds of unintentional mistakes about it
  • AI Ethics – Shoe Stringer. Firms that know about AI Ethics, care, are serious, but decide to take a shoestring approach to it
  • AI Ethics – Radicalizer. Firms that know about AI Ethics, care, are serious, but go overboard and radicalize the matter
  • Other

Let’s briefly examine those use cases.

For the first bulleted point (“AI Ethics – Don’t Know”), if you go into a firm that doesn’t know about AI Ethics, the upside is that you have the possibility of starting something anew and doing things right from the get-go. On the other hand, and I’m sorry to say this, it could be that once the firm gets a taste of AI Ethics, it might react adversely. You are now possibly immersed in something that is going to go downhill. Your initial enthusiasm is likely to be crushed.

Consider the second bulleted item (“AI Ethics – Don’t Care”). If you step into a firm that already knows about AI Ethics and has decided they aren’t going to do anything about it, heaven help you. You are bound to wonder why the firm brought you in. It makes little sense to have taken such an action. One possibility is that someone lower down in the organization has a glimmer of hope that Ethical AI can be brought into the fold. A nice idea, but one that without top-level support is going to undoubtedly get sorely hammered. Bringing a bright daisy into a forest of mushrooms would seem an unlikely way to redo the forest bed.

Take a look at the other bulleted items and there are tradeoffs for AI Ethics workers in each of those business case scenarios. You might be able to magically and miraculously turn things around, or the company will grind away at you until you have little energy and AI Ethics ambition left in your innards.

How can we detect whether someone is approaching the AI Ethics worker burnout limit?

Generally, research on burnout suggests that there are three crucial elements that edge into appearance while on the path toward worker burnout (see “Beating Burnout” by Monique Valcour, Harvard Business Review, November 2016):

  • “Exhaustion is the central symptom of burnout. It comprises profound physical, cognitive, and emotional fatigue that undermines people’s ability to work effectively and feel positive about what they’re doing.”
  • “Cynicism, also called depersonalization, represents an erosion of engagement. It is essentially a way of distancing yourself psychologically from your work. Instead of feeling invested in your assignments, projects, colleagues, customers, and other collaborators, you feel detached, negative, even callous.”
  • “Inefficacy refers to feelings of incompetence and a lack of achievement and productivity. People with this symptom of burnout feel their skills slipping and worry that they won’t be able to succeed in certain situations or accomplish certain tasks.”

In the context herein, consider the three elements in the particular realm of AI Ethics workers, consisting of:

  • AI Ethics exhaustion: An AI Ethics worker becomes physically, cognitively, and emotionally exhausted at work
  • AI Ethics cynicism: An AI Ethics worker descends into abject cynicism about Ethical AI and AI in general, essentially giving up on their prior principles or doubting them
  • AI Ethics inefficacy: An AI Ethics worker believes that their work is useless, purposeless, valueless, and becomes self-doubting about themselves

Please keep those elements in mind when either taking on the role of being an AI Ethics specialist or when you are working with or managing those that are AI Ethics specialists. Early signs can be a significant way to stave off further erosion and realize what is happening in your workforce.

Another useful tool related to detecting worker burnout entails the claim that there are rather distinct stages of burnout. Not everyone agrees as to how many distinct stages there are, ranging from perhaps three stages to maybe ten. You are welcome to use whichever burnout methodological scheme you think is worthy.

Let’s consider this scheme or framework that contains five stages of worker burnout as mentioned in the Fast Company publication: “There are five stages of burnout that individuals and organizations must assess and then take action to mitigate the progressively worsening symptoms. Analyzing burnout risks is important in all companies, but especially for those in high-risk industries like construction, manufacturing, hospitality, and transportation where the resulting diminished self-efficacy, decision-making ability, and lapses in judgment can be fatal” (“How To Manage Each Of The 5 Stages Of Burnout” by Princess Castleberry, April 13, 2022).

The five stages or phases are (ibid):

  • Phase 1: The Honeymoon Phase
  • Phase 2: Onset Of Stress
  • Phase 3: Chronic Stress
  • Phase 4: True Burnout
  • Phase 5: Habitual Burnout

I’m guessing that you’ve seen or experienced the first phase or stage, the Honeymoon Phase. When an AI Ethics worker initially comes into a company, the usual situation is that all seems bright and cheery. The passion and verve about seeking to bring Ethical AI into a company is an adrenaline rush. That might continue, or more than likely it will hit the wall upon a realization that a humongous uphill Ethical AI battle is hovering over you.

Soon enough, there is a decided onset of stress (phase 2). This can become chronic stress (phase 3). Out of this comes the so-called “true burnout” such that it isn’t just transitory or temporary (phase 4). Finally, if the burnout is unrelenting, you find yourself facing habitual burnout (phase 5).

We might then recast the five stages or phases into an AI Ethics context:

  • AI Ethics Burnout Stage 1: Honeymoon Happy Phase
  • AI Ethics Burnout Stage 2: Onset Of Moderate Stress
  • AI Ethics Burnout Stage 3: Onset Of Chronic Stress
  • AI Ethics Burnout Stage 4: Advent Of Pervasive Burnout
  • AI Ethics Burnout Stage 5: Adverse Habitual Burnout

What are we to do about all of this?

Dealing with AI Ethics worker burnout is going to be a difficult nut to crack, but we can all try.

My suggestions are:

  • First, organizations need to realize the value that Ethical AI provides and that utilizing AI Ethics workers is essential to that mission and vision.
  • Second, top leaders need to understand and establish suitable conditions for Ethical AI to be part of their company culture and standing, along with heralding appropriately the AI Ethics workers that aid in that quest.
  • Third, the firm needs to ensure that they have the right kind of AI Ethics worker talent and the suitable amount of such talent for the nature and magnitude of the AI efforts underway in their organization.
  • Fourth, we need to get those AI insiders that do not yet realize the value of Ethical AI to open their eyes, including valuing too the AI Ethics workers who aid in Ethical AI pursuits.
  • Fifth, we must educate and instill in outsiders that AI Ethics has to be a key priority for firms that make and use AI. This will hopefully spur firms to take seriously their responsibilities toward Ethical AI, along with being serious about hiring and suitably employing AI Ethics workers.

Those are some of the steps, among others, that could aid in coping with AI Ethics worker burnout, and ostensibly are all sensibly vital to the overarching aims of ensuring that we have AI that abides by Ethical AI precepts.


AI Ethics worker burnout is happening.

My suggestion is that when you hire or make use of AI Ethics (and AI Law) workers, you should be noticing whether a semblance of AI Ethics exhaustion, cynicism, and inefficacy seem to be settling in, and then also seek to ascertain whether a potential progression into the five stages or phases of worker burnout is occurring (note: please seek out trained HR professionals to aid you in these quite consequential matters).

I’m not implying that just because some of those elements are present you need to loudly bang the drums of burnout. It seems apparent that all workers are going to express some of those symptoms in mild ways at various times throughout their work. The notion is to be looking for more pronounced expressions that seem to be gaining traction and sapping the capacities of the worker.

I mentioned toward the start of this discussion that some really bad trends could emerge if the AI Ethics worker burnout procession continues unabated.

Here’s what I mean.

If those tasked with infusing AI Ethics into organizations get burned out, they might “give up” and decide that the fight isn’t worth the toll. In a sense, they could do a form of quiet quitting, see my coverage on this at the link here.

A company that has burned-out “give up” AI Ethics workers will perhaps falsely believe it is doing what is needed to ensure Ethical AI, while the reality is that the burnout has drained the assertiveness needed to push for Ethical AI. Step by step, the AI being devised is likely to veer further and further away from AI Ethics precepts. This in turn will eventually haunt the firm, based on damages or harms that the AI might cause to others.

That is the adverse outcome of the silent treatment of AI Ethics worker burnout.

Then there is the outcry reaction of AI Ethics worker burnout. Namely, a firm might get lambasted in the news or social media for what it is doing to the AI Ethics workers. Top leaders might find themselves in a tough spot. Perhaps the AI they are hoping to roll out is now being criticized as rife with Ethical AI problems. If the rollout proceeds, the firm “knew” that damages and harm would arise. Not a pretty picture, if you know what I mean.

By and large, the more that we see a widening trend of AI Ethics worker burnout, the chances are that few will want to take on such roles. Firms might not care in the sense that they blindly think it is just a sign that AI Ethics workers aren’t needed. Good, the firms think, no need to hire and deal with them. Seems like nobody wants to do the job anyway.

Like a slippery slope, the efforts to date to try and increase awareness of AI Ethics will get eroded. You can bet your bottom dollar that we are already going to have plenty of AI For Bad, and if AI Ethics worker burnout suppresses AI Ethics or makes it into a no-win role, a huge wallop of AI For Bad is heading our way.

The old saying says it all, you can pay me now or pay me later.

That’s a classic refrain and an enduring principle. Either recognize the AI Ethics worker burnout issues now, and let’s take appropriate action to alleviate it, or let it fester. The festering will lead to rather ugly and unpleasant results.

Join me in aiming to catch AI Ethics worker burnout early on, doing so for the sake of the workers, the sake of the companies, the sake of AI, and the sake of society and the globe as we find ourselves becoming totally mired in AI.