I interviewed nine of the most brilliant minds in the AI space. Each possesses a unique perspective, took a vastly different path into the world of AI, and holds one-of-a-kind approaches and principles on how AI can be used ethically to bring about positive change. What if we were able to take the beliefs, values, best practices, and experiences of each of them to create one singular AI mind? Would that mind create a roadmap leading us to the light we all seek?
The Necessary Parts
I would start with Yevgeniya (Jane) Pinelis at the Office of the Chief Digital and AI Officer. From here, we begin to build our ultimate AI mind with her experience driving the adoption of rigorous AI testing protocols and AI ethical principles in the Department of Defense. Her tireless advocacy for credible and objective Test and Evaluation and for the implementation of Responsible AI will ensure we do not abuse the power of AI. Her ideal of building “a virtuous cycle where we have some initial successes really implementing responsible AI technology which will in turn then lead to wider trust and adoption that then gives us even more opportunities to learn and improve our responsible AI infrastructure” gives us the perfect base to begin our build.
Next, from Atti Riazi, CIO of the Memorial Sloan Kettering Cancer Center, we could use her practical, no-nonsense view of AI. She told me that “the application of technology often entails unintended consequences. We cannot simply incorporate technology without working to understand its long-term impacts. And because these can involve significant social issues, the tech sector needs to partner with governments, NGOs, and civil society for these impacts to be addressed responsibly…I’m a big believer in the power of technology, while I’m pessimistic about some of the consequences that are coming in, I do think that partnership is really critical.”
Now that we have added optimism about the technology, rooted in common-sense practicality, we need experience from a different world. For that, we turn to Linda Leopold and Sol Rashidi. Linda Leopold is the Head of Responsible AI & Data at H&M Group. Her department’s work is guided by two goals: to use AI for good in reaching the company’s sustainability targets, and to work with AI carefully, ensuring it doesn’t cause unintentional harm. Her dedication to sustainability and the responsible use of AI in the private sector, a space traditionally driven by revenue alone, brings a much-needed perspective, skill set, and roadmap to universal AI for Good.
From Sol Rashidi, Chief Analytics Officer at The Estée Lauder Companies Inc., we get a fearless approach to innovation rooted in fundamentals. In her career as an AI leader, she has never been afraid of being wrong. She builds teams that master the basics in order to make sustainable innovation possible. Her team’s mantra? “This may not work, but damn it, we’re gonna try.”
Next, we need mastery of communication between stakeholders who don’t speak each other’s language. We look to Sumaya Al Hajeri at the Office of the Minister of State for AI, where she implemented the AI National Strategy by rolling out several policies and initiatives. She has the ultra-rare ability to put engineers and lawyers at a table and have all parties clearly understand each other. She can explain complex technical concepts to non-technical people. And that is a skill we need in droves.
The world of tech and AI is in constant flux. There is always turbulence, and we need leadership experience through turbulent times. For that, we look to Linda Avery, the strategic voice and visionary for data at Verizon. Throughout her career, she has built a proven track record of stellar leadership under less-than-stellar conditions. She made her vision clear to me:
The guidance needed to navigate (the future) is not going to come from intuition or the business formulas that worked in the past…strategy needs to be driven by data spanning all factors and forces. It’s the CDO’s role to make that possible… A primary part of the job is building relationships and credibility with the business leaders to be willing to change how they think about implementing strategy.
Next, every ultimate mind needs a wild imagination. We turn to self-described Mad Scientist Dr. Vivienne Ming, Chief Scientist of Dionysus Health. From Dr. Ming, we would use her steadfast belief that there is no one solution. When we spoke earlier this year, she stressed that AI doesn’t impact every single person the same, and we must always consider the individual when trying to solve for the whole. She emphasizes that heterogeneity is fundamental and feels it is underappreciated by science. She knows that implementing AI into any societal situation will positively affect some while negatively affecting others.
We now need someone to take all this data and direction to the public, and for that we pull from the experiences of Beena Ammanath, executive director of the Global Deloitte AI Institute. She leads the Institute in exploring applied AI innovation across industries, helping organizations richly understand all aspects of the AI lifecycle and the implications for business strategy, risk, and ethics. By assisting organizations in deploying AI, grasping the entire AI ecosystem, and uncovering the key elements of trustworthy AI, she positions companies to make the most informed decisions about their AI applications.
Finally, for AI to be understood and accepted, it must be humanized. People of all ages, genders, and races need to be considered when technology is being developed and enhanced. The perfect mind to pull from in this respect is Andrea Gallego, the CTO at BCG GAMMA. When I spoke with her, she opened my eyes to the fact that we cannot simply charge forward and go wherever the tech leads. We need to be deliberate and thoughtful about its effects on all people. She told me that the study of the person is as important as the development of the technology. We can all get excited about the newest and best tech, but someone like Andrea reminds us that what matters isn’t the tech itself but how it affects us humans.