
Operationalizing Responsible AI for Enterprises at Scale: Insights from Sreekanth Menon

Responsible AI does not limit innovation; it can serve as a competitive advantage.

“Trust is earned in drops and lost in buckets.” – Kevin Kelly

In the early 1900s, a glowing promise illuminated the future: radium. This miraculous element found its way into everyday items, most notably watch dials. Young women, enticed by well-paying jobs, flocked to factories where they meticulously painted watch faces with radium-infused paint. Their technique? Licking the brush tips to achieve a fine point, unknowingly ingesting lethal doses of radiation with each stroke.

Sreekanth Menon, Global AI/ML Leader at Genpact, revived this story from the archives at MachineCon USA 2024, hosted by AIM Research.

This tragic chapter in history, known as the “Radium Girls,” serves as a stark reminder of innovation’s double-edged sword. As Sreekanth poignantly noted, these women’s very livelihood was slowly sapping away their life force. The cruel irony? Radium, the very substance causing their demise, would later be used in cancer treatments.

Fast forward to today, and we find ourselves on the cusp of another transformative era: the age of Artificial Intelligence (AI). Like radium before it, AI promises to revolutionize industries and reshape our world. However, the parallels between these two innovations extend beyond their potential for change. Both carry inherent risks that, if left unchecked, could lead to devastating consequences.

Sreekanth Menon drew parallels between this historical event and the current AI revolution. When asked if attendees were actively using AI for decision-making or operational efficiencies, almost everyone raised their hands. However, when questioned about confidence in their organization’s responsible AI governance frameworks and policies, far fewer hands remained up. Even fewer were sure about actively monitoring AI use in day-to-day work.

This disconnect highlights a critical issue in the AI revolution: the gap between adoption and responsible implementation. As Menon astutely observed, AI is akin to a toddler with a smartphone: capable of impressive tricks, but not something you’d entrust with your company credit card.

Implementing responsible AI presents numerous challenges, with key issues including the absence of a universal ethical code for AI, the complexity of addressing diverse social, political, and cultural contexts, the temptation to prioritize speed over safety, and the need for clear, understandable transparency requirements. Specific risks in AI implementation include hallucinations, where language models generate convincing but false answers, bias stemming from training data, and the classic “garbage in, garbage out” problem.

The EU AI Act categorizes AI risks into four levels: unacceptable, high, limited, and minimal. For example, a system influencing consumer voting behavior is classified as an unacceptable risk, while a chatbot falls under limited risk, and a spam filter is considered minimal risk.
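The four-tier scheme lends itself to a simple deployment gate. The sketch below is illustrative only: the tier assignments echo the article’s examples (voter-influence systems, chatbots, spam filters), the `credit_scoring_model` entry and all system names are hypothetical additions, and real classification under the EU AI Act requires legal review, not a lookup table.

```python
# Toy mapping of the EU AI Act's four risk tiers to deployment decisions.
# Tier assignments here are examples, not legal determinations.
RISK_TIER = {
    "voter_influence_system": "unacceptable",  # banned outright
    "credit_scoring_model": "high",            # hypothetical example of a high-risk tier
    "customer_chatbot": "limited",             # transparency duties apply
    "spam_filter": "minimal",                  # no extra obligations
}

def deployment_gate(system: str) -> str:
    """Return the action an enterprise governance process might take per tier."""
    tier = RISK_TIER.get(system, "unclassified")
    return {
        "unacceptable": "block",
        "high": "require conformity assessment and human oversight",
        "limited": "require transparency disclosure",
        "minimal": "allow",
    }.get(tier, "escalate for classification")
```

The point of encoding the tiers explicitly is that any system not in the inventory falls through to "escalate for classification" rather than shipping unreviewed.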

To navigate these challenges, collaborative governance across regions is essential. Developing dynamic frameworks with measurable metrics and fostering awareness and competency among both AI developers and consumers are critical steps. Partnerships with private institutions, academia, and governments can help create universally accepted metrics for AI systems.

Translating regulations and theoretical concepts into practical guidelines for AI developers and architects is crucial. A responsible AI strategy should include steps for addressing concept drift, data mitigation plans, fairness, legal compliance, autonomy, privacy, and model security.
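Of the strategy steps listed, concept drift is one that translates directly into a measurable check. As a minimal sketch (one common approach, not a method Menon described), the population stability index compares a model’s training-time feature distribution against live traffic and flags drift when the score crosses a conventional threshold of about 0.25:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Values near 0 mean the distributions match; above ~0.25 is commonly
    treated as significant drift warranting investigation.
    """
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the baseline's range.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        n = len(xs)
        # Floor each proportion at a small value to avoid log(0).
        return [max(c / n, 1e-4) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))
```

A scheduled job computing this per feature, with alerts wired to the threshold, is one concrete way the “dynamic frameworks with measurable metrics” mentioned above can be made operational.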

Key elements for building a “responsible AI firewall” include ensuring data readiness for AI, encompassing quality, fairness, and proper access rights, and standardizing AI system architecture within enterprises. Collaboration with lawmakers and government bodies is vital, as is promoting a culture of responsibility in AI development and deployment. Providing psychological safety for teams to speak up, share ideas, and learn from mistakes is also fundamental.

Responsible AI does not limit innovation; it can serve as a competitive advantage. Adhering to responsible AI guidelines can improve customer trust and retention, and can even command premium prices in the marketplace.

In conclusion, Menon draws a parallel between AI and transformative technologies like fire, electricity, and the internet. AI, like its predecessors, has the potential to either “run the world or ruin the world.” The choice, and the responsibility, lies with us to harness its power responsibly and ethically.

Anshika Mathews
Anshika is an Associate Research Analyst working for the AIM Leaders Council. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co