This content is provided by Sanofi US

This content was written by the advertiser and edited by Studio/B to uphold The Boston Globe's content standards. The news and editorial departments of The Boston Globe had no role in its writing, production, or display.

Responsible innovation in the age of artificial intelligence

How Sanofi is setting the standard for AI in the biopharma industry.

The marriage of artificial intelligence (AI) and health care is no longer science fiction — it’s reality. From diagnosing diseases to predicting treatment responses, AI algorithms are ushering in a new era in how the health care system supports patients through their treatment journeys. However, with this immense power comes the critical responsibility of ensuring AI is used ethically and responsibly throughout the biopharma industry.

The promise and peril of AI

The potential benefits are undeniable. AI can analyze vast datasets to identify promising drug targets, predict patient responses to therapies, and personalize treatment plans. That could accelerate drug discovery, raise success rates in clinical trials, and shorten the path to market for new treatments.

But the potential for harm lurks if AI is not developed and deployed responsibly. Biases in the data used to train AI algorithms can lead to unintended discriminatory outcomes, overlooking certain patient populations or putting them at a disadvantage. Another concern is that AI can act as a “black box,” where the decision-making process behind a result is opaque. Without transparency, it is difficult to understand why an AI platform recommends a particular course of treatment, which could erode trust among both doctors and patients.

So, what is being done to ensure AI is being used for good in health care?

Bringing AI to scale with proper governance and a culture of accountability

Sanofi, a global health care company, has been vocal about its ambition to become the first biopharma company powered by AI at scale, and its approach is distinctive.

“We’re all in on AI,” says Jennifer L. Wong, global head, Strategy & Business Transformation, Digital Data & AI at Sanofi, who is leading programs to scale responsible AI and generative AI. “However, we understand the importance of responsible development and use of these tools. Our approach focuses on ensuring AI is fair, ethical, transparent, and accountable, and that a human is always at the helm and in the loop — our people must be involved. But this doesn’t fall on just one team; we have built a multi-stakeholder coalition of diverse voices, perspectives, and expertise, which are essential to enabling this journey. It’s a company-wide effort that is the shared responsibility of each employee as we bring AI to scale.”

To achieve this, Sanofi has implemented Responsible AI Guiding Principles that shape employee decisions across design, development, deployment, and use. With this thoughtful approach, the company is driving innovation while holding its AI systems to five key principles: fair and ethical, robust and safe, transparent and explainable, eco-responsible, and accountable for outcomes.

But these guiding principles are not just for digital teams writing code. Sanofi’s blanket approach and its ambition to implement AI at scale reach everyone in the company.

“AI has become the fabric of everything we do at Sanofi, so it’s important that every employee understands how to use it — and use it responsibly,” Wong says. “By launching an internally focused ‘I’ in AI campaign this past April with multiple learning pathways, we offered every employee an opportunity to upskill on our AI tools and understand responsible ways to use them. These tools include plai, our AI app developed with AI platform company Aily Labs, which delivers real-time, predictive data interactions and gives an unprecedented 360-degree view across all Sanofi activities.”  

Meanwhile, Massachusetts Governor Maura Healey has taken note of the increasing use of AI and wants to make sure the state is a leader in the technology. That is why earlier this year she established the Artificial Intelligence Strategic Task Force to study AI and generative AI and their impact on the state, private businesses, higher education institutions, and constituents.

“Sanofi’s belief in AI innovation and commitment to responsible use of AI aligns with the goals of the Massachusetts AI Strategic Task Force,” says Tanisha M. Sullivan, head of External Engagement, US Health Equity Strategy, and Massachusetts Government Affairs at Sanofi. “Responsible AI adoption is critical to the future success of AI, especially within health care where we want this technology to mitigate — rather than further exacerbate — existing bias within the system. Our representation in the Life Sciences working group for this Task Force reflects our commitment to ensuring that AI is used for good throughout the life sciences ecosystem.”

The promise of AI in biopharma is real. By prioritizing ethics and transparency, the industry can make AI a force for good, accelerating the delivery of life-saving treatments to the patients who need them.
