This content is sponsored by Microsoft

This content was produced by Boston Globe Media's Studio/B in collaboration with the advertiser. The news and editorial departments of The Boston Globe had no role in its production or display.

The digital healthcare revolution: Do we trust ourselves to build AI for good?

More than a decade ago, the cover of the Journal of the American Medical Association (JAMA) featured a picture that a little girl had drawn for her doctor. In crayon, she drew herself, her older sister, and her mother holding her baby sister in the treatment room. The doctor sits at his desk with his back turned to them, typing on his computer. Everyone in the picture is smiling, even the doctor clicking away at his laptop. It was titled “The Cost of Technology.”

For many of us who worked in healthcare technology at the time, that image will be forever seared into our minds. Research had already begun documenting the side effects of the digitization of health care, and the honesty of that drawing was a wake-up call. It was a stark reminder that our efforts to do the right thing (in this case, bringing about a much-needed digital transformation in health care) carried very real and very important unintended consequences.

As more regulations rolled out over the following decade, the burden placed upon doctors, nurses, and radiologists mounted, and the cloud-to-digital-to-AI transformation often came at a high cost to those who had dedicated their lives to caring for others. It manifested in rising job dissatisfaction and burnout, growing staffing shortages as clinicians left the workforce, and continued erosion of the doctor-patient connection. These problems have worsened year after year, exacerbated by the pandemic, and, if I’m honest, despite our best efforts, technology has always seemed one step behind in fully restoring the joy of caring for patients while simultaneously providing a more connected, digital experience.

Then, everything changed.

The introduction of GPT-4 in 2023 revolutionized how we think about AI. It became crystal clear that we had the chance to truly transform the healthcare and life sciences ecosystem by bringing together disparate data sets and using them to build less intrusive, more embedded, and better applications that improve how care is provided and delivered.

As is the case with all new technological advancements, there are both champions and skeptics of AI in health care. And, while it is essential that we continue to apply a healthy and critical lens to AI, it is also important to remember that AI is a tool. It is neither inherently good nor bad; rather, its nature depends on the application, the data sets powering it, and, importantly, the people and organizations behind its development.

I speak with healthcare customers and partners every day, and the most common questions I hear about AI are these: how to bring data to the cloud securely, how to leverage AI to create value and deliver better experiences for clinicians and patients, and how to ensure that AI is used responsibly. The common denominator underpinning all of those questions is trust.

Security is everything

According to a recent study, 60 percent of healthcare organizations were attacked with ransomware last year, with the cost of each data breach averaging $10.93 million. These events compromise patient safety and the quality of care, and disrupt basic health system operations.  

Ensuring patients’ access to health care and maintaining the privacy of their medical data are paramount for any healthcare organization. The healthcare and life sciences industry is heavily regulated, and numerous safeguards, ranging from two-factor authentication to audit logs, help track compliance and mitigate threats, alongside mature compliance frameworks such as HITRUST that are designed to keep data secure. In fact, the breach rate of HITRUST-certified environments in 2022 and 2023 was only 0.6 percent (99.4 percent reported no breach), drastically lower than healthcare industry averages.

In light of these attacks, a great deal of work has been done to give healthcare and life sciences organizations guidelines and actions for fortifying their security posture, including conducting regular readiness assessments to identify and address gaps in their security solutions, getting certified, and publishing compliance insight reports. Organizations that stay informed on the latest industry best practices and take proactive steps can better protect patients and their data, which is essential to maintaining trust in AI-powered solutions.

If AI is not helping, it’s not useful

Another critical element in developing trust in AI is its usefulness. Creating technology for technology’s sake is a non-starter; if AI isn’t delivering real, tangible benefits, it’s useless. In health care, this means creating AI-powered tools for specific use cases with a clear understanding of the value they will deliver and, importantly, a way to measure and quantify that value.

Despite the smiles in that JAMA cover drawing, physician satisfaction has been steadily declining for years. This problem, however, is now yielding to a specific application of AI called ambient intelligence. According to recent surveys, ambient technology can cut the time a doctor spends on documentation in half and, in turn, reduce feelings of burnout and fatigue by 70 percent.

Unfortunately, the same documentation burdens have affected nurses. According to a McKinsey study, 32 percent of nurses are planning to exit the US workforce this year, citing insufficient staffing levels, not feeling listened to or supported at work, and the emotional toll of the job as some of the top reasons, and the World Health Organization predicts a shortage of 4.5 million nurses by 2030. A new AI solution that ambiently captures observations and automatically populates flowsheets is now transforming complex nursing workflows. By automating documentation across multiple workflows, this type of AI allows nurses to be at their patients’ bedside, applying their clinical judgment and expertise. In fact, according to a recent McKinsey survey, nurses would like to see more AI tools incorporated into their work to alleviate their workload burdens.

Another area where meaningful work has been done with cutting-edge AI is diagnostic imaging and the development of treatment plans. Because AI can analyze vast amounts of medical data quickly, it can identify patterns and provide insights that might not be visible to the human eye. For instance, Mass General Brigham AI is co-developing intelligent applications that are reimagining how its teams practice medicine. One initiative is researching state-of-the-art AI models for medical imaging that radiologists can use to identify abnormalities. Another has enabled Mass General Brigham AI to define a unique diagnostics workflow that significantly reduces the waiting period for important results, turning around breast density tests in 15 minutes rather than several days.

Similarly, Providence Health is developing foundational AI models in pathology with the goal of driving significant advancements in cancer research and diagnostics. These models are in no way meant to replace the expertise of pathologists, researchers, and doctors; instead, they provide the care team with real-time insights pulled from a variety of multi-modal data sets, all aimed at delivering better patient outcomes.

Using AI responsibly 

For all these significant advancements, AI must still be deployed and used properly to be effective and trustworthy. Operating effectively within the healthcare industry requires a consistent set of responsible AI standards that give confidence to caregivers, administrators, and patients, which means that those who build AI-powered tools need to share a common, validated framework for security, privacy, and safe use.

Organizations deploying AI need to understand what is in their data and ensure their applications uphold key tenets such as fairness, reliability and safety, privacy and security, inclusiveness, and transparency. A lot of good work is already being done in this space by groups including the Coalition for Health AI (CHAI), the Trustworthy & Responsible AI Network (TRAIN), and the Health Management Academy AI Collaborative.

One thing I’ve learned over the course of my career is that everyone who chooses to work in health care has a very personal reason for doing so. That is a galvanizing and motivating force, and we can, and should, use our different experiences and perspectives to continue to inform and evolve how we think about and advance the state of the art in AI to deliver better outcomes across the entire health ecosystem. After all, AI is a tool shaped and wielded by people, and its applications are built by and for people. I believe that if all of us, from technologists and partners to leaders across the healthcare and life sciences ecosystem and government regulators, work together to uphold the highest responsible AI standards, the result will be trustworthy applications that deliver high-quality outcomes.
