Tom Lawry • January 9, 2024

Generative AI and Precision Medicine – The Future is Not What It Used to Be

“When we look back in (the year) 2041, we will likely see healthcare as the industry most transformed by AI.”

– Kai-Fu Lee, AI 2041


Generative AI is a new and rapidly emerging form of artificial intelligence with the potential to revolutionize precision medicine by improving diagnosis, treatment, and drug discovery. It comprises Large Language Models and other intelligent systems that replicate a human's ability to create text, images, music, video, computer code, and more.


So, naturally, when Damian Doherty, Editor-in-Chief of Inside Precision Medicine, approached me last fall about writing an article on Generative AI, the first thing I did was ask the latest version of ChatGPT to provide a 2,800-word manuscript on the opportunities and issues of its application to precision medicine.


The content it generated was relevant, logically organized, and backed by factual information. Sentence structures were precise and delivered in an easy-to-understand format. There was a formulaic beginning, middle, and end, complete with the appropriate caveats about the possibility of being wrong.

The result was quite good, but in the end, it was a little too GPT-ish. There were many things my human brain wanted to know that it didn’t cover or guide me towards.


In some ways, this exercise mirrors the deeper discussions and explorations that are just getting underway to both understand our new and evolving AI capabilities and define a logical pathway to help clinicians and researchers make the practice of medicine more precise.

I’ve had the benefit of working with the application of AI in health and medicine for over a decade. Here are my very human thoughts on what should be considered as we approach this opportunity.


An AI Taxonomy


Generative AI is a relatively new form of AI that has been released into the wild. As such, there are very few experts. This means that we are all early in the journey of understanding what it is and how we apply it to do good.

The chart below provides a simple taxonomy to help differentiate generative AI from other forms of Predictive Analytics.

While there is a great deal of hype over generative AI, there is a growing body of evidence on the things it can do well with humans in the loop:[i]

• Write clinical notes in standard formats such as SOAP (subjective, objective, assessment, and plan)

• Assign medical codes such as CPT and ICD-10

• Generate plausible and evidence-based hypotheses

• Interpret complex laboratory results

Going forward, generative AI will provide benefits in many areas, including:


Drug Discovery and Development: Assist in the discovery and development of new drugs by predicting molecular structures, simulating drug interactions, and identifying potential drug candidates more quickly and accurately. AI can also identify existing drugs that could be repurposed for new therapeutic uses, potentially speeding up the drug development process and reducing costs.


Personalized Treatment Plans: Analyze large-scale patient data, including genetic information, medical records, and imaging data, to guide physicians in the creation of personalized treatment plans tailored to an individual's unique genetic makeup and health profile.


Disease Diagnosis: Assistance in the early and accurate diagnosis of diseases by analyzing medical images, genomic data, and clinical records, helping healthcare professionals make more informed decisions.

 

Medicine has Been Here Before – Change is Hard


Since medicine came out of the shadows and into the light as a data-driven, scientific discipline, we've always aspired to be better. The reality is that change is hard. It requires us to think and act differently.


When cholera was raging through London in the 1850s, Dr. John Snow was initially rebuffed when he challenged the medical establishment by gathering and presenting data demonstrating that the root cause of cholera was polluted water rather than the "bad air" of the prevailing view. From this work came the early stages of epidemiology.[ii]

 

In the 1970s, the introduction of endoscopy into surgical practice was met with resistance in a surgical community that saw little use for "keyhole" surgery; the prevailing view and practice was that large problems required large incisions. Today, the laparoscopic revolution is seen as one of the biggest breakthroughs in contemporary medical history.[iii]

 

Generative AI and Large Language Models are part of medicine's next frontier. They are already challenging current practices across the spectrum of research, clinical trials, medical and nursing school curricula, and the front-line practice of medicine. It's not a matter of whether they will affect what you do, but how and when.

 

With the right dialogue and guidance from a diverse set of stakeholders, we will create a path forward that leverages the benefits of our evolving creations to improve health and medical practices while ensuring that appropriate guardrails are put in place to monitor and guide their use.

 

It’s Not About Going Slow. It’s About Getting Things Right


In some ways, the challenge of generative AI today is less about increased AI capabilities and more about the velocity of change it is driving.

Generative AI came screaming into mainstream consciousness in the fall of 2022. ChatGPT, a generative AI product from OpenAI, racked up 100 million users in two months; no product in history had been adopted so quickly. Shortly after ChatGPT reached this milestone, the next version of GPT was released with greatly increased capabilities.


From the practice of medicine to the development of new drugs, generative AI's "speed of progress" is not following the path economists refer to as linear growth. In linear growth, something new is created that adds incremental value, leaving a small gap between its creation and its first use. As adoption occurs, there is another small gap between uptake and the time it takes policymakers to develop the guardrails needed to guide its use and safeguard users from risk. Linear growth is steady and predictable, and it is what clinical and operational systems are set up to manage.


Generative AI is upending linear growth. It's taking a different trajectory that economists call exponential growth, in which something increases faster as it gets bigger. Most of our systems are not designed to accommodate this dramatic escalation in change. Exponential growth doesn't last; eventually the pace of change returns to linear growth. But while it's happening, it feels like the world is inside a tornado.
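To make the distinction concrete (the notation here is mine, not the author's): linear growth adds a fixed amount each period, while exponential growth adds a fixed percentage, so each period's increase is larger than the last:

```latex
y_{\text{linear}}(t) = y_0 + b\,t
\qquad
y_{\text{exponential}}(t) = y_0\,(1 + r)^t
```

Starting from 100, for example, linear growth of +50 per period gives 100, 150, 200, 250, while exponential growth of 50% per period gives 100, 150, 225, 337.5. The two look similar at first; the gap then widens dramatically, which is why systems tuned for steady change struggle to keep up.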

The European Parliament has approved landmark rules for artificial intelligence, known as the EU AI Act, which aims to bring generative AI tools under greater restrictions, including a requirement that developers submit these systems for review before releasing them commercially.[iv] Here in the United States, the Biden administration issued an Executive Order last fall to build momentum within federal agencies and the private sector to put better guardrails in place for the use of AI.


The rapid change driven by generative AI has some calling for measures to slow or even suspend AI development while its impact on humans and society is evaluated. A petition put forward by the Future of Life Institute, and signed by leaders including Elon Musk, called for a six-month moratorium on AI development.[v]

 

While there is uncertainty about what we are creating and how it should be applied, it is unlikely that any mandate will slow the pace of AI innovation.


Instead of attempting to slow progress, let us expedite the education and dialogue among policymakers, medical and research leaders, and frontline practitioners to chart a course for applying our new intelligent capabilities. These groups are also best placed to ensure that the necessary laws, regulations, and protocols are in place to safeguard those providing and receiving health and medical services.


The Creation of Enforceable Responsible AI Principles


Let’s recognize and support the overall good that can come from AI innovation. At the same time, we must be mindful of how our ever-expanding AI capabilities can replicate and even amplify human biases and risks that work against the goal of improving the health and well-being of all citizens.


Prioritizing fairness and inclusion in AI systems is a socio-technical challenge. The speed of progress is spawning a new set of issues for governments and regulators. It's also challenging us with new ethical considerations in the fields of medical and computer science. Ultimately, the question is not only what AI can do, but what AI should do.


While legislators and regulators work on finding common ground, health and medical organizations using AI today should have a defined set of Responsible AI principles in place to guide the development and use of intelligent solutions. Most often, these principles or guidelines are reviewed and approved at the highest level of leadership and incorporated into an organization’s overall approach to Data Governance.


AI in Medicine is Not About Technology. It’s About Empowerment


AI has a PR problem. The narrative in the popular press and professional journals is often negative. Headlines like "Half of U.S. Jobs Could Be Eliminated by AI" paint a picture of a future work world dominated by what novelist Arthur C. Clarke called robo-sapiens.[vi] [vii]


It's no wonder that people are worried. According to a study by the American Psychological Association, the potential impact of AI on the workplace and jobs is now one of the top issues affecting the mental health of workers.[viii]


Generative AI is already changing today's workplace and will be the single greatest force shaping the Future of Work in the next decade. It will affect how all work is done. As you let that statement sink in, recognize that the issues to be addressed go beyond productivity. After all, work brings shape and meaning to our lives; it is not just about a job or income.


In this regard, there is growing evidence to suggest that AI can increase not only productivity but also job satisfaction.


In a randomized trial of generative AI, 453 college-educated professionals were given a series of writing tasks to complete. Half were given access to ChatGPT; the control group was not. The time taken to complete tasks fell by 40% among those using ChatGPT. Beyond increased productivity, those using ChatGPT reported greater job satisfaction and a greater sense of optimism. Most importantly, inequality between workers decreased.[ix]


Done right, AI is not about technology. It's about empowerment. Properly curated, generative AI will help solve one of the most significant challenges facing healthcare: the shortage of human capital.


The effective introduction and use of generative AI in health and medicine enables both cost-cutting automation of routine work and value-adding augmentation of human capabilities. As it and other forms of AI become pervasive in health and medicine, a new intelligent health system will emerge. It will facilitate systems that improve health while delivering greater value. It will provide a more personalized experience for consumers and patients. And it will liberate clinicians, restoring them to the caregivers they want to be rather than the data-entry clerks we're turning them into by forcing them to use systems and processes conceived decades ago.


And while generative AI is coming at us fast with much to understand in how we use it, it could not have come at a better time.


The full article written for Inside Precision Medicine may be found at https://www.insideprecisionmedicine.com/topics/precision-medicine/generative-ai-and-precision-medicine-the-future-is-not-what-it-used-to-be/



References Used in This Blog:


[i]   Peter Lee, Carey Goldberg, Isaac Kohane, The AI Revolution in Medicine: GPT-4 and Beyond, Pearson Education, 2023

[ii] Theodore H. Tulchinsky, John Snow, Cholera, the Broad Street Pump; Waterborne Diseases Then and Now, Case Studies in Public Health, March 30, 2018

[iii] G.S. Litynski, Endoscopic Surgery: The History, the Pioneers, World Journal of Surgery, August 1999;23(8):745–753, doi: 10.1007/s002689900576, PMID: 10415199

[iv] Ryan Browne, EU lawmakers pass landmark artificial intelligence regulation, CNBC, June 14, 2023, https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html

[v] Pause Giant AI Experiments: An Open Letter, Future of Life Institute, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[vi] http://business.rchp.com/home-2/half-of-all-jobs-eliminated/

[vii] Arthur C. Clarke, Britannica, https://www.britannica.com/biography/Arthur-C-Clarke

[viii] Worries about artificial intelligence, surveillance at work may be connected to poor mental health, American Psychological Association, September 7, 2023, https://www.apa.org/pubs/reports/work-in-america/2023-work-america-ai-monitoring

[ix] Shakked Noy, Whitney Zhang, Experimental evidence on the productivity effects of generative artificial intelligence, Science, July 13, 2023, https://www.science.org/doi/10.1126/science.adh2586

