
If medicine is changing, why isn't medical training?

Dr. Ahmed Kerwan, Founder and CEO
March 24, 2026
12 min read

A Harvard medical student recently published an essay explaining why he chose not to apply to residency. His argument was direct: AI is automating the cognitive core of medicine so rapidly that the traditional return on investment of clinical training no longer holds. He described using an AI platform with a real patient and watching it conduct the interview, generate differential diagnoses, pend tests, and draft the note. His role as the physician was to review the output and press approve. He concluded that the window for building a career on clinical expertise alone is closing, and that he would rather spend his time building the systems that will shape what comes next.

The essay provoked strong reactions. Some called it reckless. Others quietly admitted they had been thinking the same thing. Whether you agree with his conclusion or not, the underlying observation deserves to be taken seriously. The tools available to physicians today are categorically different from the tools available five years ago. The tools available five years from now will be categorically different again. And yet the structure of medical education, the curriculum, the competencies, the entire philosophy of what it means to train a doctor, has barely changed at all.

I trained as a physician in the UK. I practiced clinically. I then went to Harvard and MIT to study health management and the intersection of technology and care delivery. I eventually left practice to build Taxo, an AI company that automates healthcare administration. I did not leave because I lost faith in medicine. I left because I saw clearly that the system surrounding medicine was broken in ways that clinical training alone could not fix. The question I keep coming back to is not whether AI will change medicine. That is already settled. The question is whether we will train doctors to lead that change, or whether we will train them to be overtaken by it.

The curriculum was designed for a different era

The basic structure of medical education has not fundamentally changed in over a century. Two years of preclinical sciences, two years of clinical rotations, then three to seven years of residency. The assumption embedded in this design is that the primary value a physician offers is the ability to hold a vast amount of clinical knowledge in their head and deploy it under pressure. That assumption made sense when medical knowledge was scarce, access to information was limited, and the physician was the single point of expertise in the room.

That world no longer exists. A medical student graduating today enters a profession where AI models can already match or exceed board-certified radiologists on standardized imaging benchmarks. Where large language models can generate differential diagnoses, draft treatment plans, and synthesize patient records with a depth and speed that no human can replicate. Where the cognitive tasks that once defined physician expertise are increasingly being performed by systems that cost a fraction of what it took to train the human who used to do them.

This does not mean physicians are obsolete. Far from it. It means that the unique value of a physician is shifting. It is moving away from pure knowledge retrieval and pattern recognition, which machines can do, and toward the things machines cannot do: clinical judgment in the face of ambiguity, ethical reasoning, empathetic communication, systems leadership, and the ability to critically evaluate and oversee AI outputs. The problem is that almost none of these skills are systematically taught in medical school or residency.

What needs to change

If we accept that the practice of medicine is being transformed by intelligent systems, then the training of physicians must be transformed as well. Not incrementally. Fundamentally. Here is what I believe that looks like.

First, AI literacy must become a core competency, not an elective. Every medical student should graduate understanding how large language models work, what they can and cannot do, where they fail, and how bias enters clinical decision support systems. This is not about turning doctors into data scientists. It is about ensuring that the people who will be responsible for patient safety in an AI-augmented world understand the tools they are working with. The AMA has already begun advocating for AI curriculum across the medical education continuum, calling for training that includes ethics, bias mitigation, and safe workflow integration. But advocacy is not implementation. Most medical schools still treat AI as a novelty rather than a foundational skill.

Second, training must develop the physician as an AI supervisor, not just a clinician. The essay I mentioned earlier described a physician’s role as reviewing an AI’s output and pressing approve. That framing was intended to be provocative, but it actually describes a legitimate and important skill: the ability to critically evaluate machine-generated recommendations, catch errors, identify edge cases, and make the final call in situations where the AI is uncertain or wrong. This is not a lesser form of medicine. It is a different form, and it requires deliberate training. Medical education should include structured practice in reviewing AI-generated clinical notes, challenging AI-suggested diagnoses, and understanding when to trust a model’s output versus when to override it. None of this currently exists in a standard residency program.

Third, physicians need to be trained in systems thinking and healthcare operations. The most impactful physicians of the next generation will not be the ones who memorize the most pathways. They will be the ones who understand how a healthcare system actually works, end to end, and who can identify where intelligent automation creates the most value. That means exposure to workflow design, to revenue cycle management, to how data flows between an EHR and a billing system, to what happens when a referral gets lost or an insurance verification fails. Physicians who understand these operational realities will be equipped to lead the redesign of care delivery. Physicians who do not will be passengers in a system they do not fully understand.

Fourth, medical education should actively cultivate entrepreneurial and innovation skills. The Harvard student who chose not to apply to residency cited the tension between clinical training and the bandwidth to build. He observed that by the time a physician reaches the “and” in “physician and”, whether that is scientist, entrepreneur, or innovator, they are 35, carrying debt, and structurally risk averse. There is truth in that observation, and it points to a design flaw in the system. Medical schools should create pathways for students and residents to engage in real-world innovation during training, not after it. Dedicated time for building, for cross-disciplinary collaboration, for engaging with the companies and technologies that are actively reshaping healthcare. The best medical innovations are going to come from people who understand both the clinical problem and the technical solution. Training programs should produce those people on purpose, not by accident.

Fifth, communication and ethical reasoning need to move from soft skill to core curriculum. In a world where AI handles much of the cognitive work of diagnosis and treatment selection, the physician’s irreplaceable role becomes the human relationship. Explaining a frightening diagnosis with empathy. Navigating a complex end-of-life conversation. Helping a patient weigh risks and benefits when the data is ambiguous and the stakes are high. These are not peripheral skills. In the age of AI, they become the center of what it means to be a doctor. And yet most medical programs devote a fraction of their curriculum to communication training and ethical reasoning compared to biochemistry and pharmacology.

The doctors who lead will be the ones who adapt

I want to be clear about something. I am not arguing that clinical training is worthless. I am not arguing that residency serves no purpose. The clinical foundation that medical education provides is essential. Understanding physiology, pathophysiology, and clinical reasoning is the prerequisite for everything else. Without it, you cannot meaningfully supervise an AI’s output, you cannot catch its errors, and you cannot exercise the kind of judgment that patients’ lives depend on.

What I am arguing is that clinical training alone is no longer sufficient. The physician who will thrive in 2035 is not the one who can recall the most facts under pressure. That physician will be outperformed by a model running on a phone. The physician who will thrive is the one who can integrate AI tools into their practice, critically evaluate their outputs, lead operational transformation, communicate with patients in ways no algorithm can replicate, and shape the ethical frameworks that govern how these technologies are deployed.

We are training physicians for a world that is changing under their feet. The students entering medical school this year will not retire for another 40 years. The medicine they will practice in 2060 will bear almost no resemblance to the medicine being taught today. We owe it to them, and to the patients they will serve, to prepare them for that future.

Why this matters to us at Taxo

We build AI that automates healthcare administration. Our platform handles the phone calls, the insurance verifications, the referral processing, the intake workflows, and the follow-ups that consume an enormous share of a clinic’s time and energy. We do this because we believe that the people who work in healthcare deserve to spend their time on meaningful work, not paperwork.

But our conviction runs deeper than product. We believe that the relationship between technology and healthcare professionals should be one of partnership, not displacement. Every system we build is designed to augment the people in the building, to give them more capacity, more accuracy, and more time for the human interactions that define great care. That philosophy extends to how we think about the entire future of the profession.

Physicians should not be afraid of AI. They should be trained to command it. They should graduate understanding not just the science of medicine but the systems, the ethics, and the technology that will determine how that science reaches patients. They should enter practice equipped to lead, not just to practice. Because the physicians who understand AI will be the ones who shape it. And the ones who shape it will be the ones who ensure that technology serves patients, not the other way around.

The student who chose not to apply to residency was not wrong to notice that the ground is shifting. Where I would push back is on the conclusion. The answer is not to abandon medical training. The answer is to demand that medical training rise to meet the moment. The physicians of the next generation will be the most powerful in history, if we train them for the world they are actually entering.

Join the Taxo Newsletter

Get the best sent to your inbox, every month
