The Ethical Case for AI

Adopting artificial intelligence is not just a business decision. It is a moral one.

Dr. Ahmed Kerwan, Founder and CEO
February 28, 2026
11 min read

Somewhere right now, a patient is being harmed by paperwork. Not by a misdiagnosis or a surgical error or a dangerous drug interaction, but by something far more ordinary and far more pervasive. They are being harmed by a phone call that was never returned. By an insurance authorization that expired while it sat in a queue. By a referral that arrived at a specialist's office three weeks ago and has not yet been processed because the front desk team is underwater and there are only so many hours in the day. The harm is invisible and undramatic. It will not make the evening news. But it is real, it is cumulative, and it is happening at a scale that should trouble the conscience of every healthcare professional alive.

This essay is addressed to practice owners and clinic leaders. It is not a sales pitch for technology. It is an argument about ethics. Specifically, it is an argument that in the year 2026, with the tools now available, choosing not to adopt artificial intelligence in your practice is no longer a neutral act. It is a decision with consequences for your patients, your staff, and the integrity of the care you provide. And those consequences deserve to be examined honestly.

The Erosion No One Talks About

We have grown accustomed to speaking about healthcare quality in clinical terms. We measure outcomes, readmission rates, mortality statistics, infection rates. These are important metrics, and they have driven genuine improvements in the science of medicine. But there is another dimension of care quality that receives almost no serious attention, and it is the dimension that patients feel most acutely in their daily lives: the experience of trying to access the system in the first place.

Ask any patient what it is like to navigate a specialty clinic and you will hear the same stories repeated with depressing consistency. The thirty-minute hold time just to schedule an appointment. The intake form that asks for the same information they have already provided three times. The insurance question that nobody at the front desk can answer with certainty. The referral from their primary care physician that seems to have vanished into a void. The follow-up call that never comes. The sense, pervasive and demoralizing, that the system was not designed with them in mind.

These are not edge cases. They are the norm. And they are getting worse, not better. Clinic staffing shortages have reached historic levels. The average tenure of a front desk employee in a medical practice has fallen to less than eighteen months. The administrative burden on clinical staff has increased every year for the past decade, with no signs of reversal. The result is a system in which the humans responsible for coordinating patient care are stretched so thin that even the most dedicated among them cannot reliably ensure that every patient receives the attention they deserve.

This is the erosion that no one talks about. Not a sudden collapse, but a slow, grinding degradation of the patient experience that compounds over months and years. A referral processed a day late. A callback that slips to tomorrow, then the next day, then never. An insurance verification that should have taken minutes but took a week because the person responsible was covering for a colleague who quit. Each failure is small. Together, they constitute a crisis.

First, Do No Harm. Then, Do Not Look Away.

Every physician learns the principle of nonmaleficence early in their training. First, do no harm. It is medicine's foundational ethical commitment, and most practitioners understand it intuitively when it comes to clinical decisions. You do not prescribe a drug you know to be dangerous. You do not perform a procedure you are not trained to perform. You do not withhold a treatment that you know would help.

But the principle extends further than the exam room. If you are a practice owner, you are responsible not only for the clinical care your patients receive but for the systems that determine whether they can access that care at all. You are responsible for the infrastructure of their experience. And if that infrastructure is failing, if patients are falling through cracks that could be sealed, if referrals are being lost and calls are going unanswered and follow-ups are being missed because your administrative systems cannot keep pace with demand, then harm is occurring on your watch. Not because you are negligent or indifferent, but because the tools you are using are no longer adequate for the world you are operating in.

This is the ethical fulcrum of the argument. We are no longer in an era where the limitations of healthcare administration are immovable constraints, like gravity or the length of a day. We are in an era where technology exists that can answer every patient call, process every referral, verify every insurance eligibility, and ensure that no interaction falls through the cracks. The technology is not theoretical. It is not five years away. It is here, it is proven, and it is accessible. The question is no longer whether it is possible to deliver a higher standard of administrative care. The question is whether you will choose to.

The Patients You Never See

There is a particular cruelty to administrative failure in healthcare, and it lies in its invisibility. When a surgeon makes an error, it is documented, reviewed, and learned from. When a medication causes an adverse reaction, it is reported. But when a patient gives up trying to reach your office after being placed on hold for the fourth time, there is no record of that failure. When a referral sits unprocessed for two weeks and the patient's condition worsens during the wait, there is no incident report. When a non-English-speaking patient cannot navigate your intake process and simply does not return, there is no metric that captures what was lost.

These are the patients you never see. And they are, in many ways, the patients who need you most. The elderly patient without a family member to advocate on their behalf. The working parent who cannot afford to spend forty-five minutes on hold during business hours. The immigrant who speaks Tagalog or Arabic or Mandarin and encounters a system that only functions fluently in English. The chronically ill patient who needs consistent follow-up but falls out of the care continuum because the administrative machinery could not keep pace.

Healthcare professionals often speak about equity in terms of access to clinical services. Expanding insurance coverage, building clinics in underserved areas, training more providers. These are vital efforts. But there is another dimension of equity that is rarely discussed, and it is the equity of operational access. It does not matter how many clinicians you employ if patients cannot get through the door. It does not matter how sophisticated your treatments are if the referral never reaches the scheduler's desk. Operational failure is a barrier to care as real and as damaging as any clinical shortage, and it falls disproportionately on the patients least equipped to fight through it.

Artificial intelligence does not get tired. It does not call in sick. It does not put a patient on hold because three other lines are ringing. It speaks over a hundred languages with native fluency. It operates at two in the morning with the same attentiveness it brings at two in the afternoon. For the patients who have been systematically underserved by the limitations of human bandwidth, AI is not a luxury. It is the closest thing to justice the administrative layer of healthcare has ever produced.

Why Caution Is No Longer the Careful Choice

I understand the hesitation. I am a physician. I was trained in a culture that valorizes caution, that distrusts hype, that demands evidence before adoption. These instincts have served medicine well for centuries, and I would never argue that healthcare leaders should embrace technology recklessly. But I want to challenge a specific assumption that I encounter frequently among practice owners, which is the belief that waiting to adopt AI is the conservative and therefore the responsible choice.

It is not. Not anymore.

Caution is the responsible choice when the risks of action outweigh the risks of inaction. When a new drug has not been adequately tested, caution protects patients. When a surgical technique is unproven, caution saves lives. But when the status quo is itself causing harm, when patients are already receiving degraded care because administrative systems are overwhelmed, when staff are already burning out at rates that threaten the viability of the practice, when the evidence that AI can alleviate these problems is already substantial and growing, then caution ceases to be protective. It becomes a form of inertia dressed in the language of prudence.

The question I would ask any practice owner who is waiting is: waiting for what, exactly? For the staffing crisis to resolve itself? It will not. For EHR vendors to suddenly deliver on thirty years of broken promises? They have not and they will not. For a competitor down the street to adopt AI first, capture the referral volume you are losing, and demonstrate by example what you could have done? That is already happening.

Every month of waiting is a month of calls going unanswered. A month of referrals processed late. A month of staff shouldering a burden that technology could lift. A month of patients receiving less than they deserve. The cost of inaction is not zero. It is compounding. And it is being paid by the people who can least afford it.

Embracing AI as a Practice of Medicine

Responsible adoption of AI in healthcare administration is not about replacing your team. It is about giving them room to do the work only humans can do. The call that requires empathy because a patient has just received a difficult diagnosis. The scheduling decision that requires clinical judgment because a case is complex. The conversation with a family member who is frightened and needs reassurance, not efficiency. These are the moments that define exceptional care, and they are precisely the moments that get crowded out when your staff is consumed by tasks a machine could handle.

When I think about what AI should mean for a practice, I think about it the way I think about any other clinical tool. A stethoscope extends the physician's ability to hear. An MRI extends the physician's ability to see. AI extends the practice's ability to be present for its patients at scale, across every channel, in every language, at every hour. It is not a replacement for human care. It is the infrastructure that makes human care sustainable.

The practices that are adopting AI now are not doing so because they are enamored with technology. They are doing so because they looked at the gap between the care they wanted to provide and the care their systems allowed them to deliver, and they decided that gap was no longer acceptable. They are doing so because they understand that in a world where a single intelligent platform can answer every call, process every referral, verify every insurance eligibility, follow up with every patient, and sync every action to the medical record, choosing to operate without that platform is choosing to accept a lower standard of care.

That is not a technology statement. It is an ethical one.

Access as a Moral Imperative

We live in a moment of extraordinary contradiction in healthcare. The clinical tools available to physicians have never been more powerful. Gene therapies, precision oncology, robotic surgery, diagnostics powered by machine learning. The science of healing has advanced at a pace that would have seemed miraculous a generation ago. And yet for millions of patients, the bottleneck to receiving that care is not the science. It is the phone call. It is the fax. It is the insurance form. It is the human at the front desk who is doing their absolute best but simply cannot move fast enough.

This contradiction is not inevitable. It is a choice. Not a conscious, deliberate choice, but the accumulated result of an industry that has poured billions into clinical innovation while treating the operational layer of care as an afterthought. The result is a system that can cure diseases our grandparents never dreamed of curing but cannot reliably answer the phone when a patient calls to schedule an appointment.

AI does not solve every problem in healthcare. It does not replace the need for more physicians, for better insurance policy, for greater investment in underserved communities. But it solves the problem that sits between the patient and the care they have already been prescribed. It solves the operational gap. It ensures that when a physician says you need to see a specialist, the distance between that recommendation and the patient sitting in the specialist's chair is measured in days, not weeks. It ensures that every patient, regardless of the language they speak or the hour they call or the complexity of their insurance, receives the same standard of administrative care.

For practice owners, the decision to adopt AI is a decision about what kind of medicine you want to practice. Not in the clinical sense, where your expertise and judgment remain irreplaceable, but in the operational sense, where the question is whether every patient who needs you can actually reach you. Whether the infrastructure of your practice reflects the same commitment to excellence that you bring to your clinical work. Whether you are willing to use every tool available to ensure that no patient is lost to the machinery of administration.

The Oath, Extended

The Hippocratic tradition asks physicians to act in the interest of their patients above all else. For most of medical history, that obligation has been understood in clinical terms. Do not harm. Treat with skill. Stay current with the science. These commitments remain sacred.

But in 2026, the obligation has expanded. The tools have expanded. The definition of what it means to act in the interest of your patients now includes the systems you build around them, the accessibility of your practice, the reliability of your operations, the assurance that no one will fall through the cracks because your front desk was overwhelmed on a Tuesday afternoon.

Embracing AI in your practice is not a capitulation to the technology industry. It is not a concession that machines are better than people. It is the opposite. It is an acknowledgment that the people on your team deserve to spend their time on work that requires their humanity, and that your patients deserve a standard of access and responsiveness that no purely human system, however heroic, can deliver at the scale the modern clinic demands.

This is not about efficiency for its own sake. It is about the patients on the other end of the line. The ones who are waiting. The ones who gave up. The ones who never got through.

They are the reason AI in healthcare is not a technological question. It is an ethical imperative. And the time to act on it is not next quarter, not next year, not when the technology matures further or the market settles or the guidelines are clearer. The time is now. The patients cannot wait. They have been waiting long enough.


Join the Taxo Newsletter

Get the best sent to your inbox, every month

