At TORTUS, we are building a next-generation AI co-pilot for physicians: a computer-controlling agent called O.S.L.E.R. You will no doubt have many questions about what we are up to, and may already have found some answers, but here we will try to answer the most important one: why?
Human beings make mistakes. The rate varies wildly by industry, setting, and individual, but fundamentally, humans are fallible. Medicine, as it has been practiced for the last 150 years, is practiced by humans. Therefore medicine is fallible. Can we quantify this? Sadly, yes. An estimated 800,000 individuals die each year from medical error in the US alone. Every 44 minutes, the NHS is successfully sued for serious harm or death caused by medical negligence. And this is just the tip of the iceberg.
In sixteen years of clinical practice as a physician, I’ve seen countless human errors. I’ve seen this professionally, and also personally.
So how does an error occur in medicine?
This is a field that has been studied extensively, and the causes mostly come down to "human factors": distraction, cognitive bias, burnout. Rarely is a lack of knowledge the cause of an error. Likewise, the commonest cause of complaints in healthcare is not a lack of knowledge but poor communication. As medicine becomes more complex, and clinicians have less and less time and are increasingly burnt out, these errors will only increase.
If humans = fallible and that’s the fundamental unit of healthcare today, what if we start with a new fundamental equation?
Human+AI = infallible?
That’s what we believe is possible with AI-clinician co-working: eliminating the simple, boring errors that occur every day. While AI will no doubt push the limits of what we currently think is possible in medicine in all kinds of ways, we believe as a company that the highest-leverage use of AI today is eliminating the basic stuff, the stupid stuff. We aren’t raising the bar for doctors; we are raising the floor.
So what does this look like, today and tomorrow?
Today, a patient might attend A&E with abdominal pain. The surgeon notes a tiny scar on their abdomen, but the patient can’t remember what the childhood operation was for. An operation is scheduled to remove the appendix, but the patient doesn’t have one. Tomorrow, that same patient would have an AI-generated summary from birth, noting their appendicectomy at age 4 and flagging it to the attending surgeon. Simple, and yet these kinds of mistakes happen every single day: incomplete documentation, drugs missed from prescriptions, tests requested but not ordered, appointments scheduled but not confirmed. In an industry where every single task is somebody’s life, we have to get this right. And that’s just the beginning: AI encodes clinical knowledge, can access sources of guidelines, and can call on other, even more specific, clinical AI tools. This is the world of the 10x physician, a world where we simply don’t tolerate medical errors anymore.
While it may sound impossible to change the entire mentality of an industry, not only is it perfectly possible, it has already happened. When test pilots first flew the B-17, one of the earliest precursors to the modern airliner, it crashed. At one point the military declared such aircraft simply too complicated to fly. Then checklists were introduced, and co-pilots. Today, you are over 2,000x safer flying in a plane than driving in a car.
We believe not only that AI enables a new paradigm in which this world is possible, but also that there is a moral imperative to accelerate adoption as quickly and as safely as possible. Because every hour we don’t, another person dies from a preventable error.
That is our mission at TORTUS.
Our vision? Every clinician on the planet co-working with a personal AI, an AI like ours that controls the computer and draws on suites of curated, clinical-grade AI tools, to treat patients and to save lives.
We think the future is so incredibly bright for healthcare. Hang tight, it’s nearly here.