AI, Shoulders, Knees and Toes: Startup Builds Deep Learning Tools for Orthopedic Surgeons


Traditional open surgeries require large incisions that provide doctors a broad view of the area they’re operating on. But surgeons are increasingly opting for minimally invasive techniques that rely instead on live video feeds from tiny cameras, which provide a more limited view past much smaller incisions.

The benefits for patients are clear: less blood loss, less pain and faster recovery times.

However, minimally invasive procedures are more technically demanding for surgeons since they must operate with a narrow field of view and use small instruments that require fine manipulation skills.

To give surgeons an assist, Kaliber Labs, a San Francisco-based startup, is developing AI models to interpret these video feeds in real time.

The company’s deep learning models recognize and measure aspects of a patient’s anatomy and pathology, as well as display key information and treatment recommendations on operating room video monitors.

“A surgery consists of a series of steps,” said Ray Rahman, founder and CEO of Kaliber Labs, a member of the NVIDIA Inception virtual accelerator program. “We’re going through the entire process to provide surgeons AI guidance that decreases their cognitive load, improves accuracy and reduces uncertainty.”

The startup is also developing a deep learning model that annotates surgical video after procedures to provide better communication and transparency with patients.

Its AI models — developed using the Keras, PyTorch and TensorFlow deep learning frameworks — are trained and tested on NVIDIA RTX GPUs featuring Tensor Cores, shrinking training times by more than 5x.

To develop tools that process real-time video input in the operating room, Kaliber Labs uses the JetPack SDK and NVIDIA Jetson TX2 AI computing device for inference at the edge. The team plans for its deployed product to run on the NVIDIA Jetson AGX Xavier, enabling the low latency required for real-time processing.

Keeping an AI on Operating Rooms

Kaliber Labs’ current suite of AI tools is focused on orthopedic surgery — covering shoulder, knee, hip and wrist procedures. Arthroscopy, or minimally invasive joint surgery, is the most common orthopedic operation, used to treat many disorders and sports injuries.

During a minimally invasive orthopedic procedure, surgeons rely on video monitors to view the area they’re operating on. (U.S. Air Force photo/Airman 1st Class Kevin Tanenbaum)

At the start of a procedure, Kaliber Labs’ deep learning tools use the video feed to identify what kind of surgery is taking place and which camera view is being used. Then, AI models specific to the relevant procedure type come into play for real-time guidance.
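This recognize-then-dispatch flow can be sketched as follows. The classifier stub, model names and registry below are hypothetical stand-ins for Kaliber Labs' proprietary models, not its actual API:

```python
# Hypothetical sketch of the recognize-then-dispatch flow: a first-stage
# classifier identifies the procedure, then procedure-specific models load.
# All names here are illustrative, not Kaliber Labs' real components.

PROCEDURE_MODELS = {
    ("shoulder", "arthroscopic"): ["glenoid_segmenter", "labrum_detector"],
    ("knee", "arthroscopic"): ["meniscus_detector", "acl_tracker"],
}

def classify_feed(frame):
    """Stub for the first-stage classifier that identifies the procedure
    type and camera view from an incoming video frame."""
    # A real system would run a CNN on the frame; we return a fixed answer.
    return "shoulder", "arthroscopic"

def dispatch(frame):
    procedure, view = classify_feed(frame)
    # Load only the models relevant to this procedure for real-time guidance.
    return PROCEDURE_MODELS.get((procedure, view), [])

models = dispatch(frame=None)
print(models)  # ['glenoid_segmenter', 'labrum_detector']
```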

Surgeons begin with an initial assessment of the patient’s anatomy and pathology before picking a course of action for the operation. The startup’s models aid in this process, working in tandem with computer vision algorithms to recognize and measure, for example, a 20 percent bone defect of the shoulder socket, or glenoid cavity, during the procedure.
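The core of such a measurement can be illustrated with a toy calculation: given a segmentation mask of the bone actually present and a mask of the expected intact bone, the defect is the missing fraction of the expected area. The masks and function below are illustrative toy data, not Kaliber Labs' actual measurement pipeline:

```python
# Illustrative glenoid bone-defect percentage from two binary masks.
# Toy data only; the real pipeline fits expected geometry from imaging.

def defect_percent(observed_mask, expected_mask):
    """Defect as the fraction of expected bone area missing from the
    observed (segmented) bone, expressed in percent."""
    expected = sum(sum(row) for row in expected_mask)
    observed = sum(sum(row) for row in observed_mask)
    if expected == 0:
        raise ValueError("expected mask is empty")
    return 100.0 * (expected - observed) / expected

# Toy 1-bit masks: expected intact glenoid vs. observed bone with a defect.
expected = [[1, 1, 1, 1, 1]] * 4   # 20 pixels of expected bone
observed = [[1, 1, 1, 1, 0]] * 4   # 16 pixels actually present
print(defect_percent(observed, expected))  # → 20.0
```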

Such real-time quantitative analyses provide orthopedic surgeons with greater objectivity and an extra layer of insight as they make intraoperative decisions.

So far, Kaliber Labs has finished developing its shoulder surgery algorithms and is working on its models for knee and hip procedures. Its deep learning tools are trained on thousands of hours of actual surgery videos, which are first processed by an AI algorithm that scrubs the footage to delete any personally identifiable information about patients and surgeons.
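The de-identification step amounts to locating identifying content in each frame and removing those pixels before the footage enters the training set. A minimal sketch, assuming the bounding boxes have already been produced by a detection model (the box format and function name here are hypothetical):

```python
# Hedged sketch of the PII-scrubbing step: given bounding boxes where a
# detector flagged identifying information (an assumed input), black out
# those pixels. Frames are toy nested lists standing in for image arrays.

def scrub_frame(frame, pii_boxes):
    """Zero out rectangular regions (y0, y1, x0, x1) flagged as PII."""
    for y0, y1, x0, x1 in pii_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                frame[y][x] = 0
    return frame

frame = [[255] * 4 for _ in range(3)]          # toy 3x4 grayscale frame
scrubbed = scrub_frame(frame, [(0, 1, 0, 2)])  # overlay text in top-left
print(scrubbed[0])  # → [0, 0, 255, 255]
```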

The startup recently signed an agreement with a major medical device company to build a Jetson Xavier-powered AI edge machine that integrates with operating room equipment to provide intraoperative guidance. To work in real time during a surgical procedure, Rahman says a GPU at the edge is essential.

“We run a cascade of models for detecting anatomy and pathology, and various measurement algorithms,” he said. “Since we’re doing real-time video inference, our inference has to occur in less than 30 milliseconds in order to avoid perceived lag by the surgeons.”
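A cascade under a hard real-time deadline is typically instrumented per stage, so engineers can see which model eats the budget. The harness below is a generic sketch of that idea using placeholder stages; only the 30 ms figure comes from the article:

```python
# Sketch of a per-stage timing check for a model cascade. The stage
# functions are placeholders; the 30 ms budget is the article's target.
import time

BUDGET_MS = 30.0

def run_cascade(frame, stages):
    """Run each stage in order, returning per-stage latencies (ms) and
    whether the whole cascade fit within the real-time budget."""
    timings = {}
    start = time.perf_counter()
    out = frame
    for name, fn in stages:
        t0 = time.perf_counter()
        out = fn(out)
        timings[name] = (time.perf_counter() - t0) * 1000.0
    total_ms = (time.perf_counter() - start) * 1000.0
    return out, timings, total_ms <= BUDGET_MS

stages = [
    ("anatomy_detect", lambda x: x),    # placeholders for real model calls
    ("pathology_detect", lambda x: x),
    ("measurement", lambda x: x),
]
_, timings, in_budget = run_cascade(frame=None, stages=stages)
print(in_budget)  # trivial placeholder stages finish well under 30 ms
```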

The NVIDIA Jetson platform enables edge computing with a combination of high GPU compute performance and low power usage. Kaliber Labs chose the Jetson Xavier embedded module due to its small footprint and wide range of options for systems integration, Rahman said.

Running on Jetson Xavier, the startup’s CNN binary classification model — optimized for inference using NVIDIA TensorRT software — has a latency of just 1.5 milliseconds.

For the Record: Analyzing Surgical Video Post-Op

After an operation, patients typically receive a short debrief from their surgeon, who shares key snapshots from the surgery. These photos or video segments have limited value to patients because, to the untrained eye, it’s hard to make sense of what’s taking place during the procedure without context and labels identifying the anatomy.

“Patients and their families want to know what the surgeons did, what they saw during the procedure,” Rahman said, “but nobody has time to manually annotate a whole video. That would take hours and days, and it’d be prohibitively expensive.”

Kaliber Labs is developing a set of AI models that analyze and label surgical video with descriptions of each step in the procedure. Providing patients with annotated footage could satisfy their curiosity about the operation and improve transparency about what took place during the surgery.

This kind of operative record could also facilitate accurate medical coding and efficient billing.

Main image shows an orthopedic surgeon performing ACL surgery. (U.S. Air Force photo/Airman 1st Class Kevin Tanenbaum)
