Robot trained on surgery videos matches human doctors’ skill
Researchers from Johns Hopkins University have successfully used imitation learning to train surgical robots, allowing them to learn complex tasks without being programmed for every individual move. This development is being highlighted at the Conference on Robot Learning in Munich, reports a Kazinform News Agency correspondent.
“It's really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery,” said senior author Axel Krieger, an assistant professor in JHU’s Department of Mechanical Engineering. “We believe this marks a significant step forward toward a new frontier in medical robotics.”
The team, along with Stanford researchers, trained the da Vinci Surgical System to perform three essential surgical tasks: needle manipulation, tissue lifting, and suturing. The robot performed these tasks as skillfully as human doctors.
The model combines imitation learning with the machine learning architecture behind ChatGPT, but instead of working with language, it works with robot kinematics, a representation that breaks robotic motion down into mathematical data.
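As a rough illustration of that idea, the sketch below shows how a transformer-style imitation-learning policy can map wrist-camera frames to kinematic actions and be trained by behavior cloning on recorded demonstrations. This is not the team’s published code; the module names, layer sizes, and the seven-dimensional action format are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): an imitation-learning policy that
# pairs a vision encoder with a transformer and predicts robot kinematics
# (end-effector pose deltas) instead of language tokens.
import torch
import torch.nn as nn

class SurgicalImitationPolicy(nn.Module):
    def __init__(self, action_dim=7, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        # Toy CNN standing in for the wrist-camera image backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Transformer over the sequence of per-frame image features.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Head maps each frame's feature to a kinematic action, e.g.
        # 3 position deltas + 3 orientation deltas + 1 gripper command.
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) wrist-camera clips
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        return self.action_head(self.transformer(feats))

# Behavior cloning: regress the demonstrated kinematics from video frames.
policy = SurgicalImitationPolicy()
frames = torch.randn(2, 10, 3, 96, 96)      # dummy demonstration clips
expert_actions = torch.randn(2, 10, 7)      # recorded robot kinematics
loss = nn.functional.mse_loss(policy(frames), expert_actions)
loss.backward()
```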
The team trained the model using hundreds of surgical videos recorded by wrist cameras on da Vinci robots. These recordings are routinely archived after surgeries and provide valuable data for training robots; with nearly 7,000 da Vinci robots in use worldwide and more than 50,000 surgeons trained on the system, the archive of such footage is vast.
Though the da Vinci system is known to be imprecise, the team trained the model to output relative movements rather than absolute positions, which made its actions more accurate. “All we need is image input and then this AI system finds the right action,” said lead author Ji Woong “Brian” Kim, a postdoctoral researcher at Johns Hopkins. “We find that even with a few hundred demos, the model is able to learn the procedure and generalize to new environments it hasn’t encountered.” The model even adapts to unexpected situations, such as automatically retrieving a dropped needle.
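The relative-movement idea can be sketched in a few lines: rather than regressing absolute tool poses, the training targets are frame-to-frame deltas, which at execution time are applied to wherever the tool actually is, so calibration error does not accumulate. The function names and toy trajectory below are illustrative assumptions, not the team’s code.

```python
# Minimal sketch of training on relative rather than absolute actions.
import numpy as np

def to_relative_actions(poses):
    """Convert a demo trajectory of absolute end-effector positions
    (T, 3) into per-step deltas (T-1, 3) used as training targets."""
    return np.diff(poses, axis=0)

def apply_relative_action(current_pose, delta):
    """At execution time, the predicted delta is applied to the tool's
    actual current pose, so small calibration offsets don't compound
    the way absolute position targets would."""
    return current_pose + delta

demo = np.array([[0.00, 0.00, 0.10],
                 [0.01, 0.00, 0.09],
                 [0.02, 0.01, 0.08]])    # toy absolute positions (metres)
deltas = to_relative_actions(demo)
print(apply_relative_action(demo[0], deltas[0]))
```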
This approach could help train robots to perform complete surgeries in the future. Previously, programming a robot for surgery was time-consuming, with every movement hand-coded. Krieger emphasized, “We only have to collect imitation learning of different procedures, and we can train a robot in a couple days. It allows us to accelerate toward autonomy, reduce errors, and improve surgical precision.”