Detailed Abstract
[E-poster - Biliary & Pancreas (Others(ERAS, Education etc.))]
[EP 119] Development And Validation of a Novel Model for Surgical Instrument Recognition during Laparoscopic Cholecystectomy Using AI-based Automated Surgical Instrument Detection System
Jae Hyun KWON 1, Taeyong PARK 2, Soeui KIM 2, Jung-Woo LEE 1, Jong Woo LEE 1, Bum-Joo CHO 2
1 Department of Surgery, Hallym University Sacred Heart Hospital, REPUBLIC OF KOREA, 2 Medical Artificial Intelligence Center, Hallym University Medical Center, REPUBLIC OF KOREA
Background : Accurate identification of the surgical instruments in use is crucial for improving the efficiency of surgical procedures and avoiding complications such as retained instruments. However, identifying instruments manually from surgical video is time-consuming and challenging for researchers. To address this, we developed a system that detects laparoscopic surgical instruments during laparoscopic cholecystectomy using a model trained on virtually created images. We aimed to evaluate the system's performance and establish an effective method for instrument detection during this surgery.
Methods : Virtual laparoscopic surgical video images were generated by compositing images of laparoscopic surgical instruments onto background frames taken from laparoscopic cholecystectomy videos. Background images underwent random adjustments in brightness and contrast, while the instrument images underwent diverse modifications, including changes in brightness and contrast, width cropping, rotation, cutting, scaling, flipping, and perspective transformation. These transformations aimed to create realistic virtual images.
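The abstract gives no implementation details, but the compositing and augmentation step it describes can be sketched as follows. This is a minimal illustrative example in Python with OpenCV and NumPy; the file names, parameter ranges, and specific augmentations are assumptions, not the authors' actual pipeline.

import random
import numpy as np
import cv2

def jitter_brightness_contrast(img, alpha_range=(0.8, 1.2), beta_range=(-30, 30)):
    # Randomly scale contrast (alpha) and shift brightness (beta).
    alpha = random.uniform(*alpha_range)
    beta = random.uniform(*beta_range)
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)

def augment_instrument(tool_rgba):
    # Random rotation, scaling, and horizontal flip of an RGBA instrument cutout.
    h, w = tool_rgba.shape[:2]
    angle = random.uniform(-45, 45)
    scale = random.uniform(0.6, 1.2)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    tool_rgba = cv2.warpAffine(tool_rgba, M, (w, h), borderValue=(0, 0, 0, 0))
    if random.random() < 0.5:
        tool_rgba = cv2.flip(tool_rgba, 1)
    return tool_rgba

def composite(background_bgr, tool_rgba, top_left):
    # Alpha-blend the instrument cutout onto the background frame and return
    # the virtual image plus the bounding-box label for the detector.
    # Assumes the cutout fits entirely within the frame at top_left.
    th, tw = tool_rgba.shape[:2]
    y, x = top_left
    roi = background_bgr[y:y + th, x:x + tw].astype(np.float32)
    alpha = tool_rgba[:, :, 3:4].astype(np.float32) / 255.0
    blended = alpha * tool_rgba[:, :, :3].astype(np.float32) + (1 - alpha) * roi
    background_bgr[y:y + th, x:x + tw] = blended.astype(np.uint8)
    return background_bgr, (x, y, x + tw, y + th)

# Example usage (paths are placeholders):
bg = jitter_brightness_contrast(cv2.imread("background_frame.png"))
tool = augment_instrument(cv2.imread("grasper_cutout.png", cv2.IMREAD_UNCHANGED))
virtual_image, bbox = composite(bg, tool, top_left=(100, 150))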
Results : The training dataset comprised 4100 virtual images generated from 41 laparoscopic cholecystectomy videos, with an additional 578 real images from 48 patients serving as the external validation dataset. After 500 iterations of training, the mean average precision (mAP) for instrument detection on the internal and external validation datasets was 0.993 and 0.841, respectively, at an intersection over union (IoU) threshold of 0.25. For instrument classification, the mAP on the internal and external validation datasets was 0.999 and 0.959, respectively, at the same IoU threshold.
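For context on the reported metrics: a predicted box is counted as a correct detection when its intersection over union (IoU) with a ground-truth box meets the chosen threshold (0.25 here), and mAP then averages precision across recall levels and instrument classes. A minimal IoU sketch in Python, with boxes given as (x1, y1, x2, y2):

def iou(box_a, box_b):
    # Overlap area divided by union area of two axis-aligned boxes.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# e.g. iou((10, 10, 50, 50), (30, 30, 70, 70)) ≈ 0.14, below the 0.25 threshold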
Conclusions : This laparoscopic surgical instrument detection system offers a valuable tool for clinical and research communities, potentially enhancing the efficiency of video review processes in various minimally invasive surgeries.
SESSION
E-poster
E-Session 03/21 ALL DAY