Artificial Intelligence

30/11/2020

Monz Ahmed, Consultant Gastroenterologist in Birmingham, reports on Artificial Intelligence, the big-ticket topic at the virtual UEG Week: four entire sessions were dedicated to it!
Transcript
SESSION 1: AI for automation in endoscopy and surgery (3 talks)
1.      State of the Art and general perspectives on AI / D. Stoyanov, UCL

AI can be useful for solving 3 types of challenge:
1. Navigation: within the environment, shape of the lumen, identifying anatomy
2. CADe (computer-aided detection): mature technology, with regulatory approval, available for clinical use
3. CADx (computer-aided diagnosis): emerging technology
 
Examples of AI in endoscopy:
■  CADDIE (Odin Vision Ltd, a start-up): polyp detection software; a rectangle is drawn around the polyp. The system uses real-time machine learning algorithms to analyse colonoscopy images and support doctors in identifying and characterising polyps during colonoscopy procedures. The system is cloud-deployed and has the capability to scale across the whole of the NHS.
 
■  Showed an abstract on polyp segmentation using a hybrid 2D/3D CNN (Convolutional Neural Network), evaluated in 46 patients with 53 polyps and 560,000 image frames; superior to a purely spatial (2D) model.

(The term “convolutional neural network” indicates that the network employs a mathematical operation called convolution. Convolution is a specialized kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers).
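To make that parenthetical concrete, here is a minimal sketch, in plain NumPy and not from the talk, of the convolution operation that gives CNNs their name: a small kernel slides over the image, so the same few weights are reused at every spatial position instead of a full weight matrix. The function and kernel are illustrative.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNN layers)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Same small set of weights applied at every position of the image
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

frame = np.random.rand(8, 8)          # stand-in for one grayscale endoscopy frame
edge_kernel = np.array([[1, 0, -1],   # simple vertical-edge detector
                        [1, 0, -1],
                        [1, 0, -1]])
print(conv2d(frame, edge_kernel).shape)  # (6, 6)
```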
 
■ Apart from polyp detection, the speaker also showed software that can detect various upper GI structures in an endoscopy video (He et al 2020: deep learning based anatomical site classification for upper GI endoscopy).
This AI is trained to run over a video to make sure that predefined sites have been recorded.
 
■  Also showed software that estimates the 3D shape of the environment and how the camera moves within it, enabling a fly-through simulation.
 
Q&A session:
Q. Pilcam and AI: reduce time to analyse… use AI as support tool, still need Dr to check. Small bowel may be ideal place to use AI to reduce analysis time.
Q. Automation of resection (EMR, ESD etc)?... detection of polyp can be done, far away from resection.
Q.  AI fitting into clinical workflow?....  supportive tool
 
 
2.      Exploring autonomy in Robotic Colonoscopy/ P. Valdastri from Leeds (Chair in Robotics and autonomous systems, director of STORM)
 
General discussion on autonomy:
The speaker referred to an editorial in Science Robotics (Yang et al 2017) which defines six grades of autonomy:
https://robotics.sciencemag.org/content/2/4/eaam8638.full
 
The levels run from no autonomy (the robot is fully operated by the surgeon, e.g. the da Vinci system) through robot assistance, task autonomy and conditional autonomy to high autonomy and full automation; a code sketch follows the list.
Level 0: No autonomy. This level includes tele-operated robots or prosthetic devices that respond to and follow the user’s command. A surgical robot with motion scaling also fits this category because the output represents the surgeon’s desired motion.
Level 1: Robot assistance. The robot provides some mechanical guidance or assistance during a task while the human has continuous control of the system. Examples include surgical robots with virtual fixtures (or active constraints) and lower-limb devices with balance control.
Level 2: Task autonomy. The robot is autonomous for specific tasks initiated by a human. The difference from Level 1 is that the operator has discrete, rather than continuous, control of the system. An example is surgical suturing: the surgeon indicates where a running suture should be placed, and the robot performs the task autonomously while the surgeon monitors and intervenes as needed.
Level 3: Conditional autonomy. A system generates task strategies but relies on the human to select from among different strategies or to approve an autonomously selected strategy. This type of surgical robot can perform a task without close oversight. An active lower-limb prosthetic device can sense the wearer’s desire to move and adjust automatically without any direct attention from the wearer.
Level 4: High autonomy. The robot can make medical decisions but under the supervision of a qualified doctor. A surgical analogy would be a robotic resident, who performs the surgery under the supervision of an attending surgeon.
Level 5: Full autonomy (no human needed). This is a “robotic surgeon” that can perform an entire surgery. This can be construed broadly as a system capable of all procedures performed by, say, a general surgeon. A robotic surgeon is currently in the realm of science fiction.
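As a minimal sketch (not from the talk), the Yang et al grades can be encoded as a Python IntEnum so that a robotic-endoscopy codebase could gate features by level; the names and helper function are illustrative.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    NO_AUTONOMY = 0           # tele-operation only (e.g. da Vinci-style control)
    ROBOT_ASSISTANCE = 1      # mechanical guidance, human in continuous control
    TASK_AUTONOMY = 2         # discrete tasks initiated and monitored by a human
    CONDITIONAL_AUTONOMY = 3  # robot proposes strategies, human approves
    HIGH_AUTONOMY = 4         # robot decides under a doctor's supervision
    FULL_AUTONOMY = 5         # no human needed (science fiction for now)

def requires_continuous_human_control(level: AutonomyLevel) -> bool:
    """Levels 0-1 keep the operator in continuous control of the system."""
    return level <= AutonomyLevel.ROBOT_ASSISTANCE

print(requires_continuous_human_control(AutonomyLevel.TASK_AUTONOMY))  # False
```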
 
The speaker showed a demonstration of the Magnetic Flexible Endoscope (MFE), which originated from a European project in 2010. Magnetic coupling is used to pull the tip of the endoscope, reducing trauma; it has been likened to a “front wheel drive” endoscope. The body of the scope does not need to be stiff because it is being pulled. The device has an illumination module, camera, irrigation nozzle and instrument channel.
 
In early studies (Arezzo et al 2013), the user controlled external magnets with joysticks in an ex vivo colon model: navigation and diagnostic accuracy were comparable to standard colonoscopy, but the robotic procedure was 3x slower!
 
The system was enhanced with real-time pose/force detection, which allows it to sense the position of the scope tip in real time.
 
1st level of automation: robot-supervised tele-operation. The user controls the tip of the endoscope with a joystick in a model while watching the image on a screen; four-way movement, like driving a car. The robot decides how to move the external magnets in response to the joystick input.
 
2nd level: task automation
- e.g. autonomous retroflexion: at the press of a button, the system computes the best trajectory for retroflexion and moves the external magnets. 100% success in pig models; the task takes 11 seconds on average.
- autonomous micro-ultrasound imaging: animal experiments
 
3rd level: autonomous navigation with lumen detection (Martin et al 2020)
In non-live models: average time to caecum 4 minutes, 10 users, 5 repetitions each, 100% success. The system identifies the lumen in the image and directs the tip in that direction (a heuristic sketch follows).
Validated in a pig model: could navigate up to 85 cm into the colon.
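The talk does not spell out the lumen-detection algorithm, so the sketch below shows a classic heuristic one could use for this step: the lumen is usually the darkest region of the frame, so segment the dark pixels and steer the tip toward their centroid. The function name and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def lumen_steering_vector(frame_bgr: np.ndarray, dark_quantile: float = 0.05):
    """Return a (dx, dy) steering offset from image centre toward the lumen."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)            # suppress specular noise
    thresh = np.quantile(gray, dark_quantile)           # darkest 5% of pixels
    mask = (gray <= thresh).astype(np.uint8)            # candidate lumen region
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                                   # no dark region found
        return 0.0, 0.0
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # lumen centroid
    h, w = gray.shape
    return cx - w / 2, cy - h / 2                       # offset to steer toward
```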
 
Q&A session:
Q. Magnets pull from front of tip = front wheel drive (cf rear wheel drive for normal colonoscopy)… so the body of the scope is very flexible… “looping will be negligible”, less force used, less pain thoretically.
Q. How to handle peristalsis?: lumen is insufflated, “peristalsis is not a problem”
Q. Sharp flexure/ angulation?: “able to navigate pretty sharp bends” with magnets + insufflation
  
3.      Lower GI polyp detection and differentiation / A. Repici, Milan
CRC incidence is increasing in the USA, EU and elsewhere.
Adenomas may be missed in 27% of colonoscopies.
Adenoma detection rate varies a lot in Italy: 1.7%–36.8% (Zorzi 2017, ~50,000 colonoscopies); the threshold is 20%.
 
Colonoscopy is an imperfect tool: missed polyps, interval cancers, ADR variability among operators, heterogeneity in histology prediction.
Human factors: skill, dedication, image interpretation, frame capture, speed of analysis.
 
AI Universe in endoscopy:
-Fujifilm: CAD-EYE  *
-Medtronic: GI-Genius *
-Olympus: Endobrain
-Pentax: Discovery *
-AI-Wilson
-Endo-Angel *
-Doc-bot
-AI4GI
-NEC
 
SESSION 2: Will AI change our practice in endoscopy?
1.     Moderated poster / Y. Mori: Japan / Oxford / Norway / USA
Looked at the economic benefits of AI in colonoscopy.
The study was an add-on analysis of a clinical trial (Ann Intern Med 2018) that investigated the performance of AI in differentiating colorectal polyps (neoplastic vs non-neoplastic), achieving >90% NPV in the rectosigmoid. All patients with diminutive (≤5 mm) rectosigmoid polyps were included in the analysis. N=250.

Two scenarios were analysed:
A: a diagnose-and-leave strategy supported by AI (i.e. polyps the AI predicted to be non-neoplastic were left in place):
105 polyps removed, 145 polyps left
B: a resect-all-polyps strategy:
250 polyps removed, no polyps left
 
Strategy A reduces costs by 7–20% depending on the country = millions of dollars per year (a back-of-envelope sketch follows).
Conclusions: AI with diagnose-and-leave saves money.
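As a back-of-envelope illustration (not from the talk, which reported only the 7–20% range), the comparison reduces to simple arithmetic over the polyp counts; every unit cost below is an invented placeholder, and one procedure per polyp is assumed for simplicity.

```python
COLONOSCOPY_COST = 1500.0   # hypothetical base cost per procedure
RESECTION_COST = 100.0      # hypothetical added cost per polypectomy
HISTOLOGY_COST = 80.0       # hypothetical cost per specimen sent to pathology
N_PROCEDURES = 250          # simplifying assumption: one procedure per polyp

def total_cost(n_polyps_removed: int) -> float:
    procedures = N_PROCEDURES * COLONOSCOPY_COST
    polypectomies = n_polyps_removed * (RESECTION_COST + HISTOLOGY_COST)
    return procedures + polypectomies

cost_leave = total_cost(105)    # strategy A: AI-supported diagnose-and-leave
cost_resect = total_cost(250)   # strategy B: resect all polyps
print(f"relative saving: {(cost_resect - cost_leave) / cost_resect:.1%}")
# ~6% with these made-up unit costs; the study reported 7-20% across countries
```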
 
The study was subsequently accepted for publication in the GIE journal, October 2020 issue (YouTube video).
 

SESSION 3: AI: abstract-based session (3 talks)
1.      Size Matters: is AI using computer vision better than humans at sizing colonic polyps? / Mo Abdelrahim … P. Bhandari / Portsmouth + Japan
 
Polyp size is an important biomarker:
-          related to the risk of dysplasia/adenocarcinoma
-          therapeutic implications, e.g. resect and discard
-          the 5 mm cutoff is important
-          polyp size is hard to estimate
Aim: to develop an automated system for binary classification of polyp size,
              and to compare its performance with that of endoscopists at various levels of experience.
Method: artificially made, pre-measured polyps were fixed in pig models;
colonoscopy of the pig colon was then performed and recorded.
Computer Vision (CV) was used.

Q&A 
Computer Vision is technology used… not deep learning…
Structure for Motion = algorithm used…3D image constructed from 2 D image using triangulation.
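For illustration, here is a minimal sketch of the triangulation step that Structure from Motion relies on, using OpenCV's cv2.triangulatePoints: the same point seen from two camera positions with known projection matrices is lifted back to 3D. The matrices and pixel coordinates are invented for the example.

```python
import cv2
import numpy as np

# 3x4 projection matrices P = K [R|t]; K = identity for simplicity
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # first view
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])  # second view, shifted along x

# The same scene point observed in each image (2xN arrays of image coords)
pts1 = np.array([[0.1], [0.2]])
pts2 = np.array([[0.3], [0.2]])

Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous coordinates
X = (Xh[:3] / Xh[3]).ravel()                    # back to Euclidean 3D
print(X)  # ~[0.5 1.0 5.0]: consistent with both observations
```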
 
 
2. Machine Learning Models for the Prediction of Risk of Gastric Cancer after HP Eradication Therapy / W. Leung, HK, China
H. pylori (HP) is a class I gastric carcinogen; the risk of gastric cancer (GC) is at least 2x in HP-infected individuals.
HP eradication reduces cancer risk by 46%, but some patients will progress to GC even after eradication.
Deep machine learning was used to predict GC risk after HP eradication.
Training set (64k patients) and validation cohort (25k patients).
26 clinical variables were used in the models.
Outcome: development of GC within 5 years of HP eradication.
7 different algorithms were analysed, with an ROC curve for each.
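A hedged sketch of how several algorithms might be compared by ROC AUC on a binary outcome (GC within 5 years: yes/no). Synthetic data stands in for the 26 clinical variables, and the two scikit-learn models are stand-ins: the study's actual algorithms are not listed here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic rare-outcome data with 26 features, mimicking 26 clinical variables
X, y = make_classification(n_samples=5000, n_features=26, weights=[0.99],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])  # area under ROC
    print(f"{name}: AUC = {auc:.3f}")
```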


3. Inexperienced endoscopists can reach expert level in detecting and characterising colorectal polyps by using a validated detection and characterisation system / J. Weigt
There is a strong need to increase ADR (to reduce the risk of cancer).
Still, 1 in 5 polyps is missed at colonoscopy.
Fujifilm developed CAD EYE to detect and characterise polyps.
Aim: to evaluate the above system for polyp detection and characterisation.
Methods: 4 centres: Magdeburg, Milan, Rome, Mainz.
Eluxeo 700 series (Fujifilm).
Annotation according to findings and histology.
Development of the CAD EYE system.

The system gives the likelihood of neoplastic or hyperplastic polyps.
Validation of the CAD EYE system on still images:
              3 experts and 3 beginners
              experts alone vs non-experts + CAD EYE
              Detection: 458 WLE images, 455 LCI images
              Characterisation: 133 WLE, 134 BLI images
Images were presented for 5 seconds.

Conclusions: the new system has an impact on adenoma detection and correct classification of polyps. It is beneficial for non-experts; experts may also use it.
Weakness: no comparison without CAD EYE (how big is the increase?).
Trials are planned for real-time use in clinical settings.
 
SESSION 4: Beyond our Eyes: AI-enhanced endoscopy (5 talks)

1.      DEVELOPMENT OF AN ORIGINAL AUTOMATED METHOD OF THREE-DIMENSIONAL RECONSTRUCTION OF AN EXTENDED FIELD OF VIEW OF THE GASTRIC ANTRUM by T. Bazin/ France
 
Problem: detailed description of the digestive mucosa at endoscopy lacks inter- and intra-observer reproducibility.
3D reconstruction of a mucosal surface from endoscopic images offers:
              reproducibility
              reinterpretation over time
No method of extending the 3D field of view has been described for the digestive tract.
 
Aim: use an AI algorithm to reconstruct an extended, detailed 3D field of view of the antrum using recordings of endoscopy in WL and BG light.
8 HD videos (Olympus) were used to train the system.
Correction of camera distortion was applied.
Reconstruction of the mucosal surface involved 3 stages of complex calculations; a point cloud is used to build a mesh surface.
Fully automated method: it can deliver a 3D surface of the antrum about 1 hour after the end of the recording.
A precise 3D reconstruction of the surface of the antral mucosa was obtained.
 
Pradeep Bhandari asked about the clinical value of this system; the answer was vague.
Resolution may be limited; the group is working on higher resolution.

2.      HIGHLY ACCURATE AI SYSTEMS TO PREDICT THE INVASION DEPTH OF GASTRIC CANCER: EFFICACY OF CONVENTIONAL WHITE-LIGHT IMAGING, NON-MAGNIFYING NARROW-BAND IMAGING AND INDIGO-CARMINE DYE CONTRAST IMAGING by S. Nagao/ Tokyo

Gastric cancer (GC) is the 2nd or 3rd leading cause of cancer-related death in the world.
5-year survival is high among patients with early GC.
Early GC is a good target for endoscopic resection, so early detection is important.

Macroscopic features and EUS are not very accurate in diagnosing early GC.

Previous reports (Zhu et al / Yoon et al): AI accuracy 0.8916 / –, PPV 0.8966 / 0.780, NPV 0.8897 / 0.793.
This study: the aim was to develop new AI systems to more accurately predict the depth of invasion of GC.
 
60,000 images were collected from 1,800 cases of GC treated with oncological surgery.
Cases were randomly assigned to training or test sets in a 4:1 ratio.
The AI examined images in WLI, NBI and indigo carmine contrast, and output a probability score for invasive cancer.

Results:
Baseline characteristics were similar.

Definition of a correct diagnosis: if ≥5/10 images of the same lesion were correctly diagnosed, the lesion-level diagnosis was counted as “correct” (see the sketch below).
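A minimal sketch of that per-lesion rule: a lesion counts as correctly diagnosed when at least 5 of its 10 images are individually correct. The function name is illustrative.

```python
def lesion_correct(image_results: list[bool], threshold: int = 5) -> bool:
    """image_results holds one True/False per image of the same lesion."""
    return sum(image_results) >= threshold

print(lesion_correct([True] * 5 + [False] * 5))  # True: exactly 5/10 correct
print(lesion_correct([True] * 4 + [False] * 6))  # False: only 4/10 correct
```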

3.      USEFULNESS OF THE ALGORITHM OF ALL-IN-FOCUSED IMAGES IN IMAGE-ENHANCED ENDOSCOPY FOR COLORECTAL NEOPLASM by T. Yamamura / Nagoya, Japan
Magnifying endoscopy with image-enhanced endoscopy (IEE) is useful in assessing the invasion depth of colorectal neoplasms.
Some parts of the image may be in focus and other parts out of focus because of the depth of the target plus peristalsis.

The AIF (all-in-focus) algorithm combines many images to make one image that is fully in focus; scores for surface pattern, vessel pattern, recognition of diagnosis (JNET classification) and pit pattern all improved with AIF (a sketch of the idea follows below).
Not much difference in decision time; no significant difference in accuracy.
The technique may be of benefit for beginners.
The lag in processing an image is currently 30 seconds; with increased processing power, it may be possible to process images in real time.
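The AIF system's internals aren't described, so this is a hedged sketch of classic focus stacking, which produces the same effect: for each pixel, keep the value from whichever frame is sharpest there, scored by local Laplacian magnitude. Names are illustrative.

```python
import cv2
import numpy as np

def all_in_focus(frames: list[np.ndarray]) -> np.ndarray:
    """Fuse aligned same-scene grayscale frames into one all-in-focus image."""
    stack = np.stack(frames)                                  # (n, h, w)
    sharpness = np.stack([
        np.abs(cv2.Laplacian(f, cv2.CV_64F, ksize=3)) for f in frames
    ])                                                        # per-pixel focus measure
    best = np.argmax(sharpness, axis=0)                       # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]   # pick those pixels
```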
 
4.      DEVELOPMENT AND REGULATORY APPROVAL OF AN ARTIFICIAL INTELLIGENCE-ASSISTED DETECTION SYSTEM FOR COLONOSCOPY by T. Matsuda / Tokyo and Nagoya, Japan
Colonic polypectomy reduces CRC mortality: by 53% over 20 years.
ADR is a quality indicator for colonoscopy (threshold 22%).
 
Several CADe systems are currently available in the EU, all with EU approval: GI Genius (Medtronic), Discovery (Pentax), CAD EYE (Fujifilm).
The authors have developed a CADe system and obtained regulatory approval: the FIRST officially approved AI system in Japan.
There were a lot of small polyps in the dataset.
Open question: how effective is the AI at picking up SSLs and NGLSTs?
Colitis patients were excluded.
 
5.      ARTIFICIAL INTELLIGENCE USING CONVOLUTIONAL NEURAL NETWORKS FOR DETECTION OF EARLY BARRETT'S NEOPLASIA by M. Abdelrahim / Portsmouth and Tokyo
The incidence of Barrett’s neoplasia has risen in recent years.
Early detection is key to improving prognosis.
Early Barrett’s neoplasia can be difficult to detect at endoscopy.
Hence quadrantic biopsies: expensive, time-consuming, and with a miss rate.
This talk: on detection and delineation of Barrett’s mucosa.
Aim: to develop and validate a deep learning system for detection and delineation of Barrett’s neoplasia.
Method: data collection:
-          621 HD white light images of neoplastic BE from 43 patients
-          23,183 images/frames of non-neoplastic BE from 44 patients
-          histologically confirmed
Data interpretation:
-          marked and annotated using specially designed software
-          reviewed by 2 experts

The data were divided into 3 subsets used for training, validation and testing of the system.
Visual Geometry Group (VGG) architecture for binary classification (see the sketch below).
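A hedged sketch of how a VGG backbone might be adapted for the binary neoplastic-vs-non-neoplastic classification; torchvision's vgg16 is an assumption (the study's exact VGG variant isn't stated), and the input tensors are dummies.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=None)        # could also load ImageNet weights
model.classifier[6] = nn.Linear(4096, 2)  # swap 1000-class head for 2 classes

frames = torch.randn(4, 3, 224, 224)      # dummy batch of endoscopy frames
logits = model(frames)                    # shape (4, 2)
probs = torch.softmax(logits, dim=1)      # P(non-neoplastic), P(neoplastic)
print(probs.shape)
```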
SegNet architecture for delineation
Processing speed is very fast compared to the human visual response.
For segmentation, a metric called IoU (intersection over union) was used: it measures the overlap between the correct position and the estimated position.
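A minimal sketch of the IoU metric for binary segmentation masks: the overlap between predicted and ground-truth regions divided by their combined area.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred and truth are boolean masks of the same shape."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / union if union else 1.0  # two empty masks match

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True      # 2x2 predicted region
b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True      # 2x2 shifted ground truth
print(iou(a, b))  # 2 overlapping pixels / 6 in the union = 0.333...
```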
Good results, but room for improvement.
A heat-map hot spot appears where the lesion is.
The system can also delineate more subtle lesions: impressive.
Delineation works in real time.
 
Conclusion:
High sensitivity, specificity and accuracy.
Ultra-short processing time.
Needs validation in larger-scale, real-time studies.
Q: HD white light was used in this study; maybe indigo carmine or NBI could assist the AI.
Q: AI in training? Yes, AI will help trainees. Will AI make people lazy?!