CARS 2019 Tutorial


Deep learning and computer vision for real-time procedure annotation

Date and time: June 18, 2019,  1:30 pm – 5:30 pm
Location: CARS conference location – Salle 8

Thank you to everyone who participated in this tutorial. We will migrate this material to the main tutorial page soon.

How to prepare

If you would like to follow the hands-on tutorial, please bring your own laptop and complete as much of the following setup as you can before the event:

  • Our main platform is Windows, but the tutorial should also work on macOS. Most of it works on Linux as well, except for real-time compressed video saving.
  • To save time at the event, please review the first part of the hands-on tutorial and set up as much of the software as you can beforehand.
  • Tutorial data: download the HandData folder from this link.

Event organizers

  • Javier Pascau, Universidad Carlos III de Madrid, Madrid, Spain
  • Tamas Ungi, Queen’s University, Kingston, Canada
  • Sonia Pujol, Harvard Medical School, Boston, MA, USA
  • Gabor Fichtinger, Queen’s University, Kingston, Canada

Hands-on session instructors

  • David García Mato, Universidad Carlos III de Madrid, Madrid, Spain
  • Mark Asselin, Queen’s University, Kingston, Canada

The tutorial consists of two sessions. In the first, invited speakers give an overview of open-source resources and share their vision for future applications of these research tools. In the second, the audience will build working surgical video annotation software on their own laptops, using devices provided by the presenters. Participants will gain hands-on experience in the basics of interfacing with hardware devices, data management, deep learning, and real-time deployment of trained image classifiers.
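
To give a flavor of the final step, the sketch below classifies live video frames with a trained image classifier. It is only an illustration under assumed tools: OpenCV for frame capture and a Keras model saved as hand_classifier.h5 (a hypothetical file name and label set); the hands-on session builds its pipeline with its own platform and data.

    # Minimal sketch: real-time annotation of video frames with a trained classifier.
    # Assumptions (not part of the tutorial materials): OpenCV for capture,
    # TensorFlow/Keras for inference, and the hypothetical model file and labels below.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("hand_classifier.h5")   # hypothetical trained model file
    labels = ["no_hand", "hand"]               # hypothetical class labels

    cap = cv2.VideoCapture(0)                  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize and rescale the frame to the model's assumed 128x128 input.
        x = cv2.resize(frame, (128, 128)).astype(np.float32) / 255.0
        probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
        label = labels[int(np.argmax(probs))]
        # Overlay the predicted label on the frame for real-time feedback.
        cv2.putText(frame, label, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("annotation", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()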

Preliminary program

Time | Program
1:30 | Welcome and introduction
1:40 | Alexandra Golby, Harvard Medical School, USA
2:00 | Danail Stoyanov, University College London, UK
2:20 | Tamas Ungi, Queen’s University, Canada
2:45 | Hands-on tutorial, part 1 (software and data download/setup)
3:30 | Coffee break
4:00 | Hands-on tutorial, part 2 (data collection, deep learning, and trained model deployment)
5:30 | Adjourn


Acknowledgment

Special thanks to Andras Lasso, Kyle Sunderland, Jean-Christophe Fillion-Robin, Sam Horvath, Hastings Greer, and Stephen Aylward for their efforts in making this tutorial possible. This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1, Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE’s Research Software Program.
