Adrien Bartoli - Code and Datasets

A Methodology and Clinical Dataset with Ground-truth to Evaluate Registration Accuracy Quantitatively in Computer-assisted Laparoscopic Liver Resection

N. Rabbani, L. Calvet, Y. Espinel, B. Le Roy, M. Ribeiro, E. Buc and A. Bartoli

Purpose

Augmented Reality (AR) can assist Laparoscopic Liver Resection (LLR) by registering a preoperative 3D model to the laparoscopic images. Evaluating the accuracy of registration methods is therefore crucially important. We provide an evaluation methodology with two criteria and a patient dataset with ground-truth tumour positions, in order to establish a benchmark for registration methods. The ground truth was acquired using a Laparoscopic Ultrasound (LUS) probe coregistered with the laparoscope. The two evaluation criteria are the Inclusion Criterion (IC), which is strict and necessary for safe clinical use, and a specifically adapted measure of Target Registration Error (TRE). The IC is binary: it is passed if and only if all the LUS tumour profiles lie within the registration-predicted tumour augmented by the 1 cm oncologic margin. The TRE is computed as the average minimal distance between each LUS tumour profile and the registration-predicted tumour volume.
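The two criteria can be sketched as follows. This is an illustrative Python sketch, not the dataset's actual MATLAB code: it assumes the predicted tumour surface is available as a sampled point cloud, and that the LUS profile points lie outside the predicted tumour, so that the 1 cm margin test reduces to a surface-distance test.

```python
import math

def min_surface_dist(point, surface):
    """Minimal Euclidean distance (mm) from one point to a sampled surface."""
    return min(math.dist(point, q) for q in surface)

def tre(profiles, surface):
    """Adapted TRE: average minimal distance between each LUS tumour
    profile point and the registration-predicted tumour surface."""
    d = [min_surface_dist(p, surface) for prof in profiles for p in prof]
    return sum(d) / len(d)

def inclusion_criterion(profiles, surface, margin_mm=10.0):
    """Binary IC: passed iff every LUS profile point lies within the
    predicted tumour augmented by the oncologic margin (assumes the
    profile points are outside the predicted tumour)."""
    return all(min_surface_dist(p, surface) <= margin_mm
               for prof in profiles for p in prof)

# Toy example with a degenerate one-point 'surface' at the origin.
surface = [(0.0, 0.0, 0.0)]
profiles = [[(3.0, 0.0, 0.0), (0.0, 4.0, 0.0)]]
print(tre(profiles, surface))                  # 3.5
print(inclusion_criterion(profiles, surface))  # True: both points within 10 mm
```

The actual evaluation in Evaluation.m operates on the full tumour volume rather than a point cloud; the sketch only illustrates the definitions.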

We strongly encourage users to send us their registration or evaluation results for inclusion on this page.

Resources and Downloads

The dataset contains a folder for each patient. It is distributed together with code and results archives, as well as additional files.

User-guide

  1. Uncompress the dataset and code archives in a common folder; the results archive is optional.
  2. Run your registration method on all images of all patients; for that, you may follow the Registration.m script, which shows how to load the data and save the results. Save the registered 3D tumour model in STL format, using the same folder structure as for the already evaluated methods in the results archive.
  3. Edit lines 5 and 6 of Evaluation.m to set the paths to the dataset and to your registration results.
  4. Run Evaluation.m to calculate the IC and TRE.
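Step 2 requires saving the registered tumour model in STL format. As a minimal sketch, an ASCII STL file can be written as below; this is illustrative only (the project's own scripts are in MATLAB, and the exact output path and folder layout should be copied from the results archive).

```python
def write_ascii_stl(path, triangles, name="tumour"):
    """Write a triangle mesh to an ASCII STL file.

    `triangles` is an iterable of 3-tuples of (x, y, z) vertices.
    Facet normals are written as zero vectors, which most STL
    readers accept and recompute from the vertices.
    """
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v0, v1, v2 in triangles:
            f.write("  facet normal 0 0 0\n")
            f.write("    outer loop\n")
            for x, y, z in (v0, v1, v2):
                f.write(f"      vertex {x:.6f} {y:.6f} {z:.6f}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# Example: save a single triangle (the filename is purely illustrative).
write_ascii_stl("tumour_registered.stl",
                [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

In practice, a mesh library such as MATLAB's stlwrite can be used instead; the file format is what matters for the evaluation script.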

Evaluation Results

The following table gives a preliminary evaluation of our reimplementations of existing methods, together with a manual-initialisation baseline; the TRE is given in mm:
Method (evaluation date)              | Patient 1      | Patient 2      | Patient 3      | Patient 4      | Overall
                                      | IC      TRE    | IC      TRE    | IC      TRE    | IC      TRE    | IC          TRE
Manual initialisation (25-apr-2022)   | Failed  15.14  | Failed  35.48  | Failed  30.48  | Failed  16.29  | 0 out of 4  24.35
Adagolodjo et al. 2017 (07-sep-2021)  | Failed  8.25   | Failed  37.25  | Failed  28.40  | Failed  15.83  | 0 out of 4  22.43
Koo et al. 2017 (07-sep-2021)         | Failed  9.49   | Failed  38.95  | Failed  25.04  | Failed  18.35  | 0 out of 4  22.95
Labrunie et al. 2022 (25-apr-2022)    | Failed  14.84  | Failed  N/A    | Failed  22.40  | Failed  7.23   | 0 out of 4  (14.82)

Acknowledgments

This research was supported by CNRS under the 2019-2020 prematuration grant Hepataug and Cancéropôle CLARA under the 2020-2024 Proof-of-Concept grant AIALO. The code and dataset are made available under the GNU General Public License v3, for research purposes only and under the condition that the corresponding paper is properly cited. You are explicitly requested to contact us before using them for any other purpose.