A Methodology and Clinical Dataset with Ground-truth to Evaluate Registration Accuracy Quantitatively in Computer-assisted Laparoscopic Liver Resection
N. Rabbani, L. Calvet, Y. Espinel, B. Le Roy, M. Ribeiro, E. Buc and A. Bartoli
Purpose
Augmented Reality (AR) can assist Laparoscopic Liver Resection (LLR) by registering a preoperative 3D model to laparoscopic images. Evaluating the accuracy of registration methods is thus of critical importance.
We provide an evaluation methodology with two criteria and a patient dataset with ground-truth tumour position, in order to establish a benchmark for registration methods.
Ground-truth was acquired using a Laparoscopic Ultrasound (LUS) probe coregistered with the laparoscope.
The two evaluation criteria are the Inclusion Criterion (IC), which is strict and necessary for safe clinical use, and a specifically adapted measure of Target Registration Error (TRE).
The inclusion criterion is binary: it is passed if and only if all the LUS tumour profiles lie within the registration-predicted tumour augmented by the oncologic margin of 1 cm. The TRE is computed as the average minimal distance between each LUS tumour profile and the registration-predicted tumour volume.
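For illustration, a minimal MATLAB sketch of the two criteria is given below, assuming the registered tumour model is available as a vertex list V and the LUS tumour profiles as a cell array P of 3D point lists, all in millimetres. It approximates point-to-mesh distance by point-to-vertex distance (points inside the tumour thus get a small positive value rather than zero) and averages over all profile points, which is one reading of the definition; the released Evaluation.m remains the reference implementation.

    % Minimal sketch of the IC and TRE, under the assumptions stated above.
    % V : Nx3 vertex list of the registration-predicted tumour model (mm)
    % P : cell array; P{i} is the Mx3 point list of the i-th LUS tumour profile (mm)
    margin_mm = 10;                       % 1 cm oncologic margin
    d = [];
    for i = 1:numel(P)
        D = pdist2(P{i}, V);              % distances from profile points to model vertices
        d = [d; min(D, [], 2)];           % minimal distance for each profile point
    end
    TRE = mean(d);                        % average minimal distance (mm)
    IC  = all(d <= margin_mm);            % passed iff every profile point lies within the margin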
We strongly encourage users to send us their registration or evaluation results for inclusion on this page.
Resources and Downloads
Dataset (dataset for 4 tumours in 4 patients, released 07-sep-2021)
The dataset will be updated as new patient data become available
Code (code for error criteria computation and data visualisation, released 07-sep-2021)
Results (registration and evaluation results, released 07-sep-2021)
Paper (Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization 2021)
The dataset contains a folder for each patient; the data in each folder is organised as follows:
./Preoperative Model: 3D preoperative models for the liver surface and the tumour surface in ply and stl formats.
./Lap Images: A set of laparoscopic images. The filenames are two-digit unique codes used in the next folders to match the corresponding data.
./Lap Augmented by LUS: The laparoscopic images augmented with LUS cross-section images using the ground-truth coregistration for illustration.
./LUS Images: The LUS images acquired synchronously with the corresponding laparoscopic images.
./LUS Calibration and Pose: The LUS probe pose and calibration data in mat format.
./LUS Segmentation/json: A polygon enclosing the tumour in each LUS image in json format (see the loading sketch after this list).
./LUS Segmentation/Machine Mask: A binary mask for the tumour pixels in each LUS image in png format.
./LUS Segmentation/Human Mask: The LUS images augmented by the segmented tumour area for illustration.
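As a quick sanity check of the segmentation data, the following MATLAB sketch loads one json polygon and rasterises it to a binary mask; the file names, the points field and the [x y] vertex order are assumptions to be checked against the released files.

    % Load one LUS image and its tumour polygon (file names are examples).
    img  = imread(fullfile('LUS Images', '01.png'));
    seg  = jsondecode(fileread(fullfile('LUS Segmentation', 'json', '01.json')));
    poly = seg.points;                    % assumed Kx2 list of [x y] polygon vertices
    % Rasterise the polygon and compare it visually with the provided Machine Mask.
    mask = poly2mask(poly(:,1), poly(:,2), size(img,1), size(img,2));
    imshowpair(img, mask, 'blend');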
Additional files are available as follows:
Simplified preoperative models for the surfaces, and landmarks with corresponding end points, as used by Koo et al. 2017. Data (landmarks for all images in each of the 4 patients, released 20-apr-2022).
Preoperative models for the volumes, obtained by Delaunay tetrahedrisation of the simplified preoperative surface models (leading to convex volumes), together with alternative simplified preoperative surface models (with triangle overlap issues fixed) and their volumes, obtained by constrained Delaunay tetrahedrisation via TetGen; see the sketch below. Data (contributed by M. Labrunie, released 25-apr-2022).
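The remark on convexity can be checked directly: MATLAB's delaunayTriangulation tetrahedrises the convex hull of the input vertices, which is why the unconstrained volumes are convex and the non-convex alternative requires constrained tetrahedrisation via the external TetGen tool. A minimal sketch, with an example file name:

    % Unconstrained Delaunay tetrahedrisation of a simplified surface model.
    TR = stlread(fullfile('Preoperative Model', 'liver.stl'));  % example file name
    DT = delaunayTriangulation(TR.Points);  % fills the convex hull: a convex volume
    tetramesh(DT);                          % visual check of the resulting tetrahedra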
User-guide
Uncompress the dataset and code archives in a common folder; the results archive is optional.
Run your registration method on all images of all patients; the Registration.m script shows how to load the data and save the results. Save each registered 3D tumour model in stl format, using the same folder structure as the already evaluated methods in the results archive (see the sketch after these steps).
Edit lines 5 and 6 of Evaluation.m to give the paths to the dataset and to your registration results.
Run Evaluation.m to calculate the IC and TRE.
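A minimal MATLAB sketch of the saving step, assuming the registered tumour is available as a triangulation object TR; the method and patient folder names are examples mirroring the results archive:

    % Save one registered tumour model in the expected folder layout.
    out = fullfile('Results', 'MyMethod', 'Patient 1');   % example folder names
    if ~exist(out, 'dir'), mkdir(out); end
    stlwrite(TR, fullfile(out, '01.stl'));                % one stl per laparoscopic image code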
Evaluation Results
The following table gives the preliminary evaluation of four methods, including our reimplementations of existing ones; the TRE is given in mm:
Method                                  Patient 1      Patient 2      Patient 3      Patient 4      Overall
                                        IC      TRE    IC      TRE    IC      TRE    IC      TRE    IC          TRE
Manual initialisation (25-apr-2022)     Failed  15.14  Failed  35.48  Failed  30.48  Failed  16.29  0 out of 4  24.35
Adagolodjo et al. 2017 (07-sep-2021)    Failed   8.25  Failed  37.25  Failed  28.40  Failed  15.83  0 out of 4  22.43
Koo et al. 2017 (07-sep-2021)           Failed   9.49  Failed  38.95  Failed  25.04  Failed  18.35  0 out of 4  22.95
Labrunie et al. 2022 (25-apr-2022)      Failed  14.84  Failed    N/A  Failed  22.40  Failed   7.23  0 out of 4  (14.82)
Note: the bracketed overall TRE for Labrunie et al. 2022 is the mean over the three patients with an available TRE.
Manual initialisation - This single-image method requires the user to interactively define a rigid transformation.
Adagolodjo et al. 2017 - Silhouette-based pose estimation for deformable organs, application to surgical augmented reality. Y. Adagolodjo, R. Trivisonne, N. Haouchine, S. Cotin and H. Courtecuisse. IROS 2017. This single-image method uses the silhouette and biomechanics to refine an initial solution. It requires the silhouette to be given in the image. The tested implementation represents the silhouette as a list of image points; the silhouette used combines the silhouette and the lower ridge landmark used in Koo et al. 2017.
Koo et al. 2017 - Deformable registration of a preoperative 3D liver volume to a laparoscopy image using contour and shading cues. B. Koo, E. Özgür, B. Le Roy, E. Buc and A. Bartoli. MICCAI 2017. This single-image method uses anatomical landmark correspondences, namely the lower ridge and the junction between the liver and the round ligament, as well as the silhouette and biomechanics, to refine an initial solution. It requires the landmarks to be given in the preoperative 3D model and in the image, and the silhouette to be given in the image. The tested implementation represents each landmark as a curve: a 3D model landmark is a list of model vertices, while a 2D image landmark and the silhouette are lists of image points. It also requires the two end points of each pair of 3D-2D landmarks to match.
Labrunie et al. 2022 - Automatic preoperative 3D model deformable registration in laparoscopic liver resection. M. Labrunie, M. Ribeiro, F. Mourthadhoi, C. Tilmant, B. Le Roy, E. Buc and A. Bartoli. IJCARS 2022. This single-image method automatically finds a rigid transformation from the same landmarks and silhouette as in Koo et al. 2017. However, it does not require the end points of the 3D-2D landmark pairs to match.
Acknowledgments
This research was supported by CNRS under the 2019-2020 prematuration grant Hepataug and Cancéropôle CLARA under the 2020-2024 Proof-of-Concept grant AIALO.
The code and dataset are made available under the GNU General Public License v3, for research purposes only and under the condition that the corresponding paper is properly cited.
Please contact us before using them for any other purpose.