X23D-Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data

¹Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland; ²Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland

Abstract

Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, from planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters into the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving higher accuracy than the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images digitally reconstructed from the public CTSpine1K dataset. Evaluated on unseen data, we achieved an average F1 score of 88% and a surface score of 71%. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a state-of-the-art counterpart by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making solely based on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
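
The key mechanism named in the abstract, a learned multi-view stereo machine that incorporates the X-ray calibration parameters into the network, amounts to lifting per-view 2D feature maps into a shared 3D voxel grid via the projection matrices before fusing them across views. The PyTorch sketch below illustrates only that unprojection step; the 3x4 projection-matrix format, the grid extent, and the simple average fusion are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def unproject_features(feat2d, proj, grid_xyz):
    """Back-project per-view 2D feature maps into a shared 3D voxel grid.

    feat2d:   (V, C, H, W)  2D feature maps, one per calibrated view
    proj:     (V, 3, 4)     projection matrices (intrinsics @ extrinsics), assumed format
    grid_xyz: (D, D, D, 3)  world coordinates of the voxel centres
    returns:  (V, C, D, D, D) per-view feature volumes
    """
    V, C, H, W = feat2d.shape
    D = grid_xyz.shape[0]

    # Homogeneous voxel coordinates: (N, 4) with N = D^3
    pts = grid_xyz.reshape(-1, 3)
    pts_h = torch.cat([pts, torch.ones_like(pts[:, :1])], dim=1)

    # Project every voxel centre into every view: (V, 3, N)
    uvw = torch.einsum('vij,nj->vin', proj, pts_h)
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)      # avoid division by zero

    # Normalise pixel coordinates to [-1, 1] as expected by grid_sample
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(V, 1, -1, 2)

    # Bilinearly sample each view's 2D features at the projected locations
    sampled = F.grid_sample(feat2d, grid, align_corners=True)  # (V, C, 1, N)
    return sampled.view(V, C, D, D, D)

# Toy usage with random tensors, just to exercise the shapes:
# 4 views, 32 feature channels, 128x128 feature maps, a 64^3 grid over a hypothetical extent.
V, C, H, W, D = 4, 32, 128, 128, 64
coords = torch.linspace(-0.1, 0.1, D)
grid_xyz = torch.stack(torch.meshgrid(coords, coords, coords, indexing='ij'), dim=-1)
volumes = unproject_features(torch.randn(V, C, H, W), torch.randn(V, 3, 4), grid_xyz)
fused = volumes.mean(dim=0)   # naive average fusion across views for illustration

In a learned-stereo-machine-style network, the per-view volumes would typically be fused by a learned (e.g., recurrent) module rather than a plain average, and then decoded by a 3D CNN into an occupancy grid or surface.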

Future Work

We are currently preparing a follow-up publication that details the creation of a paired dataset comprising synthetic and real X-ray images. This dataset has played a pivotal role in bridging the domain gap, resulting in substantial improvements in X23D's performance when applied to real X-rays. You can find the preprint here.
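
The synthetic half of such a paired dataset is typically produced as digitally reconstructed radiographs (DRRs) from CT, as was done for the CTSpine1K-derived training images mentioned in the abstract. The snippet below is a deliberately simplified parallel-beam DRR based on Beer-Lambert attenuation; a realistic pipeline would use the cone-beam geometry of a calibrated C-arm, so the function and its parameters are illustrative assumptions rather than the actual data-generation code.

import numpy as np

def drr_parallel(ct_hu, axis=1, mu_water=0.02):
    """Minimal parallel-beam DRR from a CT volume given in Hounsfield units.

    ct_hu: 3D CT array (e.g., a CTSpine1K scan loaded with nibabel).
    axis:  volume axis along which the virtual rays travel.
    Returns an 8-bit X-ray-like image.
    """
    # Approximate conversion from HU to linear attenuation coefficients (water-scaled)
    mu = mu_water * (1.0 + ct_hu / 1000.0)
    mu = np.clip(mu, 0.0, None)

    # Line integral along the ray direction, then Beer-Lambert transmission
    line_integral = mu.sum(axis=axis)
    intensity = np.exp(-line_integral)

    # Invert and normalise so dense bone appears bright, as in fluoroscopy
    img = 1.0 - intensity
    img = (img - img.min()) / (np.ptp(img) + 1e-8)
    return (255 * img).astype(np.uint8)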

BibTeX

@article{jecklin2022x23d,
  author    = {Jecklin, Sascha and Jancik, Carla and Farshad, Mazda and F{\"u}rnstahl, Philipp and Esfandiari, Hooman},
  title     = {X23D-Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data},
  journal   = {Journal of Imaging},
  volume    = {8},
  number    = {10},
  pages     = {271},
  year      = {2022},
  publisher = {MDPI}
}