Synthesize pseudo-CT from NAC-PET, MRI, and 2D Topogram to enable CT-less reconstruction of PET images.
Modern PET scanners are incredibly sensitive, but the CT component used for attenuation correction remains the dominant source of radiation. Can we eliminate the CT scan entirely while maintaining accurate PET reconstructions?
Synthesize pseudo-CT images from non-attenuation-corrected PET, whole-body MRI, and 2D topograms. Your pseudo-CTs will be used to reconstruct PET images, which are evaluated against reference CT-based reconstructions across both static and dynamic imaging protocols.
You'll work with a unique multimodal dataset of 100 healthy volunteers with age- and sex-stratified whole-body PET/CT and MRI acquisitions, along with open-source reconstruction tools to enable closed-loop optimization—previously only possible at sites with proprietary vendor software.
Combine information from whole-body NAC-PET, MRI, and 2D Topograms to synthesize accurate pseudo-CT images for attenuation-corrected PET reconstruction.
Optimize directly on reconstructed PET images using the provided STIR reconstruction containers; no proprietary vendor software is required.
Enable CT-less imaging to reduce radiation exposure for dose-sensitive populations, and enable attenuation correction in PET/MRI studies.
The advent of Long Axial Field-of-View (LAFOV) PET scanners has shifted the dosimetry paradigm in PET/CT imaging. The high sensitivity of these systems allows for substantial reductions in radiotracer activity, rendering the volumetric CT component the dominant source of ionizing radiation. For dose-sensitive populations such as pediatric and obstetric cohorts, eliminating the volumetric CT entirely is highly desirable. However, the CT serves a dual purpose: providing anatomical context and enabling attenuation correction (AC) for PET reconstruction, as the attenuation map is typically derived directly from the CT. Similarly, whole-body studies acquired on PET/MRI systems require estimation of the attenuation map from MR images. In both scenarios, the absence of a CT poses a reconstruction challenge.
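Since the attenuation map is typically derived directly from the CT, the core of CT-based attenuation correction is a conversion from CT numbers (HU) to linear attenuation coefficients at the PET annihilation energy of 511 keV. A common approach is a bilinear (piecewise-linear) scaling with separate segments below and above water. The sketch below illustrates the idea; the coefficient for water is the widely cited approximate value, while the bone-segment slope is an assumed illustrative number, not the calibration of any specific scanner:

```python
import numpy as np

# Approximate linear attenuation coefficient of water at 511 keV (cm^-1).
MU_WATER = 0.096
# Assumed illustrative slope for the bone segment (cm^-1 per HU);
# real scanners use kVp-specific calibrated values.
MU_BONE_SLOPE = 5.1e-5

def hu_to_mu_511kev(hu):
    """Bilinear conversion of CT numbers (HU) to a 511 keV mu-map."""
    hu = np.asarray(hu, dtype=np.float64)
    mu = np.where(
        hu <= 0,
        MU_WATER * (hu + 1000.0) / 1000.0,  # air-to-water segment
        MU_WATER + MU_BONE_SLOPE * hu,      # water-to-bone segment
    )
    return np.clip(mu, 0.0, None)  # attenuation coefficients are non-negative
```

For example, `hu_to_mu_511kev(-1000)` (air) yields 0.0 and `hu_to_mu_511kev(0)` (water) yields 0.096 cm^-1, which is why errors in synthesized pseudo-CT values propagate directly into the reconstructed PET quantification.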
To address this, the Big Cross-Modal Attenuation Correction (BIC-MAC) challenge tasks participants with synthesizing a 3D pseudo-CT from other available modalities. We present a novel multimodal dataset comprising whole-body PET, CT, Topogram (scout radiograph), and MRI for 100 healthy volunteers. The cohort is age- and sex-stratified, with data acquired on Siemens Biograph Vision Quadra and Siemens MAGNETOM Vida scanners. Participants will receive a training set of 75 cases containing Non-Attenuation-Corrected (NAC) [18F]FDG PET images, scan-planning Topograms, and same-day DIXON MRI, alongside reference CT and CT-based attenuation-corrected PET (CTAC-PET) images. Critically, we also provide scatter maps, sinograms, and Docker containers with open-source reconstruction software, enabling closed-loop optimization on the training set, a capability previously restricted to hospital sites with access to proprietary vendor software.
The challenge comprises a single task: generate a pseudo-CT from the available input modalities. The pseudo-CT will be used to reconstruct PET images, which are then quantitatively compared against reference CTAC-PET images. Both static and dynamic PET reconstructions are evaluated to assess downstream accuracy across different clinical contexts. A defining technical characteristic of this challenge is the integration of modalities with different dimensionalities and acquisition geometries. While the 3D NAC-PET and 2D Topograms are spatially aligned with the target attenuation map, both lack anatomical detail. In contrast, whole-body MRI offers high bone and soft-tissue contrast but is acquired in a different scanner geometry with different patient positioning and body deformations. Consequently, participants must develop algorithms capable of fusing spatially unaligned information from 3D volumetric MRI with that of the 3D NAC-PET and 2D Topograms.
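Because pseudo-CTs are judged by the PET images reconstructed from them rather than by CT fidelity alone, it is useful to compare a pseudo-CT-based reconstruction voxel-wise against the reference CTAC-PET. The exact challenge metrics are defined on the platform; as a generic sketch, the function below (with an assumed activity-threshold mask to exclude background) computes a mean absolute relative difference between two reconstructions:

```python
import numpy as np

def masked_relative_error(pet_pseudo, pet_ref, threshold_frac=0.01):
    """Mean absolute relative difference between a pseudo-CT-based PET
    reconstruction and the reference CTAC-PET, inside a simple mask.

    threshold_frac is an assumed parameter: voxels below this fraction of
    the reference maximum are treated as background and excluded.
    """
    pet_pseudo = np.asarray(pet_pseudo, dtype=np.float64)
    pet_ref = np.asarray(pet_ref, dtype=np.float64)
    mask = pet_ref > threshold_frac * pet_ref.max()  # crude activity mask
    rel = np.abs(pet_pseudo[mask] - pet_ref[mask]) / pet_ref[mask]
    return float(rel.mean())
```

A metric of this kind rewards pseudo-CTs that reproduce the attenuation along each line of response, even if their HU values differ from the reference CT in regions that barely affect the PET image.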
Training (n=75) and validation (n=4) cases released on Hugging Face
Registration opens via Google Forms
Teams can submit for pre-evaluation on validation data
Teams can submit Docker containers for final evaluation on test set
Final deadline for team registration
Pre-evaluation and final evaluation close
CodaBench leaderboard updated with test scores; top three winners announced
Meet the team behind the challenge
PhD Student, Rigshospitalet
Lead Organizer
Associate Professor, Technical University of Denmark
Co-Organizer
Professor, Rigshospitalet
Co-Organizer
Assistant Professor, KU Leuven
Co-Organizer
Student, Technical University of Denmark
Co-Organizer
PhD Student, Rigshospitalet
Co-Organizer
Professor, University College London
STIR Team
Professor, University of Groningen
STIR Team
PhD Candidate, University of Groningen
STIR Team
Senior Scientist, University of Groningen
STIR Team
Senior Researcher, Technical University of Denmark
Infrastructure Team
Associate Professor, Technical University of Denmark
Infrastructure Team
Associate Professor, Technical University of Denmark
Infrastructure Team
MD, Rigshospitalet
Clinical Team
Professor, Rigshospitalet
Clinical Team
Everything you need to participate in the challenge
Official challenge platform with rules and submissions:
Code resources to get you started:
Follow these three steps to get started.
Fill out the team registration form to officially join the challenge.
Register Here
Create an account and register for the challenge on the Codabench platform.
Go to Codabench
Request access and download the challenge dataset from Hugging Face.
Get Data on Hugging Face
Check out our GitHub repository for documentation, FAQs, and community discussions.
Visit GitHub
We are grateful to the following organizations for their generous support of the BIC-MAC Challenge.
Center for Quantification of Imaging Data from Max IV, Technical University of Denmark.
Providing compute resources for running reconstructions during the final evaluation.
Collaborative Computational Project in Synergistic Reconstruction for Biomedical Imaging.
Supporting the prize pool, challenge design, and enabling STIR to work with Quadra data.