Synthesize pseudo-CT from NAC-PET, MRI, and 2D Topogram to enable CT-less reconstruction of PET images.
To reconstruct an accurate total-body PET image, you need a CT scan to correct for photons being attenuated by dense tissue like bone. But CT adds radiation — undesirable for dose-sensitive populations like pediatric or obstetric patients — and combined PET/MRI scanners have no CT at all. In both scenarios, we need a robust generative algorithm that can synthesize CTs.
Build a generative AI model that synthesizes a 3D pseudo-CT from whole-body MRI, non-attenuation-corrected PET (NAC-PET), and/or 2D X-ray-like Topograms. No knowledge of PET reconstruction is required to compete. The challenge is simply an image-to-image prediction task, and the reconstruction evaluation is handled for you.
You'll work with a novel, age- and sex-balanced dataset of total-body PET/CT/MR images and PET sinograms from 99 healthy volunteers, along with open-source STIR reconstruction containers — enabling closed-loop PET reconstruction accessible to anyone with Docker.
Tackle the challenge of fusing spatially unaligned information from 3D MRI with 3D NAC-PET and 2D Topograms to produce a coherent 3D pseudo-CT.
Evaluate the attenuation correction effect of your predicted CTs directly with local PET reconstruction via STIR — no proprietary vendor software required.
Access a unique, age- and sex-balanced dataset with total-body PET/CT/MR images and PET sinograms from 99 healthy volunteers — purpose-built for this challenge.
The advent of Long Axial Field-of-View (LAFOV) PET scanners has shifted the dosimetry paradigm in PET/CT imaging. The high sensitivity of these systems allows for substantial reductions in radiotracer activity, rendering the volumetric CT component the dominant source of ionizing radiation. For dose-sensitive populations such as pediatric and obstetric cohorts, eliminating the volumetric CT entirely is highly desirable. However, the CT serves a dual purpose: providing anatomical context and enabling attenuation correction (AC) for PET reconstruction, as the attenuation map is typically derived directly from the CT. Similarly, whole-body studies acquired on PET/MRI systems require estimation of the attenuation map from MR images. In both scenarios, the absence of a CT poses a reconstruction challenge.
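Since the attenuation map is typically derived directly from the CT, a predicted pseudo-CT in Hounsfield units can be mapped to linear attenuation coefficients at the PET annihilation energy (511 keV). A common approach is a bilinear scaling with separate slopes for soft tissue and bone. The sketch below illustrates the idea; the specific μ-values and breakpoints are illustrative assumptions, not the challenge's official conversion (which is handled by the provided reconstruction pipeline).

```python
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, approx. linear attenuation of water at 511 keV
MU_BONE_511 = 0.172   # cm^-1, illustrative value for dense bone at 511 keV

def hu_to_mu(hu, breakpoint=0.0, bone_hu=1000.0):
    """Bilinear HU -> mu(511 keV) conversion (illustrative breakpoints).

    Below `breakpoint` HU, scale linearly between air (-1000 HU, mu=0)
    and water (0 HU); above it, use a shallower slope toward bone.
    """
    hu = np.asarray(hu, dtype=np.float64)
    soft = MU_WATER_511 * (hu + 1000.0) / 1000.0
    bone = MU_WATER_511 + (hu - breakpoint) * (
        (MU_BONE_511 - MU_WATER_511) / (bone_hu - breakpoint)
    )
    mu = np.where(hu <= breakpoint, soft, bone)
    return np.clip(mu, 0.0, None)  # attenuation cannot be negative
```

Applied voxelwise to a pseudo-CT volume, this yields the attenuation map consumed by the PET reconstruction.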
To address this, the Big Cross-Modal Attenuation Correction (BIC-MAC) challenge tasks participants with synthesizing a 3D pseudo-CT from other available modalities. We present a novel multimodal dataset comprising whole-body PET, CT, Topogram (scout radiograph), and MRI for 99 healthy volunteers. The cohort is age- and sex-stratified, with data acquired on Siemens Biograph Vision Quadra and Siemens MAGNETOM Vida scanners. Participants will receive a training set of 75 cases containing Non-Attenuation Corrected (NAC) [18F]FDG PET images, scan-planning Topograms, and same-day DIXON MRI, alongside reference CT and CT-based attenuation-corrected PET (CTAC-PET) images. Critically, we also provide scatter maps, sinograms, and Docker containers with open-source reconstruction software, enabling closed-loop optimization on the training set, a capability previously restricted to hospital sites with access to proprietary vendor software.
The challenge comprises a single task: generate a pseudo-CT from the available input modalities. The pseudo-CT will be used to reconstruct PET images, which are then quantitatively compared against reference CTAC-PET images. Both static and dynamic PET reconstructions are evaluated to assess downstream accuracy across different clinical contexts. A defining technical characteristic of this challenge is the integration of modalities with different dimensionalities and acquisition geometries. While the 3D NAC-PET and 2D Topograms are spatially aligned with the target attenuation map, both lack anatomical detail. In contrast, whole-body MRI offers high bone and soft-tissue contrast but is acquired in a different scanner geometry with different patient positioning and body deformations. Consequently, participants must develop algorithms capable of fusing spatially unaligned information from 3D volumetric MRI with that of the 3D NAC-PET and 2D Topograms.
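Because scoring compares PET volumes reconstructed with the pseudo-CT against reference CTAC-PET, a voxelwise error inside a body mask is a natural quantity to monitor during closed-loop training. The helper below is a minimal sketch of such a comparison; the function name, masking, and `eps` guard are our own choices for illustration, and the official evaluation metrics are defined by the organizers.

```python
import numpy as np

def relative_absolute_error(pet_pred, pet_ref, mask=None, eps=1e-8):
    """Mean voxelwise relative absolute error between a PET volume
    reconstructed with a pseudo-CT and the reference CTAC-PET.

    Illustrative only -- not the challenge's official metric.
    """
    pet_pred = np.asarray(pet_pred, dtype=np.float64)
    pet_ref = np.asarray(pet_ref, dtype=np.float64)
    if mask is None:
        mask = np.ones(pet_ref.shape, dtype=bool)  # default: all voxels
    diff = np.abs(pet_pred[mask] - pet_ref[mask])
    return float(np.mean(diff / (np.abs(pet_ref[mask]) + eps)))
```

Tracking such a number on the training set, where reference CTAC-PET is available, lets a team optimize the generator against the downstream reconstruction rather than against CT intensities alone.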
Registration opens via Google Forms
Training (n=75) and validation (n=4) cases released on Hugging Face
Teams can submit for pre-evaluation on validation data
Teams can submit Docker containers for final evaluation on test set
Final deadline for team registration
Pre-evaluation and final evaluation close
CodaBench leaderboard updated with test scores; top three winners announced
Meet the team behind the challenge
PhD Student, Rigshospitalet
Lead Organizer
Associate Professor, Technical University of Denmark
Co-Organizer
Professor, Rigshospitalet
Co-Organizer
Assistant Professor, KU Leuven
Co-Organizer
Student, Technical University of Denmark
Co-Organizer
PhD Student, Rigshospitalet
Co-Organizer
Professor, University College London
STIR Team
Professor, University of Groningen
STIR Team
PhD Candidate, University of Groningen
STIR Team
Senior Scientist, University of Groningen
STIR Team
Senior Researcher, Technical University of Denmark
Infrastructure Team
Associate Professor, Technical University of Denmark
Infrastructure Team
Associate Professor, Technical University of Denmark
Infrastructure Team
MD, Rigshospitalet
Clinical Team
Professor, Rigshospitalet
Clinical Team
Everything you need to participate in the challenge
Official challenge platform:
Resources for getting started:
Follow these three steps to get started.
Create an account and register for the challenge on the Codabench platform.
Go to Codabench
Fill out the team registration form to officially join the challenge.
Register Here
Request access and download the challenge dataset from Hugging Face.
Get Data on Hugging Face
Questions? Contact us at bic-mac-challenge@outlook.com
We are grateful to the following organizations for their generous support of the BIC-MAC Challenge.
Center for Quantification of Imaging Data from Max IV, Technical University of Denmark.
Providing compute resources for running reconstructions during the final evaluation.
Collaborative Computational Project in Synergistic Reconstruction for Biomedical Imaging.
Supporting the prize pool, challenge design, and enabling STIR to work with Quadra data.
DEPICT | Centre of Diagnostic Investigation, Rigshospitalet
Challenge design, data acquisition, and inference hardware.