MICCAI 2026 Challenge

Big Cross-Modal
Attenuation Correction
Challenge

Synthesize pseudo-CT from NAC-PET, MRI, and 2D Topograms to enable CT-less reconstruction of PET images.

Rigshospitalet · Technical University of Denmark · University of Copenhagen · KU Leuven · QIM · SyneRBI

About the Challenge

Modern PET scanners are incredibly sensitive, but the CT component used for attenuation correction remains the dominant source of radiation. Can we eliminate the CT scan entirely while maintaining accurate PET reconstructions?

Your Task

Synthesize pseudo-CT images from non-attenuation-corrected PET, whole-body MRI, and 2D topograms. Your pseudo-CTs will be used to reconstruct PET images, which are evaluated against reference CT-based reconstructions across both static and dynamic imaging protocols.

You'll work with a unique multimodal dataset of 100 healthy volunteers with age- and sex-stratified whole-body PET/CT and MRI acquisitions, along with open-source reconstruction tools that enable closed-loop optimization, a capability previously available only at sites with proprietary vendor software.

[Figure: diagram of the modalities workflow]

What Makes This Challenge Unique?

Multimodal Fusion

Combine information from whole-body NAC-PET, MRI, and 2D Topograms to synthesize accurate pseudo-CT images for attenuation-corrected PET reconstruction.

Closed-Loop Optimization

Optimize directly on reconstructed PET images using the provided STIR reconstruction containers, with no need for proprietary vendor software.

Dose Reduction

Enable CT-less imaging to reduce radiation exposure for dose-sensitive populations, and enable attenuation correction in PET/MRI studies.

Challenge Abstract

The advent of Long Axial Field-of-View (LAFOV) PET scanners has shifted the dosimetry paradigm in PET/CT imaging. The high sensitivity of these systems allows for substantial reductions in radiotracer activity, rendering the volumetric CT component the dominant source of ionizing radiation. For dose-sensitive populations such as pediatric and obstetric cohorts, eliminating the volumetric CT entirely is highly desirable. However, the CT serves a dual purpose: providing anatomical context and enabling attenuation correction (AC) for PET reconstruction, as the attenuation map is typically derived directly from the CT. Similarly, whole-body studies acquired on PET/MRI systems require estimation of the attenuation map from MR images. In both scenarios, the absence of a CT poses a reconstruction challenge.
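The CT-to-attenuation-map step mentioned above is commonly implemented as a piecewise-linear ("bilinear") mapping from Hounsfield units to linear attenuation coefficients at 511 keV. The sketch below illustrates the general shape of such a mapping; the breakpoint and slope values are simplified textbook-style numbers chosen for illustration, not the coefficients used in this challenge's pipeline.

```python
import numpy as np

MU_WATER_511 = 0.096  # linear attenuation coefficient of water at 511 keV, in cm^-1


def hu_to_mu(hu: np.ndarray) -> np.ndarray:
    """Bilinear HU -> mu(511 keV) conversion (illustrative coefficients only).

    Below 0 HU: linear interpolation between air (mu = 0 at -1000 HU) and water.
    Above 0 HU: a steeper segment accounting for bone's higher effective Z.
    """
    hu = np.asarray(hu, dtype=np.float64)
    mu = np.where(
        hu <= 0,
        MU_WATER_511 * (hu + 1000.0) / 1000.0,  # air-to-water segment
        MU_WATER_511 + hu * 5.1e-5,             # bone segment (illustrative slope)
    )
    return np.clip(mu, 0.0, None)


# Air, water, and dense bone, in cm^-1:
print(hu_to_mu([-1000.0, 0.0, 1000.0]))  # → [0.    0.096 0.147]
```

Real scanner pipelines use energy- and kVp-specific coefficients, but the piecewise-linear structure is the same.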

To address this, the Big Cross-Modal Attenuation Correction (BIC-MAC) challenge tasks participants with synthesizing a 3D pseudo-CT from other available modalities. We present a novel multimodal dataset comprising whole-body PET, CT, Topogram (scout radiograph), and MRI for 100 healthy volunteers. The cohort is age- and sex-stratified, with data acquired on Siemens Biograph Vision Quadra and Siemens MAGNETOM Vida scanners. Participants will receive a training set of 75 cases containing Non-Attenuation Corrected (NAC) [18F]FDG PET images, scan-planning Topograms, and same-day DIXON MRI, alongside reference CT and CT-based attenuation-corrected PET (CTAC-PET) images. Critically, we also provide scatter maps, sinograms, and Docker containers with open-source reconstruction software, enabling closed-loop optimization on the training set, a capability previously restricted to hospital sites with access to proprietary vendor software.

The challenge comprises a single task: generate a pseudo-CT from the available input modalities. The pseudo-CT will be used to reconstruct PET images, which are then quantitatively compared against reference CTAC-PET images. Both static and dynamic PET reconstructions are evaluated to assess downstream accuracy across different clinical contexts. A defining technical characteristic of this challenge is the integration of modalities with different dimensionalities and acquisition geometries. While the 3D NAC-PET and 2D Topograms are spatially aligned with the target attenuation map, both lack anatomical detail. In contrast, whole-body MRI offers high bone and soft-tissue contrast but is acquired in a different scanner geometry with different patient positioning and body deformations. Consequently, participants must develop algorithms capable of fusing spatially unaligned information from 3D volumetric MRI with that of the 3D NAC-PET and 2D Topograms.
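The official evaluation metrics are defined on Codabench, but the downstream comparison described above can be illustrated with a simple masked error score between a candidate reconstruction and its reference CTAC-PET volume. In the sketch below, the metric choice (mean absolute percentage error) and the mask threshold are assumptions for illustration, not the challenge's actual scoring.

```python
import numpy as np


def masked_mape(recon: np.ndarray, reference: np.ndarray,
                mask_threshold: float = 0.05) -> float:
    """Mean absolute percentage error inside a body mask.

    The mask keeps voxels whose reference activity exceeds a fraction of the
    reference maximum, so background air does not dominate the score.
    (Illustrative metric only; see Codabench for the official evaluation.)
    """
    mask = reference > mask_threshold * reference.max()
    diff = np.abs(recon[mask] - reference[mask])
    return float(100.0 * np.mean(diff / reference[mask]))


# Toy volumes standing in for a pseudo-CT-based reconstruction and CTAC-PET:
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 10.0, size=(8, 8, 8))
recon = reference * 1.02  # uniform 2% overestimation
print(round(masked_mape(recon, reference), 3))  # → 2.0
```

Because both static and dynamic reconstructions are scored, a metric like this would be applied per frame as well as to the static volume.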

Important Dates

Data Released

Training (n=75) and validation (n=4) cases released on Hugging Face

April 1, 2026

Registration Opens

Registration opens via Google Forms

April 1, 2026

Pre-evaluation Period Opens

Teams can submit for pre-evaluation on validation data

May 15, 2026

Final Evaluation Period Opens

Teams can submit Docker containers for final evaluation on test set

June 15, 2026

Registration Closes

Final deadline for team registration

August 15, 2026

Submission Deadline

Pre-evaluation and final evaluation close

August 15, 2026

Winners Announced

Codabench leaderboard updated with test scores; top three winners announced

September 1, 2026

Organizing Committee

Meet the team behind the challenge

Christian Hinge

PhD Student, Rigshospitalet

Lead Organizer

Claes Nøhr Ladefoged

Associate Professor, Technical University of Denmark

Co-Organizer

Flemming Littrup Andersen

Professor, Rigshospitalet

Co-Organizer

Georg Schramm

Assistant Professor, KU Leuven

Co-Organizer

Vaibhav Bahel

Student, Technical University of Denmark

Co-Organizer

Nanna Overbeck Petersen

PhD Student, Rigshospitalet

Co-Organizer

Kris Thielemans

Professor, University College London

STIR Team

Charalampos Tsoumpas

Professor, University of Groningen

STIR Team

Zekai Li

PhD Candidate, University of Groningen

STIR Team

Nikos Efthymiou

Senior Scientist, University of Groningen

STIR Team

Felipe Delestro Matos

Senior Researcher, Technical University of Denmark

Infrastructure Team

Jakob Sauer Jørgensen

Associate Professor, Technical University of Denmark

Infrastructure Team

Hans Martin Kjer

Associate Professor, Technical University of Denmark

Infrastructure Team

Kirsten Korsholm

MD, Rigshospitalet

Clinical Team

Ian Law

Professor, Rigshospitalet

Clinical Team

Resources & Documentation

Everything you need to participate in the challenge

Codabench

The official challenge platform for rules and submissions:

  • Rules, details, and evaluation metrics
  • Submission instructions and leaderboard
Visit Codabench

GitHub

Code resources to get you started:

  • Baseline code and reconstruction containers
  • Validation metrics
Visit GitHub

How to Participate

Follow these three steps to get started.

1

Register Your Team

Fill out the team registration form to officially join the challenge.

Register Here
2

Sign Up on Codabench

Create an account and register for the challenge on the Codabench platform.

Go to Codabench
3

Download Dataset

Request access and download the challenge dataset from Hugging Face.

Get Data on Hugging Face

Questions or Need Help?

Check out our GitHub repository for documentation, FAQs, and community discussions.

Visit GitHub

Acknowledgements

We are grateful to the following organizations for their generous support of the BIC-MAC Challenge.