
Big Cross-Modal Attenuation Correction Challenge

Synthesize pseudo-CT from NAC-PET, MRI, and 2D Topogram to enable CT-less reconstruction of PET images.

Partners: Rigshospitalet · DEPICT · Technical University of Denmark · University of Copenhagen · KU Leuven · QIM · SyneRBI

About the Challenge

To reconstruct an accurate total-body PET image, you need a CT scan to correct for photons being attenuated by dense tissue like bone. But CT adds radiation — undesirable for dose-sensitive populations like pediatric or obstetric patients — and combined PET/MRI scanners have no CT at all. In both scenarios, we need a robust generative algorithm that can synthesize CTs.

Your Task

Build a generative AI model that synthesizes a 3D pseudo-CT from whole-body MRI, non-attenuation-corrected PET (NAC-PET), and/or 2D X-ray-like Topograms. No knowledge of PET reconstruction is required to compete. The challenge is simply an image-to-image prediction task, and the reconstruction evaluation is handled for you.
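At its core, the task is a function from three inputs to one output volume. The sketch below shows only the interface, with illustrative array shapes and a stub model; the actual dataset dimensions, orientations, and any real architecture (U-Net, diffusion model, etc.) are up to you.

```python
import numpy as np

# Illustrative shapes only -- the real dataset's dimensions will differ.
def synthesize_pseudo_ct(nac_pet, topogram, mri):
    """Placeholder for a pseudo-CT synthesis model.

    nac_pet : 3D NAC-PET volume, aligned with the target CT grid.
    topogram: 2D scout radiograph, aligned with the target CT grid.
    mri     : 3D DIXON MRI volume, acquired in a different geometry.

    Returns a 3D pseudo-CT in Hounsfield units on the PET grid.
    """
    # A real model goes here; this stub just returns an
    # air-valued volume with the right shape.
    return np.full(nac_pet.shape, -1000.0, dtype=np.float32)

pet = np.zeros((128, 128, 256), dtype=np.float32)   # z, y, x (illustrative)
topo = np.zeros((256, 128), dtype=np.float32)
mri = np.zeros((96, 192, 192), dtype=np.float32)
pseudo_ct = synthesize_pseudo_ct(pet, topo, mri)
print(pseudo_ct.shape)  # (128, 128, 256)
```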

You'll work with a novel, age- and sex-balanced dataset of total-body PET/CT/MR images and PET sinograms from 99 healthy volunteers, along with open-source STIR reconstruction containers — enabling closed-loop PET reconstruction accessible to anyone with Docker.

[Figure: diagram showing the modalities workflow]

What Makes This Challenge Unique?

Multimodal Fusion

Tackle the challenge of fusing spatially unaligned information from 3D MRI with 3D NAC-PET and 2D Topograms to produce a coherent 3D pseudo-CT.

Open-Source Reconstruction

Evaluate the attenuation correction effect of your predicted CTs directly with local PET reconstruction via STIR — no proprietary vendor software required.
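The official scoring metrics live in the challenge GitHub repository; as a hedged illustration of what "quantitative comparison" against a reference reconstruction can look like, here is a simple voxelwise NRMSE between a PET image reconstructed with a predicted pseudo-CT and the reference CTAC-PET. The function name and the choice of NRMSE are assumptions for this sketch, not the challenge's metric.

```python
import numpy as np

def nrmse(recon, reference, mask=None):
    """Normalised root-mean-square error inside an optional body mask."""
    recon = np.asarray(recon, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    if mask is None:
        mask = np.ones(reference.shape, dtype=bool)
    diff = recon[mask] - reference[mask]
    # Normalise the RMS error by the mean reference activity.
    return np.sqrt(np.mean(diff ** 2)) / np.mean(reference[mask])

ref = np.full((8, 8, 8), 100.0)   # toy reference CTAC-PET
rec = ref * 1.05                  # a uniform 5% overestimate
print(round(nrmse(rec, ref), 3))  # 0.05
```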

Novel Dataset

Access a unique, age- and sex-balanced dataset with total-body PET/CT/MR images and PET sinograms from 99 healthy volunteers — purpose-built for this challenge.

Challenge Abstract

The advent of Long Axial Field-of-View (LAFOV) PET scanners has shifted the dosimetry paradigm in PET/CT imaging. The high sensitivity of these systems allows for substantial reductions in radiotracer activity, rendering the volumetric CT component the dominant source of ionizing radiation. For dose-sensitive populations such as pediatric and obstetric cohorts, eliminating the volumetric CT entirely is highly desirable. However, the CT serves a dual purpose: providing anatomical context and enabling attenuation correction (AC) for PET reconstruction, as the attenuation map is typically derived directly from the CT. Similarly, whole-body studies acquired on PET/MRI systems require estimation of the attenuation map from MR images. In both scenarios, the absence of a CT poses a reconstruction challenge.
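Since the attenuation map is typically derived directly from the CT, a pseudo-CT in Hounsfield units is only one conversion away from a 511 keV attenuation map. The sketch below uses the standard bilinear HU-to-μ idea; the breakpoint and bone slope are illustrative assumptions, not the values used in this challenge's reconstruction pipeline.

```python
import numpy as np

# Illustrative bilinear HU -> linear attenuation coefficient (mu at 511 keV)
# conversion, in the spirit of standard CT-based AC. The breakpoint and bone
# slope below are assumptions for illustration only.
MU_WATER_511 = 0.096   # cm^-1, approximate mu of water at 511 keV
BONE_BREAK_HU = 50.0   # assumed soft-tissue/bone breakpoint
BONE_SLOPE = 5.1e-5    # assumed bone-segment slope, cm^-1 per HU

def hu_to_mu(ct_hu):
    """Map a CT volume in Hounsfield units to a 511 keV attenuation map."""
    ct_hu = np.asarray(ct_hu, dtype=np.float32)
    # Soft-tissue segment: linear scaling relative to water.
    mu = MU_WATER_511 * (ct_hu + 1000.0) / 1000.0
    # Bone segment: shallower slope above the breakpoint.
    bone = ct_hu > BONE_BREAK_HU
    mu_break = MU_WATER_511 * (BONE_BREAK_HU + 1000.0) / 1000.0
    mu[bone] = mu_break + BONE_SLOPE * (ct_hu[bone] - BONE_BREAK_HU)
    return np.clip(mu, 0.0, None)

print(hu_to_mu([-1000.0])[0])  # air -> 0.0
```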

To address this, the Big Cross-Modal Attenuation Correction (BIC-MAC) challenge tasks participants with synthesizing a 3D pseudo-CT from other available modalities. We present a novel multimodal dataset comprising whole-body PET, CT, Topogram (scout radiograph), and MRI for 99 healthy volunteers. The cohort is age- and sex-stratified, with data acquired on Siemens Biograph Vision Quadra and Siemens MAGNETOM Vida scanners. Participants will receive a training set of 75 cases containing Non-Attenuation Corrected (NAC) [18F]FDG PET images, scan-planning Topograms, and same-day DIXON MRI, alongside reference CT and CT-based attenuation-corrected PET (CTAC-PET) images. Critically, we also provide scatter maps, sinograms, and Docker containers with open-source reconstruction software, enabling closed-loop optimization on the training set — a capability previously restricted to hospital sites with access to proprietary vendor software.

The challenge comprises a single task: generate a pseudo-CT from the available input modalities. The pseudo-CT will be used to reconstruct PET images, which are then quantitatively compared against reference CTAC-PET images. Both static and dynamic PET reconstructions are evaluated to assess downstream accuracy across different clinical contexts. A defining technical characteristic of this challenge is the integration of modalities with different dimensionalities and acquisition geometries. While the 3D NAC-PET and 2D Topograms are spatially aligned with the target attenuation map, both lack anatomical detail. In contrast, whole-body MRI offers high bone and soft-tissue contrast but is acquired in a different scanner geometry with different patient positioning and body deformations. Consequently, participants must develop algorithms capable of fusing spatially unaligned information from 3D volumetric MRI with that of the 3D NAC-PET and 2D Topograms.
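A first step toward fusing the unaligned MRI with the PET-grid modalities is to resample it onto the target voxel grid via world coordinates. The minimal nearest-neighbour sketch below assumes each volume carries a 4x4 voxel-to-world affine (e.g. from its NIfTI header); all shapes and affines are illustrative. Note that resampling alone does not resolve the different patient positioning and body deformations — that still requires registration or a learned fusion strategy.

```python
import numpy as np

def resample_to_grid(src, src_affine, dst_shape, dst_affine):
    """Pull src voxels onto the dst grid via world coordinates."""
    # Map dst voxel indices -> world coordinates -> src voxel indices.
    zz, yy, xx = np.meshgrid(*[np.arange(n) for n in dst_shape], indexing="ij")
    ones = np.ones_like(xx)
    dst_idx = np.stack([zz, yy, xx, ones], axis=-1).reshape(-1, 4).T  # 4 x N
    world = dst_affine @ dst_idx
    src_idx = np.round(np.linalg.inv(src_affine) @ world)[:3].astype(int)
    # Fill out-of-bounds voxels with zero.
    valid = np.all(
        (src_idx >= 0) & (src_idx < np.array(src.shape)[:, None]), axis=0
    )
    out = np.zeros(np.prod(dst_shape), dtype=src.dtype)
    out[valid] = src[src_idx[0, valid], src_idx[1, valid], src_idx[2, valid]]
    return out.reshape(dst_shape)

# Toy example: a 2 mm isotropic MRI grid resampled onto a 4 mm PET grid.
mri = np.arange(8 * 8 * 8, dtype=np.float32).reshape(8, 8, 8)
mri_affine = np.diag([2.0, 2.0, 2.0, 1.0])
pet_affine = np.diag([4.0, 4.0, 4.0, 1.0])
mri_on_pet = resample_to_grid(mri, mri_affine, (4, 4, 4), pet_affine)
print(mri_on_pet.shape)  # (4, 4, 4)
```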

Important Dates

Registration Opens

Registration opens via Google Forms

April 1, 2026

Challenge starts (data released)

Training (n=75) and validation (n=4) cases released on Hugging Face

April 8, 2026

Pre-evaluation Period Opens

Teams can submit for pre-evaluation on validation data

May 15, 2026

Final Evaluation Period Opens

Teams can submit Docker containers for final evaluation on test set

June 15, 2026

Registration Closes

Final deadline for team registration

August 15, 2026

Submission Deadline

Pre-evaluation and final evaluation close

August 15, 2026

Winners Announced

Codabench leaderboard updated with test scores; top three winners announced

September 1, 2026

Organizing Committee

Meet the team behind the challenge

Christian Hinge

PhD Student, Rigshospitalet

Lead Organizer

Claes Nøhr Ladefoged

Associate Professor, Technical University of Denmark

Co-Organizer

Flemming Littrup Andersen

Professor, Rigshospitalet

Co-Organizer

Georg Schramm

Assistant Professor, KU Leuven

Co-Organizer

Vaibhav Bahel

Student, Technical University of Denmark

Co-Organizer

Nanna Overbeck Petersen

PhD Student, Rigshospitalet

Co-Organizer

Kris Thielemans

Professor, University College London

STIR Team

Charalampos Tsoumpas

Professor, University of Groningen

STIR Team

Zekai Li

PhD Candidate, University of Groningen

STIR Team

Nikos Efthymiou

Senior Scientist, University of Groningen

STIR Team

Felipe Delestro Matos

Senior Researcher, Technical University of Denmark

Infrastructure Team

Jakob Sauer Jørgensen

Associate Professor, Technical University of Denmark

Infrastructure Team

Hans Martin Kjer

Associate Professor, Technical University of Denmark

Infrastructure Team

Kirsten Korsholm

MD, Rigshospitalet

Clinical Team

Ian Law

Professor, Rigshospitalet

Clinical Team

Resources & Documentation

Everything you need to participate in the challenge

Codabench

Official challenge platform:

  • Rules
  • Validation set submissions
  • Live leaderboard
Visit Codabench

GitHub

Resources for getting started:

  • How-tos, FAQ, hints, and reconstruction tips
  • Reconstruction containers
  • Baseline model and code
  • Scoring metrics
Visit GitHub

How to Participate

Follow these three steps to get started.

1

Sign Up on Codabench

Create an account and register for the challenge on the Codabench platform.

Go to Codabench
2

Register Your Team

Fill out the team registration form to officially join the challenge.

Register Here
3

Download Dataset

Request access and download the challenge dataset from Hugging Face.

Get Data on Hugging Face

Questions? Contact us at bic-mac-challenge@outlook.com

Acknowledgements

We are grateful to the following organizations for their generous support of the BIC-MAC Challenge.