
PETRIC 2: Second PET Rapid Image reconstruction Challenge


Main organisers: Matthias Ehrhardt (U Bath), Christoph Kolbitsch (PTB), Charalampos Tsoumpas (RU Groningen), Kris Thielemans (UCL).

Technical support (CoSeC, UKRI STFC): Casper da Costa-Luis, Edoardo Pasca

Overview

  • Develop a PET reconstruction algorithm to estimate the maximum a posteriori (MAP) solution using the smoothed relative difference prior (RDP)
  • Phantom data with low count levels from different scanners are available for development
  • An example repository on GitHub with an implementation of a reference algorithm will be provided
  • Make sure your algorithm is the fastest to reach our target image quality
  • Win cash prizes at the Symposium on AI & Reconstruction for Biomedical Imaging
  • Time frame: 17 November 2025 - 15 February 2026

Latest updates: See our Recent updates page

Overall description

The second PET Rapid Image reconstruction Challenge (PETRIC2) aims to advance research in the development of fast positron emission tomography (PET) image reconstruction algorithms that are suitable for real-world clinical data.

Background

Building on the success of the first PETRIC challenge [0] and the clinical adoption of regularised image reconstruction methods in PET and other imaging modalities, PETRIC2 focuses on a smoothed version of the relative difference prior [1]. The event seeks to encourage innovation that balances reconstruction accuracy and computational efficiency.

Data & Task

Participants will be provided with a large collection of low-statistics phantom datasets acquired from various clinical PET scanners. The core task is to develop algorithms that can reconstruct images as close as possible to the converged reference image (for example, in terms of the mean standardised uptake value (SUV) over a volume of interest), while doing so in the shortest possible computation time. An example solution that achieves convergence but requires significant computation time will be made available at the start of the challenge. This will serve as a baseline for comparison.

Tools & Resources

To lower the barrier to entry, the PET raw data will be pre-processed, allowing participation even from researchers with limited experience in handling real-world PET data. The open-source toolkits SIRF 3.9 [2], STIR 6.3 [3], and CIL 25.0 [4] are provided to support algorithm development and testing. All implementations must use the specified SIRF projector, along with the provided multiplicative and additive projection data. This requirement ensures that image quality and reconstruction speed depend solely on the participants’ algorithms, rather than differences in projection modelling.
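For orientation, here is a hedged sketch of how such an acquisition model is typically assembled in SIRF, assuming a parallelproj-based projector (the file names are placeholders, and the challenge template may already construct this for you):

```python
import sirf.STIR as pet

# Hedged sketch (file names are placeholders): build a SIRF projector
# with the provided multiplicative and additive projection data.
acq_data = pet.AcquisitionData("prompts.hs")
mult_factors = pet.AcquisitionData("mult_factors.hs")
additive_term = pet.AcquisitionData("additive_term.hs")
initial_image = pet.ImageData("OSEM_image.hv")

acq_model = pet.AcquisitionModelUsingParallelproj()
acq_model.set_acquisition_sensitivity(pet.AcquisitionSensitivityModel(mult_factors))
acq_model.set_additive_term(additive_term)
acq_model.set_up(acq_data, initial_image)

estimated_data = acq_model.forward(initial_image)  # yhat for the initial image
```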

Open Science and Participation

In keeping with the principles of open science, participants who wish to be eligible for cash prizes must, after the challenge, make their GitHub repositories publicly accessible under an open-source license that complies with the Open Source Initiative (OSI) definition. To encourage broad participation, teams are also welcome to take part without releasing their code publicly, although such entries will not qualify for monetary awards (see below for further details). All teams are required to submit a README.md summary abstract of up to 1,000 words describing their algorithm, its underlying methodology, and key design choices.

Awards

The three highest-ranked teams in the PETRIC2 Challenge will be invited to present their submission at the Symposium on AI & Reconstruction for Biomedical Imaging, which will take place in London from 9–10 March 2026, where we will announce the final ranking. Travel costs for one speaker from each team will be reimbursed.

In addition, the top three teams that release their solutions as open source will each receive a monetary award distributed to the entire team:

  1. £700
  2. £350
  3. £200

Data

All data used in this challenge is from phantoms acquired on clinical PET scanners.

Training Data

For PETRIC2, the challenge will reuse the data from the first PETRIC. In this edition, PET sinogram data will be generated from fewer counts, meaning that participants’ algorithms will need to handle increased noise levels in the input data.

The (training) low-count datasets were created through sinogram bootstrapping, producing new noise realisations from the existing PETRIC data. The noise scaling factors were selected to yield images of comparable noise levels when reconstructed using a simple OSEM (Ordered Subsets Expectation Maximisation) algorithm. Further details about the bootstrapping procedure can be found in noise_bootstrap.py, and the scripts used for image quality assessment are available in data_QC.py.
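The exact procedure is the one in noise_bootstrap.py; purely as an illustration of the idea, one common way to produce a lower-count noise realisation from measured counts is binomial thinning, sketched below (the function name and survival fraction are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2026)

def thin_counts(prompts, fraction):
    """Illustrative low-count resampling: each recorded count survives
    independently with probability `fraction`. The challenge's actual
    bootstrapping procedure is defined in noise_bootstrap.py."""
    return rng.binomial(prompts.astype(np.int64), fraction)

# e.g. keep roughly 10% of the counts of a sinogram stored as a numpy array:
# low_count_sinogram = thin_counts(sinogram, 0.1)
```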

The training data can be found at https://petric.tomography.stfc.ac.uk/2/data/.

Evaluation Data

The final competition datasets, used for evaluation and ranking, will be collated after the challenge at multiple sites using Siemens, GE, Positrigo and Mediso clinical scanners. This approach helps to reduce bias towards a specific vendor or scanner model and ensures fair participation, since no team, including the organisers, will have access to the final ground truth data during the competition.

Data Contribution & Accessibility

Independent sites are encouraged to contribute acquired raw phantom data (together with defined Regions of Interest) to help expand the final testing dataset. More information for potential data contributors is available here.

All phantom datasets will be made publicly available after the challenge. Participants are encouraged to test their algorithms on additional datasets, such as those available through the Zenodo SyneRBI Community.

Due to challenges associated with data-sharing agreements, no patient data will be included in the current challenge.

Timeline

  • Start: PETRIC2 starts on 17 November 2025. Example code and datasets will be available from this point at https://github.com/SyneRBI/PETRIC2.
  • Finish: PETRIC2 closes on 15 February 2026 23:59 (GMT). Only submissions that fulfil the requirements listed below will be accepted.
  • Last date to make repository open access to qualify for monetary award: 28 February 2026 23:59 (GMT).
  • Announcement of final ranking: 9 March 2026.
  • Award Ceremony: Details will follow soon.

Spirit of the competition

The spirit of the competition is that the algorithm is a general-purpose algorithm, capable of reconstructing clinical PET data. The organising committee reserves the right to disqualify any submission that attempts to violate that spirit.

Steering Committee:

Steering Panel of CCP SyneRBI

Details on the example scripts, submission procedure and metrics

Examples

We provide several examples in https://github.com/SyneRBI/PETRIC2/ called main_*.py (note that main_OSEM.py would clearly not converge to the required MAP solution). These examples can be used as a template for your own modifications and give some indication of how fast your own algorithm is. In addition, you could check the submissions to PETRIC1; more information is available on the PETRIC1 pages. A Docker container with all code pre-installed will also be made available.

Submission via GitHub

The code MUST run on Python 3.12 using SIRF 3.9. We will provide a private template repository for each team to work on.

Note

Teams can submit up to 3 reconstruction algorithms to the challenge (on separate tags).

Each tag in your repository must contain the following (a minimal main.py sketch is shown after the list):

  • a main.py file containing:
    • class Submission which inherits from the CIL class Algorithm (see the scripts)
    • submission_callbacks: list (which could be an empty list)
  • a README.md file with at least the following sections:
    • Author(s) & affiliation(s)
    • Brief description of your algorithm.
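As a rough starting point, here is a minimal, hedged sketch of such a main.py. The constructor arguments and the starting-image attribute are assumptions, and the update rule is a placeholder rather than a real solver, so check it against the provided template and example scripts:

```python
from cil.optimisation.algorithms import Algorithm


class Submission(Algorithm):
    """Skeleton submission: inherits from the CIL Algorithm base class."""

    def __init__(self, data, **kwargs):
        # `data` is assumed to provide the pre-processed acquisition data and
        # an initial image; the attribute name below is an assumption.
        super().__init__(**kwargs)
        self.x = data.OSEM_image.clone()
        self.configured = True  # CIL convention: mark the algorithm as set up

    def update(self):
        # one iteration of your reconstruction algorithm goes here
        pass

    def update_objective(self):
        # monitoring hook called by CIL; append the current objective value
        self.loss.append(0.0)


submission_callbacks = []  # e.g. early-stopping callbacks; may be empty
```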

During the challenge, your pushed code will be run automatically and the results will be posted on the public leaderboard; this also allows you to troubleshoot your code. If you discover problems with our set-up, please create an "Issue".

Computational Resources

We will run all reconstruction algorithms on the STFC cloud computing services platform. Each server will have an AMD EPYC 7452 32-Core CPU and NVIDIA A100-40GB GPU running under Ubuntu 22.04 with CUDA 12.8. Data (e.g. weights of a pre-trained network) can be downloaded before running the reconstruction algorithm but will be limited to 1 GB.

Evaluation

Among all submitted entries, the fastest algorithm to reach the target image quality will be identified. To ensure fairness and comparability, all algorithms will be executed until either:

  1. The relative error across all selected metrics falls below a specified threshold, or
  2. The runtime exceeds one hour.

For each metric, the results from all teams will be ranked according to the wall-clock time required to reach the defined error threshold on a standard computing platform. Rankings will range from worst (1) to best (N), where best indicates the fastest algorithm to reach the threshold. The overall rank for each algorithm will be calculated as the sum of its metric-specific ranks across all datasets. The algorithm achieving the highest total rank will be declared the winner of the challenge. Because of possible variations in wall-clock timing and the use of stochastic algorithms, each reconstruction will be executed ten times, and the median wall-clock time will be used in the final ranking.
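To make the ranking rule concrete, here is a hedged numerical sketch (the array layout and function name are assumptions; the actual evaluation scripts may differ):

```python
import numpy as np

def overall_scores(times):
    """times: wall-clock seconds of shape (teams, datasets, metrics, runs),
    with np.inf where the one-hour budget was exceeded."""
    med = np.median(times, axis=-1)              # median over repeated runs
    order = med.argsort(axis=0).argsort(axis=0)  # 0 = fastest, N-1 = slowest
    ranks = med.shape[0] - order                 # fastest -> N, slowest -> 1
    # overall score: sum of metric-specific ranks across all datasets
    return ranks.reshape(ranks.shape[0], -1).sum(axis=1)

# the team with the highest overall score wins, e.g.:
# winner = int(np.argmax(overall_scores(times)))
```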

Optimisation Problem Details

The optimisation problem is maximum a posteriori (MAP) estimation using the smoothed relative difference prior (RDP), i.e.

$$\widehat{x} = \underset{x \in C}{\arg\max}{\ L(x) - \beta R(x)}$$

where the constraint set $C$ is defined by $x \in C$ iff $x_i \geq 0$ for $i \in M$ and $x_i = 0$ otherwise, for a provided mask $M$. $\beta > 0$ is the regularisation parameter given for each data set. Note that SIRF defines the objective function as above, whereas CIL algorithms minimise a function. The log-likelihood (up to terms independent of the image) is

$$L( \mathbf{y}; \widehat{\mathbf{y}} ) = \sum_{k}\left( y_{k}\log{\widehat{y}_{k}} - \widehat{y}_{k} \right)$$

with $\mathbf{y}$ a vector with the acquired data (histogrammed), and $\widehat{\mathbf{y}}$ the estimated data for a given image $\mathbf{x}$

$$\widehat{\mathbf{y}} = D\left( \mathbf{m} \right)\left( A\mathbf{x} + \mathbf{a} \right)$$

with $\mathbf{m}$ multiplicative factors (corresponding to detection efficiencies and attenuation), $\mathbf{a}$ an “additive” background term (corresponding to an estimate of randoms and scatter, precorrected with $\mathbf{m}$), $A$ an approximation of the line integral operator [6], and $D(.)$ an operator converting a vector to a diagonal matrix.

Due to PET conventions, for some scanners some data bins will always be zero (corresponding to “virtual crystals”), in which case the corresponding elements of $\mathbf{m}$ will also be zero. The corresponding terms in the log-likelihood are defined as zero. All other elements of $\mathbf{a}$ are guaranteed to be strictly positive ($a_{i} > 0$).
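As a hedged numpy sketch of this convention (argument names are illustrative, with `Ax` standing for the precomputed projection $A\mathbf{x}$):

```python
import numpy as np

def log_likelihood(y, m, a, Ax):
    """Poisson log-likelihood sketch: bins with m == 0 ("virtual crystals")
    contribute zero, matching the convention described above."""
    yhat = m * (Ax + a)   # yhat = D(m) (A x + a)
    valid = m > 0         # a > 0 on these bins, so yhat > 0 for x >= 0
    return float(np.sum(y[valid] * np.log(yhat[valid]) - yhat[valid]))
```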

The smoothed Relative Difference Prior is given by the following expression (an illustrative code sketch follows the list of symbols below):

$$ R\left( \mathbf{x} \right) = \frac{1}{2}\sum_{i = 1}^{N}{\sum_{j \in N_{i}}^{}{w_{ij}\kappa_{i}\kappa_{j}\frac{\left( x_{i} - x_{j} \right)^{2}}{x_{i} + x_{j} + \gamma\left| x_{i} - x_{j} \right| + \epsilon}}} $$

with

  • $N$ the number of voxels,

  • $N_{i}$ the neighbourhood of voxel $i$ (here taken as the 8 nearest neighbours in the 3 directions),

  • $w_{ij}$ weight factors (here taken as “horizontal” voxel-size divided by Euclidean distance between the $i$ and $j$ voxels),

  • $\mathbf{\kappa}$ an image giving voxel-dependent weights (here predetermined as an approximation of the square root of minus the row-sum of the Hessian of the log-likelihood at an initial OSEM reconstruction, see eq. 25 in [7], but less sensitive to noise, see here),

  • $\gamma$ an edge-preservation parameter (here taken as 2),

  • $\epsilon$ a small number to ensure smoothness (here predetermined from an initial OSEM reconstruction)
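Purely for illustration, here is a hedged numpy sketch of $R(\mathbf{x})$ restricted to the six face-sharing neighbours (the challenge uses the larger neighbourhood described above, and the "horizontal" voxel-size convention assumed here may differ):

```python
import numpy as np

def smoothed_rdp(x, kappa, voxel_sizes, gamma=2.0, epsilon=1e-9):
    """Unoptimised evaluation of the smoothed RDP on a 3-D array, simplified
    to the 6 face-sharing neighbours. `voxel_sizes` is ordered (z, y, x)."""
    total = 0.0
    horizontal = voxel_sizes[-1]  # assumed in-plane ("horizontal") voxel size
    for axis in range(3):
        sl_i = [slice(None)] * 3
        sl_j = [slice(None)] * 3
        sl_i[axis] = slice(0, -1)    # voxel i
        sl_j[axis] = slice(1, None)  # its neighbour j along this axis
        xi, xj = x[tuple(sl_i)], x[tuple(sl_j)]
        ki, kj = kappa[tuple(sl_i)], kappa[tuple(sl_j)]
        w = horizontal / voxel_sizes[axis]  # w_ij = horizontal size / distance
        diff = xi - xj
        denom = xi + xj + gamma * np.abs(diff) + epsilon
        # each unordered pair (i, j) appears once per axis here, which absorbs
        # the factor 1/2 in the double sum over ordered pairs
        total += np.sum(w * ki * kj * diff**2 / denom)
    return total
```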

Metrics and thresholds

Each dataset contains:

  • $r$: (converged LBFGSB-PC) reference image
  • $W$: (marginally eroded) whole object VOI (volume of interest)
  • $B$: background VOI
  • $R_i$: one or more VOIs (“tumours”, “spheres”, “white/grey matter”, etc.)

Metric calculations (a code sketch of these checks follows the list of definitions below):

  • whole object RMSE: $\frac{RMSE(\theta; W)}{MEAN(r; B)} < 0.01$
  • background RMSE: $\frac{RMSE(\theta; B)}{MEAN(r; B)} < 0.01$
  • VOI AEM (absolute error of the mean): $\frac{\left|MEAN(\theta; R_i) - MEAN(r; R_i)\right|}{MEAN(r; B)} < 0.005$

where:

  • $\theta$: your candidate reconstructed image
  • $RMSE(\cdot; W)$: voxel-wise root mean squared error computed in region $W$ with respect to the reference $r$
  • $MEAN(\cdot; R_i)$: mean for region $R_i$
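The actual checks live in data_QC.py; the following is a hedged numpy sketch of the table above, assuming boolean VOI masks:

```python
import numpy as np

def rmse(theta, r, voi):
    """Voxel-wise RMSE of candidate `theta` vs reference `r` inside boolean mask `voi`."""
    return np.sqrt(np.mean((theta[voi] - r[voi]) ** 2))

def passes_thresholds(theta, r, W, B, ROIs):
    """Check all leaderboard thresholds for one dataset."""
    norm = r[B].mean()                     # MEAN(r; B) normalisation
    ok = rmse(theta, r, W) / norm < 0.01   # whole object RMSE
    ok &= rmse(theta, r, B) / norm < 0.01  # background RMSE
    for Ri in ROIs:                        # VOI absolute error of the mean
        ok &= abs(theta[Ri].mean() - r[Ri].mean()) / norm < 0.005
    return bool(ok)
```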

Reference algorithm

The reference algorithm for PETRIC2 is a preconditioned limited-memory Broyden–Fletcher–Goldfarb–Shanno method with boundary constraints (LBFGSB-PC), inspired by [5]. This algorithm converges to the solution of the maximum a posteriori (MAP) reconstruction problem, providing a reliable benchmark for assessing performance. However, the LBFGSB-PC method typically requires a relatively large number of iterations to reach convergence and does not use subsets, which makes it slower than the more efficient implementations that participants are expected to develop. An example of PET image reconstruction using LBFGSB-PC in SIRF is available in the following notebook. The actual implementation of the preconditioner we used for PETRIC2 is here.
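For orientation only, a generic (unpreconditioned) bound-constrained quasi-Newton baseline can be written with SciPy's L-BFGS-B. This is not the challenge's actual LBFGSB-PC implementation, and `neg_obj_and_grad` is a placeholder for a wrapper around the SIRF objective:

```python
import numpy as np
from scipy.optimize import minimize

def lbfgsb_map(neg_obj_and_grad, x0, mask):
    """Generic bound-constrained quasi-Newton sketch (no preconditioning).

    `neg_obj_and_grad(x)` must return (-objective, -gradient) as a
    (float, 1-D array) pair for a flattened image x; `mask` is the boolean
    image M, and voxels outside it are pinned to zero via (0, 0) bounds.
    """
    bounds = [(0, None) if inside else (0, 0) for inside in mask.ravel()]
    res = minimize(neg_obj_and_grad, x0.ravel(), jac=True,
                   method="L-BFGS-B", bounds=bounds)
    return res.x.reshape(x0.shape)
```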

Example submission

To help you get started we have already created an example submission. Of course, this will most likely not win the challenge, but it should give you an idea of how to implement your own algorithm within the PETRIC2 framework. Check our page with more information on the software available.

Support

Summary

  • Prizes are available for the 3 top-ranked teams that make their code publicly available.
  • Submitted algorithms may use up to 1 GB of data (e.g. pre-trained network weights) included in the repository
  • Submissions need to be based on SIRF and use Python
  • Submissions are via a private GitHub repository
  • The evaluation will be performed as described above.

References

[0] da Costa-Luis, C., Ehrhardt, M. J., Kolbitsch, C., Ovtchinnikov, E., Pasca, E., Thielemans, K., & Tsoumpas, C. (2025). PET Rapid Image Reconstruction Challenge (PETRIC). arXiv:2511.22566.

[1] Nuyts, J., Bequé, D., Dupont, P., & Mortelmans, L. (2002). A Concave Prior Penalizing Relative Differences for Maximum-a-Posteriori Reconstruction in Emission Tomography. IEEE Transactions on Nuclear Science, 49(1), 56–60.

[2] Evgueni Ovtchinnikov, Richard Brown, Christoph Kolbitsch, Edoardo Pasca, Casper da Costa-Luis, Ashley G. Gillman, Benjamin A. Thomas, Nikos Efthymiou, Johannes Mayer, Palak Wadhwa, Matthias J. Ehrhardt, Sam Ellis, Jakob S. Jørgensen, Julian Matthews, Claudia Prieto, Andrew J. Reader, Charalampos Tsoumpas, Martin Turner, David Atkinson, Kris Thielemans (2020) SIRF: Synergistic Image Reconstruction Framework, Computer Physics Communications 249, doi: https://doi.org/10.1016/j.cpc.2019.107087. https://github.com/SyneRBI/SIRF/

[3] Thielemans, K., Tsoumpas, C., Mustafovic, S., Beisel, T., Aguiar, P., Dikaios, N., Jacobson, M.W., 2012. STIR: software for tomographic image reconstruction release 2. Physics in Medicine and Biology 57, 867--883. https://doi.org/10.1088/0031-9155/57/4/867 https://github.com/UCL/STIR/

[4] Jørgensen, J.S., Ametova, E., Burca, G., Fardell, G., Papoutsellis, E., Pasca, E., Thielemans, K., Turner, M., Warr, R., Lionheart, W.R.B., Withers, P.J., 2021. Core Imaging Library - Part I: a versatile Python framework for tomographic imaging. Phil Trans Roy Soc A 379, 20200192. https://doi.org/10.1098/rsta.2020.0192 https://github.com/TomographicImaging/CIL

[5] Tsai, Y.-J., Bousse, A., Ehrhardt, M. J., Stearns, C. W., Ahn, S., & Hutton, B. F. (2018). Fast Quasi-Newton Algorithms for Penalized Reconstruction in Emission Tomography and Further Improvements via Preconditioning. IEEE Transactions on Medical Imaging, 37(4). https://doi.org/10.1109/TMI.2017.2786865

[6] Schramm, G., Thielemans, K., 2024. PARALLELPROJ—an open-source framework for fast calculation of projections in tomography. Front. Nucl. Med. 3. https://doi.org/10.3389/fnume.2023.1324562

[7] Tsai, Y.-J., Schramm, G., Ahn, S., Bousse, A., Arridge, S., Nuyts, J., Hutton, B.F., Stearns, C.W., Thielemans, K., 2020. Benefits of Using a Spatially-Variant Penalty Strength With Anatomical Priors in PET Reconstruction. IEEE Transactions on Medical Imaging 39, 11–22. https://doi.org/10.1109/TMI.2019.2913889
