I tested my reconstruction pipeline with the LEAP (LivermorE AI Projector) framework, and it works well on the official walnut dataset provided in the LEAP examples: the reconstruction is clean and looks as expected.
However, when I switch to the "High-resolution cone-beam scan of twenty-one walnuts" dataset from Zenodo (https://zenodo.org/record/3763412), the reconstruction quality degrades significantly. The volume appears blurry and shows severe artifacts, even though I applied the same FDK-based reconstruction pipeline, including cosine weighting and Ram-Lak filtering.
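For reference, the cosine weighting and Ram-Lak filtering steps I apply can be sketched as below. All geometry numbers here (source-to-detector distance, detector size, pixel pitch) are made-up placeholders for illustration, not the dataset's actual values:

```python
import numpy as np

# Placeholder geometry -- the real values must come from the dataset's
# geometry description, which is exactly what I suspect is mismatched.
sdd = 500.0          # source-to-detector distance (mm), assumed
nrows, ncols = 4, 8  # tiny detector just for the sketch
du = 1.0             # detector pixel pitch (mm), assumed

# Detector coordinate grids centered on the principal ray
u = (np.arange(ncols) - (ncols - 1) / 2.0) * du
v = (np.arange(nrows) - (nrows - 1) / 2.0) * du
uu, vv = np.meshgrid(u, v)

proj = np.ones((nrows, ncols))  # one dummy projection frame

# FDK cosine weighting: sdd / sqrt(sdd^2 + u^2 + v^2) per detector pixel
weighted = proj * sdd / np.sqrt(sdd**2 + uu**2 + vv**2)

# Ram-Lak (ramp) filtering row by row via the FFT; |f| zeroes the DC term
freqs = np.fft.fftfreq(ncols, d=du)
ramp = np.abs(freqs)
filtered = np.real(np.fft.ifft(np.fft.fft(weighted, axis=1) * ramp, axis=1))
```

Since the ramp filter is zero at DC, each filtered row sums to (numerically) zero, which is a quick sanity check that the filtering step behaves as intended.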
I suspect the poor result may be due to one or more of the following:
- A mismatch in the geometry configuration (e.g., source-to-origin distance, source-to-detector distance, voxel size, or detector size).
- Differences in how projection angles are defined, the projection order, or the rotation direction.
- Missing preprocessing steps (e.g., the log transform of the projections, or flat-field normalization).
- Inconsistencies in the file layout, such as the order of the .tif files not matching the angle indexing.
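To make the preprocessing suspicion concrete, here is roughly what I understand raw cone-beam frames need before FDK: flat/dark-field normalization, a negative-log (Beer-Lambert) transform, and a numeric (not lexicographic) sort of the projection files. The arrays below are synthetic stand-ins; in practice the frames would be loaded from the dataset's .tif files:

```python
import re
import numpy as np

rng = np.random.default_rng(0)
dark = np.full((4, 8), 100.0)    # dark-field frame (detector offset)
flat = np.full((4, 8), 4000.0)   # flat-field (open-beam) frame
raw = dark + (flat - dark) * rng.uniform(0.2, 0.9, (4, 8))  # one raw frame

# 1) Flat/dark-field normalization -> transmission values in (0, 1]
trans = (raw - dark) / (flat - dark)
trans = np.clip(trans, 1e-6, 1.0)  # guard against zeros before the log

# 2) Negative-log transform -> line integrals, which the ramp filter expects
sino = -np.log(trans)

# 3) File-order check: sort projection filenames by their numeric index,
#    since lexicographic order would put "scan_10.tif" before "scan_2.tif"
names = ["scan_10.tif", "scan_2.tif", "scan_1.tif"]  # toy example
names.sort(key=lambda s: int(re.search(r"\d+", s).group()))
```

If any of these steps is skipped (or if the dataset ships pre-corrected projections and I am applying the correction twice), I would expect exactly the kind of blur and artifacts I am seeing, so I would appreciate confirmation of what this dataset's .tif files actually contain.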
Has anyone successfully used this Zenodo walnut dataset with LEAP? Are there any recommended parameters or preprocessing steps?