Anomaly inspection plays an important role in industrial manufacturing. Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data. Although anomaly generation methods have been proposed to augment the anomaly data, they either suffer from poor generation authenticity or inaccurate alignment between the generated anomalies and masks. To address the above problems, we propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model, which utilizes the strong prior information of a latent diffusion model learned from large-scale datasets to enhance generation authenticity under few-shot training data. Firstly, we propose Spatial Anomaly Embedding, which consists of a learnable anomaly embedding and a spatial embedding encoded from an anomaly mask, disentangling the anomaly information into anomaly appearance and location information. Moreover, to improve the alignment between the generated anomalies and the anomaly masks, we introduce a novel Adaptive Attention Re-weighting Mechanism. Based on the disparities between the generated anomaly image and the normal sample, it dynamically guides the model to focus more on the areas with less noticeable generated anomalies, enabling the generation of accurately-matched anomalous image-mask pairs. Extensive experiments demonstrate that our model significantly outperforms the state-of-the-art methods in generation authenticity and diversity, and effectively improves the performance of downstream anomaly inspection tasks.
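The re-weighting idea above can be sketched in a few lines. This is a hedged, numpy-only illustration of the principle (not the paper's implementation): within the anomaly mask, pixels whose generated content differs *less* from the normal sample get *larger* weights, so the model focuses on areas where the anomaly is still weak. The grayscale input shapes and the `temperature` knob are assumptions for illustration.

```python
import numpy as np

def adaptive_attention_weights(generated, normal, mask, temperature=1.0):
    """Sketch of adaptive attention re-weighting: larger weight where the
    generated anomaly is less noticeable (smaller disparity to the normal
    sample), restricted to the anomaly mask. Arrays are (H, W) grayscale;
    `temperature` is a hypothetical smoothing knob, not from the paper."""
    disparity = np.abs(generated - normal)            # per-pixel difference
    # Negate disparity: smaller disparity -> higher score; mask out the rest.
    scores = np.where(mask > 0, -disparity / temperature, -np.inf)
    # Softmax over the masked region so the weights sum to 1.
    scores = scores - scores[mask > 0].max()
    w = np.exp(scores)                                # exp(-inf) -> 0 outside mask
    w = np.where(mask > 0, w, 0.0)
    return w / w.sum()
```

In the actual model this kind of weight map would modulate cross-attention during denoising; the sketch only shows how "less noticeable" regions end up with more attention.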
We first compare our model with DiffAug, CDC, Crop&Paste, SDGAN, DefectGAN, and DFMGAN on anomaly generation quality (measured by Inception Score) and diversity (measured by Intra-cluster Pairwise LPIPS Distance).
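For reference, the quality metric is the standard Inception Score, IS = exp(E_x[KL(p(y|x) || p(y))]), computed from per-image class probabilities of an Inception network. Below is a minimal numpy sketch of the metric itself (not the repository's evaluation code); the `probs` input is assumed to already hold softmax outputs.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from an (n_images, n_classes) array of class
    probabilities, e.g. softmax outputs of an Inception network.
    IS = exp( mean over images of KL(p(y|x) || p(y)) )."""
    probs = np.asarray(probs, dtype=np.float64)
    marginal = probs.mean(axis=0)                     # p(y)
    # Per-image KL divergence between conditional and marginal label dist.
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

A higher score means the generated images are both confidently classified (sharp p(y|x)) and varied across classes (broad p(y)). IC-LPIPS additionally measures diversity of generations within each anomaly cluster via pairwise LPIPS distances.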
We employ the generated data to help the downstream anomaly detection, localization, and classification tasks. For anomaly detection and localization, we train a UNet on our generated data (anomaly image-mask pairs); for anomaly classification, we train a ResNet-18 on our generated data (anomaly images and class labels). We then evaluate the performance on the test dataset.
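Detection and localization performance in this setting is conventionally reported as AUROC (image- or pixel-level). As a self-contained reference, here is a small numpy sketch of pixel-level AUROC via the rank-sum (Mann-Whitney U) identity; this is the standard metric, not the authors' exact evaluation script.

```python
import numpy as np

def pixel_auroc(scores, labels):
    """AUROC from flattened anomaly scores and a 0/1 ground-truth mask,
    using AUROC = (sum of positive ranks - n_pos*(n_pos+1)/2) / (n_pos*n_neg).
    Ties get averaged ranks."""
    scores = np.asarray(scores, dtype=np.float64).ravel()
    labels = np.asarray(labels).ravel().astype(bool)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    # Rank all pixels by score (1-based).
    order = scores.argsort()
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    # Average ranks within groups of equal scores (tie handling).
    for s in np.unique(scores):
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    return float((ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))
```

An AUROC of 1.0 means every anomalous pixel scores above every normal pixel; 0.5 is chance level.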
We employ the UNet trained on our generated data and compare it with state-of-the-art models that are specifically designed for anomaly detection.
@inproceedings{hu2023anomalydiffusion,
title={AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model},
author={Hu, Teng and Zhang, Jiangning and Yi, Ran and Du, Yuzhen and Chen, Xu and Liu, Liang and Wang, Yabiao and Wang, Chengjie},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2024}
}