Code repository for the paper "Unlabeled Data Assisted Domain Adaptation for Cross-scene Image Classification"
Domain adaptation (DA) is crucial in cross-scene image classification, enabling models to generalize across domains with varying data distributions. Existing approaches rely on abundant and diverse labeled source data to learn discriminative and transferable features for cross-domain alignment. However, such labeled data are often expensive and limited in remote sensing applications. In contrast, abundant task-relevant unlabeled data are more accessible but remain underutilized, despite containing domain-specific feature distributions that can enhance feature learning. To address this gap, we propose an Unlabeled data Assisted Domain Adaptation (UADA) framework for cross-scene image classification. UADA incorporates task-relevant unlabeled data as an auxiliary source alongside labeled source data to enrich feature diversity and improve the model’s adaptability to the target domain. Specifically, we introduce a progressive pseudo-label optimization strategy that iteratively refines pseudo-labels for unlabeled data through confidence-aware self-labeling. We then employ weight-shared feature extractors to jointly encode labeled and unlabeled source data, enabling the model to learn a unified feature space that captures diverse semantic representations for robust feature alignment. Finally, we construct domain-specific classifiers for each source and adaptively fuse their predictions, effectively harnessing complementary semantic cues for robust target classification. Extensive experiments across multiple tasks show that UADA outperforms existing methods.
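The confidence-aware self-labeling step described above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: the function name, the threshold value, and the use of NumPy are assumptions for demonstration.

```python
import numpy as np

def select_pseudo_labels(logits, threshold=0.95):
    """Keep only predictions whose softmax confidence exceeds `threshold`.

    logits: (N, C) array of classifier outputs for unlabeled samples.
    Returns (indices, labels) for the confidently predicted samples.
    """
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    # confidence-aware selection: discard uncertain samples
    idx = np.nonzero(conf >= threshold)[0]
    return idx, probs[idx].argmax(axis=1)
```

In a progressive scheme, this selection would be repeated over training rounds, with pseudo-labels re-estimated as the model improves, so that initially uncertain samples can be recovered later.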
- Python >= 3.8
- PyTorch >= 1.12
- torchvision
- numpy
- scikit-learn
- matplotlib
Install dependencies via:

```
pip install -r requirements.txt
```
Download the datasets:
https://drive.google.com/file/d/1bQTX3TOE3SSXnnOy5F3rdzVB_MivIXPo/view?usp=drive_link
Organize the datasets:
```
./data/AID/
├── Airfield/
│   ├── airport_1.jpg
│   ├── airport_2.jpg
│   ├── ...
├── Anchorage/
│   ├── port_1.jpg
│   ├── port_2.jpg
│   ├── ...
├── ...
```
Run the code:

```
python train.py
```