
Magmaw

Official implementation for Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems.

We validate Magmaw through simulation and then conduct a thorough real-world evaluation with Software-Defined Radio (SDR) hardware.

Prerequisite

Magmaw is implemented with Python 3.7 and PyTorch 1.7.1. We manage the development environment using Conda.

Run the following commands to configure the development environment.

  • Create a conda environment called Magmaw based on Python 3.7, activate it, and clone the repository.
    conda create -n Magmaw python=3.7 --file requirements.txt
    conda activate Magmaw
    git clone https://github.com/Magmaw/Magmaw.git
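
As an optional sanity check (not part of the repository's instructions), you can confirm that the environment exposes the expected versions:

# Quick environment check (optional). Verifies the interpreter and PyTorch
# versions match the ones listed above (Python 3.7, PyTorch 1.7.1).
import sys
import torch

print("Python  :", sys.version.split()[0])   # expected 3.7.x
print("PyTorch :", torch.__version__)        # expected 1.7.1
print("CUDA ok :", torch.cuda.is_available())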

Dataset

- Test Datasets for Image JSCC and Video JSCC

We evaluate the image JSCC and video JSCC models using the UCF-101 dataset from this repo.

- Test Datasets for Speech JSCC and Text JSCC

We evaluate the speech JSCC using a dataset from Edinburgh DataShare.

We select the proceedings of the European Parliament to evaluate the text JSCC.

These datasets can be downloaded here.

- Path Configuration

Please edit the paths for the dataset in configs/config.py.

Checkpoints

The checkpoints can be downloaded here.

Please edit the paths for checkpoints in configs/config.py.
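
Both edits amount to pointing path variables in configs/config.py at local copies of the data and checkpoints. The exact variable names depend on the repository; the entries below are hypothetical and only illustrate the kind of values to set.

# configs/config.py -- illustrative only; the actual variable names in this
# repository may differ. Point these at your local dataset and checkpoint copies.
UCF101_ROOT    = "/data/UCF-101"              # hypothetical: image/video JSCC test set
SPEECH_ROOT    = "/data/edinburgh_datashare"  # hypothetical: speech JSCC test set
EUROPARL_ROOT  = "/data/europarl"             # hypothetical: text JSCC test set
CHECKPOINT_DIR = "/checkpoints/magmaw"        # hypothetical: pretrained JSCC models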

Usage for Simulation Results

To run the black-box attack on the multimodal JSCC models:

python black_box_attack.py

Simulation Results

psr: -16 cd_rate: 1 mod: QAM degree: 16
Attack: random, Modality video, RX psnr : 32.92, RX msssim: 0.909
Attack: black, Modality video, RX psnr : 27.72, RX msssim: 0.805
Attack: random, Modality image, RX psnr : 34.08, RX msssim: 0.925
Attack: black, Modality image, RX psnr : 28.10, RX msssim: 0.823
Attack: random, Modality speech, RX MSE: 0.0000550
Attack: black, Modality speech, RX MSE: 0.0006523
Attack: random, Modality text, RX BLEU_1g: [0.92747291]
Attack: random, Modality text, RX BLEU_2g: [0.8617476]
Attack: random, Modality text, RX BLEU_3g: [0.79970497]
Attack: random, Modality text, RX BLEU_4g: [0.73994322]
Attack: black, Modality text, RX BLEU_1g: [0.48864823]
Attack: black, Modality text, RX BLEU_2g: [0.2488876]
Attack: black, Modality text, RX BLEU_3g: [0.13983744]
Attack: black, Modality text, RX BLEU_4g: [0.08387463]

SDR Implementation

The block diagram of our SDR implementation is shown in the figure above. Following this block diagram, we construct the legitimate transmitter, the legitimate receiver, and the adversarial transmitter.

We use the GNURadio software package to control the USRP SDRs.

We follow the steps below.

  • We first store the symbols encoded by the multimodal JSCC in a .txt file.

  • Then, we feed the stored OFDM symbols to the OFDM transmitter to send the radio signal over the air.

  • The OFDM receiver converts the received signals into complex-valued symbols, and the JSCC decoder restores them to the original data.

  • Note that the transmit power is controlled by adjusting the signal amplitude during the signal generation process, as sketched below.
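
The symbol storage and amplitude scaling steps could look roughly like the following; this is a minimal sketch assuming a simple "real imag" text format and is not the repository's actual I/O code.

# Sketch (assumed format, not the repository's implementation): write complex
# JSCC symbols to a text file for the OFDM transmitter, scaling the amplitude
# to control transmit power as described above.
import numpy as np

def save_symbols(symbols: np.ndarray, path: str, amplitude: float = 1.0) -> None:
    """Store complex symbols as 'real imag' text lines after amplitude scaling."""
    scaled = amplitude * symbols.astype(np.complex64)
    with open(path, "w") as f:
        for s in scaled:
            f.write(f"{s.real:.6f} {s.imag:.6f}\n")

# Example: unit-power QPSK-like symbols scaled to half amplitude (-6 dB power).
symbols = (np.random.choice([-1, 1], 64) + 1j * np.random.choice([-1, 1], 64)) / np.sqrt(2)
save_symbols(symbols, "tx_symbols.txt", amplitude=0.5)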

We show one of the real-world attack scenarios in the figure above.

Real-World Data

Using GNURadio, we store the indices of the constellation points generated after demodulation.

We also store the reference data to compare the results.

Magmaw
├── SDR_results
│   ├── black
│   │   └── image
│   │       ├── ori.txt   /* original input */
│   │       ├── ref.txt   /* simulated output from JSCC */
│   │       └── wir.txt   /* real-world output */
│   └── no
│       └── image
│           ├── ori.txt   /* original input */
│           ├── ref.txt   /* simulated output from JSCC */
│           └── wir.txt   /* real-world output */
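
As an illustration of how these files might be consumed, the sketch below assumes each file stores one constellation-point index per line (the actual format may differ) and compares the over-the-air output against the simulated reference.

# Sketch: compare real-world demodulated indices against the simulated reference.
# Assumes one integer index per line; this format is an assumption.
import numpy as np

def load_indices(path: str) -> np.ndarray:
    with open(path) as f:
        return np.array([int(line.split()[0]) for line in f if line.strip()])

ref = load_indices("SDR_results/black/image/ref.txt")   # simulated JSCC output
wir = load_indices("SDR_results/black/image/wir.txt")   # over-the-air output
n = min(len(ref), len(wir))
print("symbol mismatch rate:", np.mean(ref[:n] != wir[:n]))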

Usage for SDR results

To send the real-world data to the JSCC models:

python evaluate_SDR.py

SDR Results

Validation, Attack: no, Modality: image, TX psnr : 34.37, TX msssim: 0.940, RX psnr : 33.30, RX msssim: 0.927
Validation, Attack: black, Modality: image, TX psnr : 34.37, TX msssim: 0.940, RX psnr : 25.81, RX msssim: 0.724

NDSS'25 Publication

If you use Magmaw in a research work or product, please cite our paper:

@article{chang2023magmaw,
  title={Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems},
  author={Chang, Jung-Woo and Sun, Ke and Heydaribeni, Nasimeh and Hidano, Seira and Zhang, Xinyu and Koushanfar, Farinaz},
  journal={arXiv preprint arXiv:2311.00207},
  year={2023}
}

Detection Framework Report Plan

Additional Planning Documents

ML UAP Detector (JSON + CSV)

  • Train the ML detector with combined JSON and PPDU CSV features (the PPDU CSV is mandatory); a training sketch is shown after the run command below:
    • JSON: detection_details_SNR20_QAM16_*.json
    • CSV: ppdu_feature_table_*.csv

Run:

python ml_uap_detector.py
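
A rough sketch of what this training step might look like is given below; the feature join, label column, and JSON layout are assumptions for illustration, not the script's actual implementation.

# Minimal sketch of the training step; the real ml_uap_detector.py may build and
# join features differently. All column names here (including 'label') are
# assumptions for illustration.
import glob, json
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Load per-detection JSON records and PPDU-level CSV features.
json_rows = []
for path in glob.glob("detection_details_SNR20_QAM16_*.json"):
    with open(path) as f:
        json_rows.extend(json.load(f))                    # assumed: list of dicts per file
json_df = pd.DataFrame(json_rows)

csv_df = pd.concat([pd.read_csv(p) for p in glob.glob("ppdu_feature_table_*.csv")],
                   ignore_index=True)
csv_df.columns = [c.lower() for c in csv_df.columns]      # e.g. R1_mean -> r1_mean

# Assumed: rows are aligned one-to-one between the two sources.
df = pd.concat([json_df.reset_index(drop=True), csv_df.reset_index(drop=True)], axis=1)
y = df.pop("label")                                       # hypothetical label column
X = df.select_dtypes("number").fillna(0.0)

scaler = StandardScaler().fit(X)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X), y)
joblib.dump(clf, "ml_uap_detector.pkl")
joblib.dump(scaler, "ml_uap_scaler.pkl")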

Outputs:

  • metrics_overall.json and metrics_overall.csv (includes tpr, tnr, fpr, precision_attack, auc, and per-label precision/recall)
  • ml_detector_roc.png (ROC curve)
  • ml_detector_shap_summary_bar.png, ml_detector_shap_beeswarm.png (if shap is installed)
  • ml_uap_detector.pkl, ml_uap_scaler.pkl

Evaluate vs. rule-based detector on JSON details:

python evaluate_with_ml_detector.py

This saves:

  • metrics_overall_eval.json/csv and metrics_by_modality.csv (includes TPR/TNR/AUC per modality)
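
For reference, per-modality TPR/TNR/AUC of the kind written to metrics_by_modality.csv can be computed along these lines (a sketch; the column names 'modality', 'label', 'score', and 'pred' are hypothetical):

# Sketch: per-modality TPR / TNR / AUC from per-sample predictions.
# Assumes a DataFrame with columns 'modality', 'label' (1 = attack), 'score', 'pred';
# these names are illustrative, not taken from the repository.
import pandas as pd
from sklearn.metrics import roc_auc_score

def per_modality_metrics(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for modality, g in df.groupby("modality"):
        tp = ((g.pred == 1) & (g.label == 1)).sum()
        tn = ((g.pred == 0) & (g.label == 0)).sum()
        fp = ((g.pred == 1) & (g.label == 0)).sum()
        fn = ((g.pred == 0) & (g.label == 1)).sum()
        rows.append({
            "modality": modality,
            "tpr": tp / max(tp + fn, 1),
            "tnr": tn / max(tn + fp, 1),
            "auc": roc_auc_score(g.label, g.score),
        })
    return pd.DataFrame(rows)

# Usage: per_modality_metrics(predictions_df).to_csv("metrics_by_modality.csv", index=False)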

End-to-End Runtime (No Code Changes)

Measure total wall-clock time for training, evaluation, or the full pipeline using the provided wrapper:

chmod +x measure_runtime.sh

# Training only
./measure_runtime.sh --train-only

# Evaluation only
./measure_runtime.sh --eval-only

# Full pipeline (train then eval) and repeat 3 times
./measure_runtime.sh --both --repeat 3

# Custom CSV output path
./measure_runtime.sh --both --csv my_runtime.csv

Each run appends a row to runtime_results.csv (or the custom path) with the columns:

  • timestamp, script, runtime_sec
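
The wrapper itself is the provided measure_runtime.sh; the Python sketch below only illustrates the same idea of timing a script end-to-end and appending a row with these columns, and is not the wrapper's implementation.

# Illustration only (not measure_runtime.sh itself): time a script end-to-end and
# append timestamp, script, runtime_sec to a CSV, matching the columns above.
import csv, subprocess, time
from datetime import datetime

def measure(script: str, csv_path: str = "runtime_results.csv") -> float:
    start = time.perf_counter()
    subprocess.run(["python", script], check=True)
    runtime_sec = time.perf_counter() - start
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), script, f"{runtime_sec:.2f}"])
    return runtime_sec

measure("evaluate_with_ml_detector.py")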

Runtime Profiling

You can profile the ML detector runtime (feature build + scale + predict) and log results to CSV. The current ML detector uses scikit-learn LogisticRegression (CPU). The --device flag is recorded for bookkeeping (e.g., cpu or cuda:0) but does not change execution.

Examples:

# Profile with batches of 64, warm up 3 batches, limit to 1000 samples
python evaluate_with_ml_detector.py --profile --batch-size 64 --warmup 3 --max-samples 1000

# Profile only (skip metrics report) and write to a custom CSV
python evaluate_with_ml_detector.py --profile --no-eval --profile-csv timing_results.csv --batch-size 32

Outputs:

  • timing_results.csv with columns: timestamp, device, model, batch_size, num_samples, warmup_batches, io_ms, feature_build_ms, scale_ms_total, predict_ms_total, total_ms, latency_ms_mean, latency_ms_p50, latency_ms_p95, throughput_samples_per_s.
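
The per-batch latency and throughput columns above can be understood through the following sketch of a batched timing loop (warm-up batches discarded); this is illustrative and not the code path inside evaluate_with_ml_detector.py.

# Sketch of the batched timing loop behind the profiling columns above.
# Warm-up batches are excluded; throughput is approximate (the last batch may be smaller).
import time
import numpy as np

def profile_predict(model, scaler, X: np.ndarray, batch_size: int = 64, warmup: int = 3):
    latencies_ms = []
    batches = [X[i:i + batch_size] for i in range(0, len(X), batch_size)]
    for i, batch in enumerate(batches):
        start = time.perf_counter()
        model.predict(scaler.transform(batch))          # scale + predict, timed together
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i >= warmup:                                  # discard warm-up batches
            latencies_ms.append(elapsed_ms)
    lat = np.array(latencies_ms)
    return {
        "latency_ms_mean": lat.mean(),
        "latency_ms_p50": np.percentile(lat, 50),
        "latency_ms_p95": np.percentile(lat, 95),
        "throughput_samples_per_s": (len(lat) * batch_size) / (lat.sum() / 1000.0),
    }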

Notes:

  • Feature names are normalized to lowercase with underscores for CSV↔JSON alignment (e.g., R1_mean -> r1_mean).
  • SHAP requires shap==0.41.x (see requirements.txt). LIME is optional and not required for training.
