
Conversation

@drowe67 (Owner) commented Nov 11, 2024

Following on from #25

@drowe67 (Owner, Author) commented Mar 19, 2025

Training with fading. I'm logging the command lines here for now, for collation and later inclusion in the BBFM README.

Generate 10 hours of fading samples:

```
octave:67> multipath_samples("lmr60", 8000, 2000, 1, 10*60*60, "h_lmr60_train.f32")
```
Note 10 hours << 205 hours in the training dataset. I think 10 hours or so might be the maximum I can generate due to memory limitations in the current Octave code (TBC). It should be enough, based on the dataset-length arguments used for similar HF fading standards (which use the same channel model). You can also train with other speeds, e.g. 120 km/hr:

```
octave:93> multipath_samples("lmr120", 8000, 2000, 1, 10*60*60, "h_lmr120_train.f32")
```
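A quick sanity check on a generated file might look like the sketch below. This is an assumption on my part, not from the thread: I'm treating the .f32 file as a flat array of float32 fading samples at the Rs = 2000 Hz symbol rate, which would also make the memory pressure plausible (10 hours × 3600 s × 2000 samples/s × 4 bytes ≈ 288 MB). Adjust if the actual layout differs.

```python
# Sketch: sanity-check a generated fading file. Assumed layout (mine, not
# confirmed by this thread): a flat array of little-endian float32 fading
# samples at the Rs = 2000 Hz symbol rate passed to multipath_samples();
# adjust if the file holds e.g. interleaved complex components.
import numpy as np

Rs = 2000  # symbol rate argument to multipath_samples()
h = np.fromfile("h_lmr60_train.f32", dtype=np.float32)
print(f"{len(h)} samples ({4 * len(h) / 2**20:.0f} MiB) "
      f"= {len(h) / Rs / 3600:.2f} hours at Rs = {Rs} Hz")
print(f"mean {h.mean():.3f}  min {h.min():.3f}  max {h.max():.3f}")
```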

Training with fading:

```
python3 ./train_bbfm.py --cuda-visible-devices 0 --sequence-length 400 --batch-size 512 --epochs 100 --lr 0.003 --lr-decay-factor 0.0001 --plot_loss ~/Downloads/tts_speech_16k_speexdsp.f32 250319_bbfm_lmr60 --RdBm -100 --range_RdBm --h_file h_lmr60_train.f32
```
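To train one model per fading speed, the same flags can be swept over the fading files. A hedged sketch, not project code; it simply replays the command line above and assumes the matching h_lmr60_train.f32 and h_lmr120_train.f32 files have been generated:

```python
# Hypothetical sweep over fading speeds, replaying the train_bbfm.py flags
# logged above. Assumes the h_lmr<speed>_train.f32 files exist (see the
# multipath_samples() calls earlier in this thread).
import os
import subprocess

speech = os.path.expanduser("~/Downloads/tts_speech_16k_speexdsp.f32")
for speed in (60, 120):
    subprocess.run(
        ["python3", "./train_bbfm.py",
         "--cuda-visible-devices", "0",
         "--sequence-length", "400", "--batch-size", "512",
         "--epochs", "100", "--lr", "0.003", "--lr-decay-factor", "0.0001",
         "--plot_loss", speech, f"250319_bbfm_lmr{speed}",
         "--RdBm", "-100", "--range_RdBm",
         "--h_file", f"h_lmr{speed}_train.f32"],
        check=True)
```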

Generating loss versus R plots for a trained model:

```
./train_bbfm.py --cuda-visible-devices 0 --sequence-length 400 --batch-size 512 --epochs 100 --lr 0.003 --lr-decay-factor 0.0001 ~/Downloads/tts_speech_16k_speexdsp.f32 tmp --initial-checkpoint 250319_bbfm_lmr60/checkpoints/checkpoint_epoch_100.pth --RdBm -100 --range_RdBm --plot_R 250319_bbfm_lmr60_awgn
```
