JohanLi233/VibyTalk

VibyTalk

Demo

Video demo (demo.mp4): direct inference in the web browser.

Quick Start

0. Install

Install uv, then run:

uv sync

1. Prepare Video Data

Record a 3-minute video of yourself speaking clearly, with good lighting and audio quality, or use synthesized data.
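The pipeline's exact input requirements aren't documented, so a quick pre-flight check on the recording can save a failed run. A minimal sketch, assuming a 180-second minimum and a rough resolution floor (both thresholds are assumptions, and `check_recording` is a hypothetical helper, not part of the repo):

```python
# Hypothetical pre-flight check for a recording; the pipeline's actual
# requirements are not documented, so the thresholds here are assumptions.

def check_recording(duration_s: float, width: int, height: int) -> list[str]:
    """Return a list of problems found with the recording metadata."""
    problems = []
    if duration_s < 180:  # the README asks for ~3 minutes of speech
        problems.append(f"video is {duration_s:.0f}s; record at least 180s")
    if min(width, height) < 256:  # assumed minimum for a usable face crop
        problems.append(f"resolution {width}x{height} may be too low")
    return problems

# Example: a 2-minute 1080p clip fails the duration check.
print(check_recording(120.0, 1920, 1080))
```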

2. Process Data

Place the video in a new directory, then run:

python data_utils/process.py /path/to/new_dir
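`process.py`'s internals aren't shown here; a typical first step in talking-head pipelines is splitting the video into per-frame images and a 16 kHz mono wav (the `aud.wav` name appears later in the real-time step). This sketch only builds the ffmpeg commands; the frame rate, sample rate, and output layout are assumptions:

```python
# Sketch: build the ffmpeg commands a preprocessing step like this commonly
# uses. Paths, fps, and sample rate are assumptions, not taken from process.py.
from pathlib import Path

def ffmpeg_split_cmds(video: str, out_dir: str, fps: int = 25, sr: int = 16000):
    out = Path(out_dir)
    frames_cmd = [
        "ffmpeg", "-i", video,
        "-vf", f"fps={fps}",               # resample video to a fixed frame rate
        str(out / "frames" / "%06d.jpg"),
    ]
    audio_cmd = [
        "ffmpeg", "-i", video,
        "-vn", "-ac", "1", "-ar", str(sr),  # drop video; mono; 16 kHz
        str(out / "aud.wav"),
    ]
    return frames_cmd, audio_cmd

frames_cmd, audio_cmd = ffmpeg_split_cmds("input.mp4", "/path/to/new_dir")
print(" ".join(audio_cmd))
```

Run the commands with `subprocess.run(cmd, check=True)` once the output directories exist.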

3. Train Model

python train.py --dataset_dir /path/to/new_dir --model_size nano --save_dir ./checkpoints

4. Export Model

python export_onnx.py --checkpoint ./checkpoints/nano_300.pth --output model.onnx --model_size nano
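Before deploying, it can help to confirm the exported file loads and to inspect its I/O signature. A minimal sketch using onnxruntime (not listed as a project dependency here, so treat it as an assumed extra; the helper is hypothetical and the model's actual input names are not taken from the repo):

```python
# Optional smoke test for the exported ONNX model. onnxruntime is an assumed
# extra dependency; input/output names vary by model and are just printed here.
import os

def print_onnx_io(model_path: str) -> None:
    """Load an ONNX model and print its input/output names and shapes."""
    import onnxruntime as ort  # imported lazily so the script runs without it
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    for inp in sess.get_inputs():
        print("input :", inp.name, inp.shape)
    for out in sess.get_outputs():
        print("output:", out.name, out.shape)

if os.path.exists("model.onnx"):
    print_onnx_io("model.onnx")
```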

5. Deploy

Real-time Mode

python realtime.py --dataset /path/to/new_dir --wav_path ./processed_data/aud.wav --onnx_model model.onnx --model_size nano
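`realtime.py`'s internals aren't shown; real-time drivers typically slice the audio into one window per video frame. Assuming 25 fps video and 16 kHz audio (both are assumptions), each frame consumes 16000 / 25 = 640 samples. A sketch of that windowing:

```python
# Sketch of per-frame audio windowing for a real-time loop.
# The 16 kHz / 25 fps rates are assumptions, not read from realtime.py.

def frame_slices(n_samples: int, sr: int = 16000, fps: int = 25):
    """Yield (start, end) sample indices, one window per video frame."""
    hop = sr // fps  # 640 samples per frame at the assumed rates
    for start in range(0, n_samples - hop + 1, hop):
        yield (start, start + hop)

# One second of audio yields exactly `fps` windows.
windows = list(frame_slices(16000))
print(len(windows), windows[0], windows[-1])
```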

Extract Dataset for Web

python extract_dataset_data.py --dataset /path/to/new_dir --model_size nano

Web Interface

cd web
pnpm install
pnpm run dev

About

Digital Human in Browsers
