Lada is a tool designed to recover pixelated adult videos (JAV). It helps restore the visual quality of such content, making it more enjoyable to watch.
- Recover Pixelated Videos: Restore pixelated or mosaic scenes in adult videos.
- Watch/Export Videos: Use either the CLI or GUI to watch or export your restored videos.
After opening a file, you can either watch the restored video in real time or export it to a new file to watch later:
Additional settings can be found in the left sidebar.
You can also use the command-line interface (CLI) to restore video(s):
```shell
lada-cli --input <input video path>
```
For more information about additional options, use the `--help` argument.
Tip
Lada writes the restored video to a temporary file before combining it with the audio stream from the original file and saving it to the selected destination.
You can override the default location by setting the TMPDIR environment variable to another location of your choice.
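For example, in a POSIX shell you can redirect the temporary file to a larger disk for a single run (`/mnt/bigdisk` is just a placeholder path):

```shell
# Store Lada's intermediate video file on a bigger/faster disk for this invocation
TMPDIR=/mnt/bigdisk lada-cli --input <input video path>
```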
Lada utilizes specialized models for the two main steps of the processing pipeline: Detection and Restoration. You can choose different models for each task.
Mosaic Restoration Models:
- basicvsrpp-v1.2 (Default): A general-purpose model trained on diverse video scenes. Delivers mostly good results.
- deepmosaics: Restoration model from the project DeepMosaics. Worse quality than basicvsrpp-v1.2.
Note
The DeepMosaics model should be worse in most/all scenarios. It’s integrated because the DeepMosaics project is not maintained anymore, and I wanted to provide an easy way to try it out and compare.
Mosaic Detection Models:
- v3.1-fast (Default): Fast and efficient.
- v3.1-accurate: Slightly more accurate than v3.1-fast, but slower. Not always better than v2.
- v2: Slowest of all, but often provides better mosaic detection than v3.1-accurate; YMMV.
You can configure the models in the side panel or, when using the CLI, by specifying the model path and type as arguments.
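A sketch of what that can look like on the CLI. The option names here are assumptions for illustration only; run `lada-cli --help` to see the actual flags:

```shell
# Hypothetical flag names, shown for illustration; check `lada-cli --help` for the real options
lada-cli --input <input video path> \
  --mosaic-restoration-model basicvsrpp-v1.2 \
  --mosaic-detection-model v3.1-fast
```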
Don't expect this to work perfectly: some scenes can come out pretty good and close to the real thing, while others can be rather meh and show worse artifacts than the original mosaics.
You'll need a GPU and some patience to run the app. If your card has at least 4-6GB of VRAM then it should work out of the box.
The CPU is used for encoding the restored video, so it shouldn't be too slow either. Alternatively, you can use GPU encoding and run both the restoration and encoding tasks on the GPU.
The app also needs quite a bit of RAM for buffering to increase throughput. For 1080p content you should be fine with 6-8 GB of RAM; 4K will need a lot more.
To watch the restored video in real time you'll need a pretty beefy machine, otherwise the player will pause and buffer until the next restored frames are computed. No encoding is done while watching, but additional RAM is used for buffering.
If your GPU is not fast enough to watch the video in real time, you'll have to export it first and watch it later with your favorite media player (export is available in both GUI and CLI).
Technically, running the app on your CPU is also supported, but it will be so slow that it's not really practical.
Here are some performance numbers, measured with Lada v0.7.0 on my available hardware, to give you an idea of what to expect (libx264/CPU codec with default settings; the RTX 3090 results are limited by CPU encoding and could be a lot faster by switching to the NVENC/GPU encoder):
| Video name | Video description | Video duration / resolution / FPS | Lada runtime / FPS (Nvidia RTX 3050, Laptop GPU) | Lada runtime / FPS (Nvidia RTX 3090, Desktop GPU) |
|---|---|---|---|---|
| vid1 | multiple mosaic regions present on all frames | 1m30s / 1920x1080 / 30 FPS | 3m36s / 12 FPS | 1m33s / 30 FPS |
| vid2 | single mosaic region present on all frames | 3m0s / 1920x1080 / 30 FPS | 4m11s / 21 FPS | 2m16s / 39 FPS |
| vid3 | half of the video doesn't have any mosaics, the other half mostly a single mosaic per frame | 41m16s / 852x480 / 30 FPS | 26m30s / 46 FPS | 10m20s / 119 FPS |
The easiest way to install the app (CLI and GUI) on Linux is via Flathub:
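Assuming the Flathub remote is already set up on your system, the usual install command looks like this:

```shell
flatpak install flathub io.github.ladaapp.lada
```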
Note
The Flatpak only works with x86_64 CPUs and Nvidia/CUDA GPUs (Turing or newer: RTX 20xx up to and including RTX 50xx). Ensure your NVIDIA GPU driver is up to date. It can also be used without a GPU, but it will be very slow.
Tip
After installation you should find Lada in your application launcher to start the GUI. You can also run it via `flatpak run io.github.ladaapp.lada`.
Tip
When using the CLI via Flatpak, you need to grant the sandbox filesystem permission so it can access the input/output files:
```shell
flatpak run --filesystem=host --command=lada-cli io.github.ladaapp.lada --input <input video path>
```
You may want to set an alias to make it easier to use:
```shell
alias lada-cli="flatpak run --filesystem=host --command=lada-cli io.github.ladaapp.lada"
```
You could also grant the filesystem permission permanently via Flatseal.
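If you prefer the command line over Flatseal, a standard `flatpak override` achieves the same. This is just a sketch: `--filesystem=host` is very broad, so narrow it to a specific directory if you prefer:

```shell
# Permanently grant the Lada Flatpak access to the host filesystem (user-level override)
flatpak override --user --filesystem=host io.github.ladaapp.lada
```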
Tip
If you installed Lada from Flathub and drag-and-drop doesn't work, your file browser might not support the File Transfer Portal. You can fix this by:
- Switching or updating your file browser to one that supports it.
- Granting the app filesystem permissions (e.g., via Flatseal so it can read files directly).
- Using the 'Open' button to select the file instead of drag-and-drop.
The app is also available via Docker (CLI only). You can get the image ladaapp/lada from Docker Hub with this command:
```shell
docker pull ladaapp/lada:latest
```
Note
The Docker image only works with x86_64 CPUs and Nvidia/CUDA GPUs (Turing or newer: RTX 20xx up to and including RTX 50xx). Ensure your NVIDIA GPU driver is up to date. It can also be used without a GPU, but it will be very slow.
Tip
Make sure that you have installed the NVIDIA Container Toolkit on your system so Docker can pass the GPU through to the container.
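A common smoke test to check that the toolkit and driver are set up correctly (the toolkit mounts `nvidia-smi` into the container, so any minimal image works):

```shell
# Should print your GPU table if GPU passthrough is working
docker run --rm --gpus all ubuntu nvidia-smi
```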
Tip
When using Docker you'll need to make the file/directory available to the container as well as the GPU:
```shell
docker run --rm --gpus all --mount type=bind,src=<input video path>,dst=/mnt ladaapp/lada:latest --input "/mnt/<input video file>"
```
Tip
If you want to use hardware encoders like `hevc_nvenc`, you have to provide the container with the video capability.
With `docker run` you can use `--gpus 'all,"capabilities=compute,video"'`. Learn more here.
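Putting the two together, here is a sketch of the run command above with the video capability added (Lada's own encoder-selection option is not shown; check `lada-cli --help` for it):

```shell
# Same command as above, but with the video capability needed for NVENC encoders
docker run --rm --gpus 'all,"capabilities=compute,video"' \
  --mount type=bind,src=<input video path>,dst=/mnt \
  ladaapp/lada:latest --input "/mnt/<input video file>"
```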
For Windows users, the app (CLI and GUI) is packaged as a standalone .7z archive file. You'll need 7-zip to unpack the files. It is recommended to validate the file after downloading. See the Tip below.
Get the latest release from the Releases Page.
You'll find `lada.exe` and `lada-cli.exe` after extracting the archive.
Note
The Windows release only works with x86_64 CPUs and Nvidia/CUDA GPUs (Turing or newer: RTX 20xx up to and including RTX 50xx). Ensure your NVIDIA GPU driver is up to date. It can also be used without a GPU, but it will be very slow.
Note
Be aware that the first start of `lada.exe` or `lada-cli.exe` can take a while, as Windows Defender or your antivirus scans it first. The next time you open the program it should start fast.
Tip
It is recommended to verify the downloaded file: calculate its SHA256 checksum on your computer and compare it against the value published in the release announcement. They must be the same! This ensures you got the correct and unaltered file, which is especially important if you got it from an unofficial source.
You can do this with PowerShell's `Get-FileHash /path/to/file.7z` or with QuickHash-GUI.
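For example, in PowerShell (SHA256 is the default algorithm, shown explicitly here; the path is a placeholder):

```powershell
# Compute the SHA256 checksum of the downloaded archive
Get-FileHash -Algorithm SHA256 /path/to/file.7z
```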
If the packages above don't work for you then you'll have to follow the Build steps to set up the project.
Note that these instructions are mostly intended for developers to set up their environment to start working on the source code. But you should hopefully be able to follow the instructions even if you aren't a developer.
All packages currently only work with Nvidia cards (or CPU) but there have been reports that, following the Build instructions, newer Intel Xe GPUs and AMD ROCm-compatible cards work as well.
Reach out if you can support packaging the app for other operating systems or hardware.
If you want to start hacking on this project you'll need to install the app from source. Check out the detailed installation guides for Linux and Windows.
For instructions on training your own models and datasets, refer to Training and dataset creation.
If you want to help translating the app you can contribute to existing translations or set up a new language over at Codeberg Translate.
Source code and models are licensed under AGPL-3.0. See the LICENSE.md file for full details.
This project builds upon work done by these fantastic individuals and projects:
- DeepMosaics: Provided code for mosaic dataset creation. Also inspired me to start this project.
- BasicVSR++ / MMagic: Used as the base model for mosaic removal.
- YOLO/Ultralytics: Used for training mosaic and NSFW detection models.
- DOVER: Used to assess video quality of created clips during the dataset creation process to filter out low-quality clips.
- DNN Watermark / PITA Dataset: Used most of its code for creating a watermark detection dataset used to filter out scenes obstructed by text/watermarks/logos.
- NudeNet: Used as an additional NSFW classifier to filter out false positives from our own NSFW segmentation model.
- Twitter Emoji: Provided eggplant emoji as base for the app icon.
- Real-ESRGAN: Used their image degradation model design for our mosaic detection model degradation pipeline.
- BPJDet: Model for detecting human bodies and heads. Used for creating SFW mosaics so that the mosaic detection model can be trained to skip such material.
- CenterFace: Model for detecting human faces. Used for creating SFW mosaics so that the mosaic detection model can be trained to skip such material.
- PyTorch, FFmpeg, GStreamer, GTK and all other folks building our ecosystem