newbie trainer is a training toolkit designed specifically for the Newbie AI ecosystem.
It supports parameter-efficient fine-tuning of Newbie base models and currently provides two training modes: LoRA and LoKr. The goal is to balance output quality with lower VRAM and compute requirements, so you can quickly get started on both local machines and servers.
The goal of this trainer is to provide Newbie AI users with a solution that is:
- Easy to use: Complete training workflows via configuration files and simple command-line interfaces.
- Highly adapted: Customized and optimized for Newbie model structures and characteristics.
- Extensible: Friendly to secondary development and easy to integrate into your own pipelines (e.g., ComfyUI workflows, batch generation scripts, etc.).
If you are already using Newbie inference models, this trainer will help you quickly fine-tune styles, characters, and artistic directions to build your own personalized models.
- LoRA fine-tuning tailored for Newbie models.
- Suitable for limited VRAM or rapid experimentation scenarios.
- Achieves significant style or behavior changes with only a small number of additional parameters.
- Supports LoKr-based training to further improve parameter efficiency (see the sketch after this list).
- Reduces storage and loading overhead while maintaining representational power.
- More suitable for users who frequently switch or combine multiple LoRA / LoKr adapters.
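To make the parameter-efficiency difference concrete, here is a minimal PyTorch sketch, not taken from this repository, that contrasts how LoRA and LoKr parameterize the same weight update. The layer size, rank, and factor shapes are illustrative assumptions.

```python
import torch

# Illustrative shapes: one frozen linear weight W of size d x k.
d, k = 1024, 1024
r = 16  # assumed LoRA rank (configurable in practice)

# LoRA: delta_W = B @ A, with B (d x r) and A (r x k)
A = torch.zeros(r, k)
B = torch.zeros(d, r)
lora_delta = B @ A                       # shape (d, k)
lora_params = A.numel() + B.numel()      # r * (d + k) = 32768 trainable parameters

# LoKr: delta_W = kron(W1, W2), with W1 (u1 x v1) and W2 (u2 x v2),
# chosen so that u1 * u2 == d and v1 * v2 == k.
u1, v1 = 32, 32
W1 = torch.zeros(u1, v1)
W2 = torch.zeros(d // u1, k // v1)
lokr_delta = torch.kron(W1, W2)          # also shape (d, k)
lokr_params = W1.numel() + W2.numel()    # 2048 trainable parameters

print(lora_delta.shape, lokr_delta.shape)  # torch.Size([1024, 1024]) for both
print(lora_params, lokr_params)            # 32768 vs 2048
```

In practice, LoKr implementations often decompose one of the Kronecker factors further into a low-rank product, but the Kronecker structure above is where the additional parameter savings come from.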
The following steps assume a Python environment. It is recommended to use Python 3.10+ and an NVIDIA GPU with CUDA support on Linux or Windows.
```bash
git clone https://github.com/NewBieAI-Lab/NewbieLoraTrainer.git
cd NewbieLoraTrainer
```

Using venv isolates this project’s dependencies from your system Python environment and avoids conflicts between different projects.
```bash
# Create a virtual environment
python -m venv venv

# Activate the virtual environment
# Windows
venv\Scripts\activate

# Linux / macOS
source venv/bin/activate
```

Please visit the official PyTorch website and choose the correct installation command according to your CUDA version and operating system.
For example (only an example, please adjust to your actual setup):
```bash
pip install torch torchvision
```

Note: If you want GPU acceleration, make sure to install a PyTorch build with CUDA support.
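As a concrete illustration, PyTorch publishes CUDA-enabled wheels under versioned index URLs; the command below assumes a CUDA 12.1 build, so check the PyTorch website for the variant that matches your driver and CUDA toolkit:

```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```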
To further improve training speed and VRAM efficiency, it is recommended to install Flash-Attention and Triton:
```bash
pip install flash-attn
pip install triton
```

Please refer to the tutorials linked below or the official documentation of each project if you run into compilation or CUDA-related issues during installation.
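One common stumbling block: flash-attn compiles CUDA extensions at install time, and its documentation recommends disabling pip's build isolation so it builds against the PyTorch you just installed:

```bash
pip install flash-attn --no-build-isolation
```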
After activating the virtual environment and installing PyTorch, install the remaining dependencies required by this project with:
```bash
pip install -r requirements.txt
```

Once these steps are completed, your basic environment is ready and you can start LoRA / LoKr training following the tutorials.
If this is your first time using the trainer, it is strongly recommended to read the following tutorial documents first.
They explain data preparation, configuration files, command-line examples, and more.
- Chinese Tutorial (recommended for Chinese-speaking users):
  https://www.notion.so/Newbie-AI-lora-2b84f7496d81803db524f5fc4a9c94b9?source=copy_link
- English Tutorial (for international / English-speaking users):
  https://www.notion.so/Newbie-AI-lora-training-tutorial-English-2c2e4ae984ab8177b312e318827657e6?source=copy_link
The tutorials typically cover:
- Detailed environment and dependency explanations
- How to prepare and tag your training dataset
- Example configuration files and parameter descriptions
- Common error patterns and troubleshooting tips
After completing LoRA training, you can use the provided merge_lora.py script to merge the trained LoRA with a base model.
This produces a standalone merged model that can be used directly in environments without native LoRA support.
(LoKr merging is not supported yet.)
Example command (for illustration only):
```bash
python merge_lora.py \
  --base_model /path/to/base/model \
  --lora_path /path/to/trained_lora.safetensors \
  --output_path /path/to/merged_model
```

Please adjust paths and arguments in the script or command line according to your actual setup.
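Conceptually, merging folds the trained low-rank update into the base weights so that no adapter needs to be loaded at inference time. The snippet below is a generic illustration of that idea, not the actual logic of merge_lora.py; the names and the alpha/rank scaling convention are assumptions.

```python
import torch

# Illustrative shapes for a single linear layer.
d, k, r = 1024, 1024, 16
alpha = 16.0                            # assumed LoRA alpha hyperparameter

W_base = torch.randn(d, k)              # weight from the base checkpoint
B = torch.randn(d, r) * 0.01            # trained LoRA factors
A = torch.randn(r, k) * 0.01

# Fold the update into the base weight; the result is an ordinary tensor
# that any runtime without LoRA support can load directly.
W_merged = W_base + (alpha / r) * (B @ A)
```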
If you are using ComfyUI, you can load a trained LoRA directly through the Newbie AI LoRA Loader node, without merging the model beforehand.
Typical workflow:
- Place the trained `.safetensors` LoRA file into ComfyUI’s `loras` directory (or your custom directory).
- Add the Newbie AI LoRA Loader node in your ComfyUI workflow.
- Select the corresponding LoRA file in the node.
- Connect it to your Newbie base model inference pipeline and start image generation.
The overall design and implementation of this trainer are heavily inspired by excellent open-source projects in the community, especially sd-scripts by kohya-ss (https://github.com/kohya-ss/sd-scripts).
The newbie trainer borrows ideas from that project in terms of training flow, parameter design, and parts of the code structure.
We would like to express our sincere thanks to kohya-ss and all contributors to sd-scripts.
This project is released under the Apache License 2.0. Under the terms of this license, you are allowed to:
- Freely use, modify, and distribute this project’s code.
- Integrate it into your personal or commercial projects.
For full details, please refer to the LICENSE file in this repository or the official Apache 2.0 license documentation.
If you have extended or modified this project, we kindly encourage you to credit the original source in your documentation and consider contributing improvements back to the community to help grow the Newbie ecosystem.