This platform lets users collect LLM response correction feedback, curate datasets, generate JSONL exports, and build QLoRA adapters on demand using GPU jobs, without retraining full models.
SchemaBank: 3x improvement over LoRA via sparse routing as a training curriculum. Research code for parameter-efficient fine-tuning.
Complete guide + scripts for training HunyuanVideo 1.5 I2V LoRAs on AMD GPUs (ROCm). Battle-tested on RX 9700 XT.
A tool to analyse images and captions for LoRA datasets.
This project focuses on fine-tuning Meta's LLaMA 2 model to develop a domain-specific medical chatbot capable of understanding and responding to patient and clinician queries with high accuracy. Leveraging parameter-efficient fine-tuning techniques (LoRA and QLoRA), the project ensures resource-efficient training while maintaining high performance.
Research on transfer learning and Explainable AI (XAI) in clinical imaging for classifying chest X-rays, providing visual evidence (heatmaps) to justify the model's clinical predictions.
Lofi's Lora Data Prep tool with a GUI to make curating your images easy.
REVA4 Research Initiative
Lightweight scientific paper summarizer using LoRA fine-tuning and RAG-based Q&A
[NVIDIA ONLY] Optimized training script for Ace-Step with low-VRAM support for local GPUs (requires 8 GB VRAM).
🛠️ Build and manage SDXL containers seamlessly across Linux, macOS, and Windows for efficient machine learning workflows.
An Arabic news credibility checking platform that involved fine-tuning an LLM, building an API, and hosting it in the cloud on Azure.
Code for the paper comparing LoRA on Vision Transformer vs. traditional fine-tuning for CNNs on MNIST, Fashion-MNIST, and CIFAR-10.
Chiral Narrative Synthesis workspace for Thinker/Tinker LoRA pipelines, semantic fact-checking, telemetry, and reviewer-ready CNS docs.
Official code for ReactionTeam: Teaming Experts for Divergent Thinking Beyond Typical Reaction Patterns (IEEE BigData 2025, oral).
A container for SDXL.
🔍 Monitor security feeds to collect alerts on vulnerabilities and updates, delivering real-time notifications to Discord for quick response.
A CLI tool to train LoRA adapters for text-to-image models using a folder of images with intelligent content-aware cropping
Elixir SDK for the Tinker ML platform—LoRA training, sampling, and service orchestration built on OTP, Finch, and telemetry.
A local-first, self-hosted Asset Management System (AMS) focusing on the relationships between entities (real and fictional, human and non-human) and connected assets.
Add a description, image, and links to the lora-training topic page so that developers can more easily learn about it.
To associate your repository with the lora-training topic, visit your repo's landing page and select "manage topics."