This project leverages FLAN-T5 from Hugging Face to perform dialogue summarization, fine-tunes the model and evaluates it with ROUGE, and detoxifies summaries using PPO and PEFT.
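A minimal sketch of the PEFT and evaluation side of such a pipeline, assuming the Hugging Face `peft`, `transformers`, and `evaluate` libraries and the `google/flan-t5-base` checkpoint; the checkpoint, LoRA hyperparameters, and toy dialogue below are illustrative assumptions, not this project's exact setup:

```python
# Hedged sketch: LoRA-wrapped FLAN-T5 plus ROUGE scoring of a generated summary.
# Model name, hyperparameters, and the toy dialogue are illustrative assumptions.
import evaluate
import torch
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Attach a LoRA adapter so only a small fraction of parameters is trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention projection names
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# ROUGE comparison of a generated summary against a reference.
dialogue = ("Summarize the following conversation.\n"
            "#Person1#: Can we reschedule to Friday?\n"
            "#Person2#: Friday works for me.\nSummary:")
inputs = tokenizer(dialogue, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40)
prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=[prediction],
                       references=["They agree to reschedule the meeting to Friday."])
print(scores)  # rouge1 / rouge2 / rougeL F-measures
```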
Repository for prompt-tuning a language model to improve vague user prompts.
Develop a chatbot that can effectively adapt to context and topic shifts in a conversation, leveraging the Stanford Question Answering Dataset to provide informed and relevant responses, and thereby increasing user satisfaction and engagement.
Implementation of Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning of GPT-2 on the SQuAD dataset for question answering, exploring training efficiency, loss masking, and performance metrics like F1 and Exact Match. Final Course project for Deep Learning at University of Kerman, Spring 2025.
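The loss-masking detail mentioned there can be illustrated with a short, hedged sketch: labels for the prompt tokens are set to -100 so cross-entropy is computed only on the answer span. The checkpoint, LoRA settings, and example QA pair are illustrative assumptions, not this project's exact configuration:

```python
# Hedged sketch of loss masking for causal-LM question answering with a LoRA-wrapped GPT-2.
# Prompt/answer strings and LoRA hyperparameters are illustrative assumptions.
import torch
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
               target_modules=["c_attn"]),  # GPT-2 fused QKV projection
)

prompt = ("Context: The Eiffel Tower is in Paris.\n"
          "Question: Where is the Eiffel Tower?\nAnswer:")
answer = " Paris." + tokenizer.eos_token

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
answer_ids = tokenizer(answer, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, answer_ids], dim=1)

# Loss masking: ignore prompt positions (-100) so only answer tokens contribute to the loss.
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

outputs = model(input_ids=input_ids, labels=labels)
print(outputs.loss)  # this is what gets backpropagated during fine-tuning
```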
⭐️⭐️⭐️ LLMs roadmap: helps you learn, through the lens of the transformers repository, about traditional NLP tasks, parameter-efficient model fine-tuning, low-precision fine-tuning, distributed model training, and other engineering topics.
Parameter-efficient fine-tuning method for dynamic facial expression recognition (Electronics).
Lightweight reasoning-capable LLM built on Qwen3-4B using LoRA and 4-bit inference
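A minimal sketch of the LoRA-plus-4-bit pattern that description suggests, assuming `bitsandbytes` and `accelerate` are available; the checkpoint id, quantization settings, and LoRA hyperparameters are illustrative assumptions:

```python
# Hedged sketch: load a base model in 4-bit (NF4) and run it with a LoRA adapter attached.
# Checkpoint name, quantization settings, and LoRA hyperparameters are illustrative assumptions.
import torch
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model_name = "Qwen/Qwen3-4B"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# Attach a (here untrained) LoRA adapter; in practice you would train it or load saved weights.
model = get_peft_model(
    base_model,
    LoraConfig(task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32,
               target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]),
)
model.print_trainable_parameters()

inputs = tokenizer("Explain why 17 is prime.", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```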
Lightweight Python toolkit for fine-tuning image datasets with Parameter Efficient Fine Tuning (PEFT) and ViTs
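A minimal sketch of what PEFT on a ViT classifier can look like, assuming the `google/vit-base-patch16-224-in21k` checkpoint and an illustrative 10-class head; all names and hyperparameters here are assumptions, not this toolkit's API:

```python
# Hedged sketch: LoRA on a ViT image classifier, keeping the new classification head trainable.
# Checkpoint, label count, and hyperparameters are illustrative assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageClassification

base_model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=10
)

model = get_peft_model(
    base_model,
    LoraConfig(r=16, lora_alpha=16, lora_dropout=0.1,
               target_modules=["query", "value"],  # ViT attention projections
               modules_to_save=["classifier"]),    # train the new head fully
)
model.print_trainable_parameters()

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
print(model(pixel_values=pixel_values).logits.shape)  # (1, 10)
```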
AI Assistant for Customer Support
Language Fusion for Parameter-Efficient Cross-lingual Transfer
LoRA + QLoRA fine-tuning toolkit optimized for Intel Arc Battlemage GPUs
Domain-specific sentiment analysis model: FinBERT fine-tuned with PEFT (LoRA) to classify financial texts into positive, negative, and neutral sentiment. Achieves high accuracy on domain-specific data with minimal computational cost by leveraging parameter-efficient fine-tuning.
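A minimal sketch of that setup, assuming the `ProsusAI/finbert` checkpoint and illustrative LoRA hyperparameters (not this project's exact configuration):

```python
# Hedged sketch: LoRA adapter on a FinBERT-style sequence classifier for 3-way financial sentiment.
# Checkpoint name and hyperparameters are illustrative assumptions.
import torch
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "ProsusAI/finbert"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

model = get_peft_model(
    base_model,
    LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1,
               target_modules=["query", "value"]),  # BERT attention projections
)
model.print_trainable_parameters()

inputs = tokenizer("Quarterly revenue beat expectations.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over the three sentiment classes
```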
KoRA is a novel PEFT method that introduces inter-adapter communication via a CompositionBlock inspired by the Kolmogorov–Arnold Representation Theorem. It composes query, key, and value adapters into a unified representation — achieving robust generalization and cross-domain transfer with minimal parameter overhead.
This project aims to fine-tune an open-source Large Language Model (LLM) to build an enterprise-oriented email response drafting assistant for customer support teams.
This project is an implementation of the paper "Parameter-Efficient Transfer Learning for NLP" (Houlsby et al., Google, ICML 2019).
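The core building block of that paper is a small bottleneck adapter inserted after each transformer sub-layer; a minimal PyTorch sketch (dimensions and initialization details are illustrative assumptions) looks like this:

```python
# Hedged sketch of a Houlsby-style bottleneck adapter: down-project, nonlinearity,
# up-project, residual connection. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()
        # Near-identity initialization so training starts close to the frozen base model.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# During fine-tuning, the base transformer weights stay frozen; only the adapters
# (plus layer norms and the task head, per the paper) are updated.
adapter = Adapter(hidden_size=768)
x = torch.randn(2, 10, 768)   # (batch, sequence, hidden)
print(adapter(x).shape)       # torch.Size([2, 10, 768])
```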
Fine-tuning LLMs on a conversational medical dataset.
An LLM (LLaMA) fine-tuned to work well for mental health assistance.