
# llm-fine-tuning

Here are 33 public repositories matching this topic...

The course teaches how to fine-tune LLMs using Group Relative Policy Optimization (GRPO), a reinforcement learning method that improves model reasoning with minimal data. Learn reinforcement fine-tuning (RFT) concepts, reward design, and LLM-as-a-judge evaluation, and deploy training jobs on the Predibase platform.

  • Updated Jun 13, 2025
  • Jupyter Notebook
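The core idea behind GRPO mentioned above is group-relative reward normalization: for each prompt, several completions are sampled and scored, and each completion's advantage is its reward relative to the group's mean and standard deviation, avoiding the need for a separate value model. A minimal sketch of that computation (function and variable names here are illustrative, not from the course):

```python
# Minimal sketch of GRPO's group-relative advantage computation.
# Assumes the standard formulation: sample a group of completions per
# prompt, score each with a reward function, then normalize within the group.
from statistics import mean, stdev


def group_relative_advantages(rewards, eps=1e-8):
    """Normalize a group of per-completion rewards.

    Each advantage is (r - mean(group)) / (std(group) + eps), so the
    policy update favors completions that beat their own group's average.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]


# Example: four sampled completions for one prompt, scored by some
# reward function (e.g., a verifier or an LLM-as-a-judge).
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the advantages are centered within each group, they sum to roughly zero: above-average completions get positive advantages and below-average ones negative, which is what steers the policy during the RL update.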
