Stars
LLM-PowerHouse: Unleash LLMs' potential through curated tutorials, best practices, and ready-to-use code for custom training and inference.
A statistical toolkit for scientific discovery using machine learning
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Script to get your site indexed on Google in less than 48 hours
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
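The core mechanism behind differentially private deep learning libraries like this one is DP-SGD: clip each example's gradient to a fixed norm, then add Gaussian noise to the aggregate. This is a minimal pure-NumPy sketch of that aggregation step, not this library's actual API; the function name and parameters are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step (illustrative sketch):
    clip each per-example gradient to `clip_norm`, sum the clipped
    gradients, add Gaussian noise with std noise_multiplier * clip_norm,
    and average over the batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```

Real libraries differ mainly in how they compute per-example gradients efficiently (vectorized hooks, ghost clipping), which is where the "fast, memory-efficient" claim comes in.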
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
Leveraging BERT and c-TF-IDF to create easily interpretable topics.
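The c-TF-IDF weighting this project (BERTopic) uses treats all documents in a cluster as one "class document" and weights each term by tf_{t,c} * log(1 + A / f_t), where A is the average number of words per class and f_t the term's total frequency. A minimal sketch of that formula in plain Python (the function name and input format are assumptions, not BERTopic's API):

```python
import math
from collections import Counter

def ctfidf(class_docs):
    """Sketch of class-based TF-IDF.
    class_docs: dict mapping class name -> list of token lists.
    Returns dict: class -> {term: c-TF-IDF weight}."""
    # Term frequency per class: concatenate all docs in the class.
    class_tf = {c: Counter(tok for doc in docs for tok in doc)
                for c, docs in class_docs.items()}
    # Term frequency across all classes.
    total_freq = Counter()
    for tf in class_tf.values():
        total_freq.update(tf)
    # A = average number of words per class.
    avg_words = sum(total_freq.values()) / len(class_tf)
    return {
        c: {t: tf_tc * math.log(1 + avg_words / total_freq[t])
            for t, tf_tc in tf.items()}
        for c, tf in class_tf.items()
    }
```

Terms frequent within one class but rare overall get the highest weights, which is what makes the resulting topic descriptions interpretable.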
Seq2Seq project centered on formality transfer
Code for "Image Generation from Scene Graphs", Johnson et al, CVPR 2018
Implementation of ECCV 2020 paper: "Controllable Image Synthesis via SegVAE". Project page: https://yccyenchicheng.github.io/SegVAE/. Paper: https://arxiv.org/abs/2007.08397.
panlybero / gae-dgl
Forked from shionhonda/gae-dgl. Reimplementation of Graph Autoencoder by Kipf & Welling with DGL.
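The graph autoencoder of Kipf & Welling encodes nodes with a GCN and reconstructs edges with an inner-product decoder, sigmoid(Z Zᵀ). A minimal NumPy sketch of that forward pass, independent of DGL (the layer uses the symmetric normalization D̂^{-1/2}(A + I)D̂^{-1/2} from the GCN paper; names and the tanh nonlinearity are illustrative choices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcn_layer(A, X, W):
    """One GCN propagation step: D_hat^{-1/2} (A + I) D_hat^{-1/2} X W."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

def gae_reconstruct(A, X, W):
    """Encode nodes with one GCN layer, then decode the adjacency
    matrix with the inner-product decoder sigmoid(Z Z^T)."""
    Z = np.tanh(gcn_layer(A, X, W))
    return sigmoid(Z @ Z.T)
```

Training fits W so the reconstructed adjacency matches the observed one (e.g. via a weighted cross-entropy over edges), which the forked repo implements with DGL.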