
dallasn/superdome


Superdome

Superdome: Superposition Density-Optimized Mixture of Experts

A framework for model-to-model knowledge compression

SuperDOME is a large language model architecture that combines orthogonal LoRA superposition with a high-capacity mixture-of-experts framework to bridge the gap between small and large models. It extracts hundreds of LoRA adapters from teacher-student weight differences and compresses them into shared supertensors, enabling dynamic expert selection at inference time while keeping VRAM usage low. The goal is to recover much of a 70B teacher model's capability on a 7B base with minimal VRAM overhead.
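The two core ideas above — extracting adapters from teacher-student weight differences, and superposing many adapters into a shared tensor — can be illustrated with a toy NumPy sketch. This is not taken from the SuperDOME implementation: the shapes, the sign-context binding scheme, and every name here are illustrative assumptions, using truncated SVD for adapter extraction and random-sign binding as a stand-in for whatever orthogonal superposition scheme the project actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hypothetical hidden size and adapter rank

# Toy teacher/student weights: the teacher differs from the student
# by a rank-r update, mimicking a fine-tuning delta.
W_student = rng.standard_normal((d, d)) / np.sqrt(d)
true_delta = (rng.standard_normal((d, r)) @ rng.standard_normal((r, d))) / d
W_teacher = W_student + true_delta

# Step 1: extract a rank-r LoRA adapter (B @ A) from the weight
# difference via truncated SVD.
delta = W_teacher - W_student
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
B = U[:, :r] * S[:r]   # (d, r), "lora_B" scaled by singular values
A = Vt[:r, :]          # (r, d), "lora_A"
# Since this toy delta is exactly rank r, B @ A reconstructs it.

# Step 2: superpose several adapters into one shared "supertensor" by
# binding each with a deterministic random sign context; rebinding with
# the same context retrieves that adapter, the others acting as
# zero-mean noise.
def make_context(key, shape):
    return np.sign(np.random.default_rng(key).standard_normal(shape))

adapters = {k: rng.standard_normal((d, d)) / d for k in range(8)}
supertensor = sum(make_context(k, (d, d)) * v for k, v in adapters.items())

def retrieve(k):
    return make_context(k, (d, d)) * supertensor

# Retrieval is noisy but correlated with the stored adapter; a real
# system would enforce (near-)orthogonality between adapters so that
# interference stays small.
err = np.linalg.norm(retrieve(3) - adapters[3]) / np.linalg.norm(adapters[3])
```

The payoff of the superposition step is memory: eight `d × d` adapters occupy a single `d × d` supertensor, at the cost of retrieval noise that the architecture must keep below the point where it harms the selected expert.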

About

Superposition Density-Optimized Mixture of Experts LLM architecture
