KubeFoundry

A web-based platform for deploying and managing large language models on Kubernetes with support for multiple inference providers.

Features

  • 🖥️ Web UI: Modern interface for all deployment and management tasks
  • 📦 Model Catalog: Browse curated models or search the entire HuggingFace Hub
  • 🔍 Smart Filtering: Automatically filters models by architecture compatibility
  • 📊 GPU Capacity Warnings: Visual indicators showing whether models fit your cluster's GPU memory
  • ⚡ Autoscaler Integration: Detects cluster autoscaling and provides capacity guidance
  • 🚀 One-Click Deploy: Configure and deploy models without writing YAML
  • 📈 Live Dashboard: Monitor deployments with auto-refresh and status tracking
  • 🔌 Multi-Provider Support: Extensible architecture supporting multiple inference runtimes
  • 🔧 Multiple Engines: vLLM, SGLang, and TensorRT-LLM (via NVIDIA Dynamo)
  • 📥 Installation Wizard: Install providers via Helm directly from the UI
  • 🎨 Dark Theme: Modern dark UI with provider-specific accents

Supported Providers

Provider        Status        Description
NVIDIA Dynamo   ✅ Available   GPU-accelerated inference with aggregated or disaggregated serving
KubeRay         ✅ Available   Ray-based distributed inference

Prerequisites

  • Kubernetes cluster with kubectl configured
  • helm CLI installed
  • GPU nodes with NVIDIA drivers (for GPU-accelerated inference)
  • HuggingFace account (for accessing gated models like Llama)
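
A quick way to confirm these prerequisites from a terminal (the GPU column below assumes nodes advertise the standard nvidia.com/gpu resource exposed by the NVIDIA device plugin):

# Check CLI tooling and cluster access
kubectl version --client
helm version
kubectl cluster-info

# List allocatable GPUs per node
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'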

Quick Start

Option A: Run Locally

Download the latest release for your platform and run:

./kubefoundry

Open the web UI at http://localhost:3001

Requires: kubectl configured with cluster access, helm CLI installed

Option B: Deploy to Kubernetes

kubectl apply -f https://raw.githubusercontent.com/sozercan/kube-foundry/main/deploy/kubernetes/kubefoundry.yaml

# Access via port-forward
kubectl port-forward -n kubefoundry-system svc/kubefoundry 3001:80

Open the web UI at http://localhost:3001

See Kubernetes Deployment for configuration options.
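
Before opening the UI, you can check that KubeFoundry itself came up (the app=kubefoundry label selector below is an assumption; check the manifest for the actual labels):

# Verify the KubeFoundry pod is running
kubectl get pods -n kubefoundry-system

# Optionally block until it is ready (label selector is an assumption)
kubectl wait --for=condition=Ready pod -l app=kubefoundry -n kubefoundry-system --timeout=120s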


1. Install a Provider

Navigate to the Installation page and click Install next to your preferred provider. The UI will guide you through the Helm installation process with real-time status updates.

2. Connect HuggingFace Account

Go to Settings → HuggingFace and click "Sign in with Hugging Face" to connect your account via OAuth. Your token will be automatically distributed to all required namespaces.

Note: A HuggingFace token is required to access gated models like Llama.
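
If you prefer to manage the token by hand instead of the OAuth flow, the equivalent is roughly a plain Kubernetes secret per namespace; the secret name hf-token-secret and key HF_TOKEN below are illustrative, not necessarily KubeFoundry's exact convention:

# Hypothetical manual alternative to the OAuth flow
kubectl create secret generic hf-token-secret \
  --from-literal=HF_TOKEN=<your-huggingface-token> \
  -n <namespace>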

3. Deploy a Model

  1. Navigate to the Models page
  2. Browse the curated catalog or Search HuggingFace for any compatible model
  3. Review GPU memory estimates and fit indicators (✓ fits, ⚠ tight, ✗ exceeds)
  4. Click Deploy on your chosen model
  5. Select Runtime: Choose between NVIDIA Dynamo or KubeRay based on installed runtimes
  6. Configure deployment options (engine, replicas, tensor parallelism, etc.)
  7. Click Create Deployment to launch

Note: Each deployment can use a different runtime. The deployment list shows which runtime each deployment is using.
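
As a rough rule of thumb behind the fit indicators (not necessarily KubeFoundry's exact estimator), model weights alone need about params × bytes-per-parameter of GPU memory, before KV cache and activation overhead:

# Back-of-the-envelope weight memory at FP16/BF16 (2 bytes per parameter)
PARAMS_B=7   # model size in billions of parameters
BYTES=2      # bytes per parameter at FP16/BF16 precision
echo "$((PARAMS_B * BYTES)) GB"   # ≈ 14 GB of weights; budget extra for KV cache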

4. Monitor Your Deployment

Head to the Deployments page to:

  • View real-time status of all deployments
  • See pod readiness and health checks
  • Access logs and deployment details
  • Scale or delete deployments
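
The same information is available outside the UI with standard kubectl commands, for example:

# Inspect pods behind a deployment (names and labels depend on the provider)
kubectl get pods -n <namespace>

# Tail logs from a serving pod
kubectl logs -f <pod-name> -n <namespace>

# Check events if a pod is stuck Pending (common for GPU scheduling issues)
kubectl describe pod <pod-name> -n <namespace>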

5. Access Your Model

Once status shows Running, your model exposes an OpenAI-compatible API. Use kubectl port-forward to access it locally:

# Port-forward to the service (check Deployments page for exact service name)
kubectl port-forward svc/<deployment-name> 8000:8000 -n <namespace>

# List available models
curl http://localhost:8000/v1/models

# Test with a chat completion
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "Hello!"}]}'

Supported Models

KubeFoundry supports any HuggingFace model with a compatible architecture. Browse the curated catalog for tested models, or search HuggingFace Hub for thousands more.

Supported Architectures

When searching HuggingFace, models are filtered by architecture compatibility:

Engine        Supported Architectures
vLLM          LlamaForCausalLM, MistralForCausalLM, Qwen2ForCausalLM, GPT2LMHeadModel, and 40+ more
SGLang        LlamaForCausalLM, MistralForCausalLM, Qwen2ForCausalLM, and 20+ more
TensorRT-LLM  LlamaForCausalLM, GPTForCausalLM, MistralForCausalLM, and 15+ more
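
If you are unsure which architecture a model uses, it is declared in the model's config.json on the HuggingFace Hub; one way to check from the command line (requires jq; gated models also need an Authorization header with your HF token):

# Print the declared architecture(s) for a model
curl -s https://huggingface.co/<org>/<model>/raw/main/config.json | jq '.architectures'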

Authentication (Optional)

KubeFoundry supports optional authentication using your existing kubeconfig OIDC credentials.

To enable, start the server with:

AUTH_ENABLED=true ./kubefoundry

Then use the CLI to login:

kubefoundry login                              # Uses current kubeconfig context
kubefoundry login --server https://example.com # Specify server URL
kubefoundry login --context my-cluster         # Use specific context

The login command extracts your OIDC token and opens the browser automatically.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for development setup and guidelines.
