k0rdent Cluster Manager is part of k0rdent, an open source project focused on delivering an enterprise-grade, multi-cluster Kubernetes management solution built entirely on standard open source tooling and working across private and public clouds.
We like to say that Project 0x2A (42) is the answer to life, the universe, and everything ... Or, at least, the Kubernetes sprawl we find ourselves faced with in real life!
Detailed documentation is available in the k0rdent Docs.
```bash
helm install kcm oci://ghcr.io/k0rdent/kcm/charts/kcm --version 0.1.0 -n kcm-system --create-namespace
```

Then follow the Deploy a cluster deployment guide to create a cluster deployment.
Note
The KCM installation using Kubernetes manifests does not allow customization of the deployment. To apply a custom KCM configuration, install KCM using the Helm chart.
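For example, assuming your overrides live in a hypothetical `custom-values.yaml` file, they can be passed to the chart with Helm's standard `-f` flag:

```bash
# Install KCM from the OCI chart, applying custom values from a file
helm install kcm oci://ghcr.io/k0rdent/kcm/charts/kcm \
  --version 0.1.0 -n kcm-system --create-namespace \
  -f custom-values.yaml
```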
See Install KCM for development purposes.
KCM requires the following:
- Existing management cluster (minimum required Kubernetes version: 1.28.0).
- `kubectl` CLI installed locally.
Optionally, the following CLIs may be helpful:
- `helm` (required only when installing KCM using `helm`).
- `clusterctl` (to handle the lifecycle of the cluster deployments).
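A quick way to confirm the CLIs are available locally (these are the tools' standard version commands, nothing KCM-specific):

```bash
kubectl version --client   # required
helm version               # only needed for the Helm-based install
clusterctl version         # optional
```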
Full details on the provider configuration can be found in the k0rdent Docs; see Documentation.
```bash
export KUBECONFIG=<path-to-management-kubeconfig>
helm install kcm oci://ghcr.io/k0rdent/kcm/charts/kcm --version <kcm-version> -n kcm-system --create-namespace
```
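After the chart installs, a simple sanity check (plain `kubectl`, nothing KCM-specific) is to confirm the controller pods come up in the `kcm-system` namespace:

```bash
kubectl -n kcm-system get pods
```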
By default, KCM is deployed with the following configuration:
```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: Management
metadata:
  name: kcm
spec:
  providers:
    - name: k0smotron
    - name: cluster-api-provider-aws
    - name: cluster-api-provider-azure
    - name: cluster-api-provider-vsphere
    - name: projectsveltos
  release: kcm-0-1-0
```

There are two options to override the default management configuration of KCM:
- Update the `Management` object after the KCM installation using `kubectl`:

```bash
kubectl --kubeconfig <path-to-management-kubeconfig> edit management
```

- Deploy KCM skipping the default `Management` object creation and provide your own `Management` configuration:

  - Create a `management.yaml` file and configure core components and providers. See Management API.

  - Specify the `--create-management=false` controller argument and install KCM. If installing using `helm`, add the following parameter to the `helm install` command:

```bash
--set="controller.createManagement=false"
```

  - Create the `kcm` `Management` object after KCM installation:

```bash
kubectl --kubeconfig <path-to-management-kubeconfig> create -f management.yaml
```
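Putting the second option together, a minimal sketch of the flow could look like this (only the flag and commands come from the steps above; the contents of `management.yaml` are up to you):

```bash
# Install KCM without creating the default Management object
helm install kcm oci://ghcr.io/k0rdent/kcm/charts/kcm \
  --version <kcm-version> -n kcm-system --create-namespace \
  --set="controller.createManagement=false"

# Then create your own Management object from the file you prepared
kubectl --kubeconfig <path-to-management-kubeconfig> create -f management.yaml
```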
To create a ClusterDeployment:
- Create a `Credential` object with all required credentials. See the Credential system docs for more information regarding this object.
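Once the object exists, it can be listed like any other namespaced resource; a quick check might look like this (the fully qualified resource name below follows the usual CRD plural convention for the `k0rdent.mirantis.com` group, and the namespace is only an example):

```bash
kubectl get credentials.k0rdent.mirantis.com -n kcm-system
```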
- Select the `ClusterTemplate` you want to use for the deployment. To list all available templates, run:

```bash
export KUBECONFIG=<path-to-management-kubeconfig>
kubectl get clustertemplate -n kcm-system
```

If you want to deploy a hosted control plane template, make sure to check the additional notes on Hosted control plane in the k0rdent Docs; see Documentation.
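Since the default configuration for a template is exposed in the corresponding `Template` status (see the dry-run notes below), it can help to inspect a template before writing your own config; a plain `kubectl` sketch:

```bash
kubectl -n kcm-system get clustertemplate <template-name> -o yaml
```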
- Create the file with the `ClusterDeployment` configuration:
Note
Substitute the parameters enclosed in angle brackets with the corresponding
values. Enable the dryRun flag if required.
For details, see Dryrun.
```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  template: <template-name>
  credential: <credential-name>
  dryRun: <true/false>
  config:
    <cluster-configuration>
```

- Create the `ClusterDeployment` object:

```bash
kubectl create -f clusterdeployment.yaml
```
- Check the status of the newly created `ClusterDeployment` object:

```bash
kubectl -n <clusterdeployment-namespace> get ClusterDeployment <clusterdeployment-name> -o=yaml
```
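If you only want the validation progress, the conditions can be pulled out directly with standard `kubectl` JSONPath (the condition types match the status example further below):

```bash
kubectl -n <clusterdeployment-namespace> get clusterdeployment <clusterdeployment-name> -o jsonpath='{.status.conditions}'
```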
- Wait for infrastructure to be provisioned and the cluster to be deployed (the provisioning starts only when `spec.dryRun` is disabled):

```bash
kubectl -n <clusterdeployment-namespace> get cluster <clusterdeployment-name> -o=yaml
```

Note

You may also watch the process with the `clusterctl describe` command (requires the `clusterctl` CLI to be installed):

```bash
clusterctl describe cluster <clusterdeployment-name> -n <clusterdeployment-namespace> --show-conditions all
```
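Instead of polling, you can block until the object reports ready using the standard `kubectl wait` command; the `Ready` condition type is the one shown in the status example below, and the timeout is just an illustration:

```bash
kubectl -n <clusterdeployment-namespace> wait clusterdeployment <clusterdeployment-name> --for=condition=Ready --timeout=30m
```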
- Retrieve the `kubeconfig` of your cluster deployment:

```bash
kubectl get secret -n kcm-system <clusterdeployment-name>-kubeconfig -o=jsonpath={.data.value} | base64 -d > kubeconfig
```
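The retrieved file can then be used like any other kubeconfig, for example:

```bash
KUBECONFIG=./kubeconfig kubectl get nodes
```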
KCM `ClusterDeployment` supports two modes: with `dryRun` enabled and with it disabled (the default). If no configuration (`spec.config`) is provided, the `ClusterDeployment` object will be populated with defaults (the default configuration can be found in the corresponding `Template` status) and automatically marked as `dryRun`.
Here is an example of the ClusterDeployment object with default configuration:
```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  config:
    clusterNetwork:
      pods:
        cidrBlocks:
          - 10.244.0.0/16
      services:
        cidrBlocks:
          - 10.96.0.0/12
    controlPlane:
      iamInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
      instanceType: ""
    controlPlaneNumber: 3
    k0s:
      version: v1.27.2+k0s.0
    publicIP: false
    region: ""
    sshKeyName: ""
    worker:
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: ""
    workersNumber: 2
  template: aws-standalone-cp-0-1-0
  credential: aws-credential
  dryRun: true
```

After you adjust your configuration and ensure that it passes validation (`TemplateReady` condition from `status.conditions`), remove the `spec.dryRun` flag to proceed with the deployment.
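If the object was created with `dryRun` enabled, one way to flip it in place is a standard merge patch (editing your manifest and re-applying it works just as well; the placeholders are the same as above):

```bash
kubectl -n <cluster-namespace> patch clusterdeployment <cluster-name> --type merge -p '{"spec":{"dryRun":false}}'
```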
Here is an example of a ClusterDeployment object that passed the validation:
```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: aws-standalone
  namespace: kcm-system
spec:
  template: aws-standalone-cp-0-1-0
  credential: aws-credential
  config:
    region: us-east-2
    publicIP: true
    controlPlaneNumber: 1
    workersNumber: 1
    controlPlane:
      instanceType: t3.small
    worker:
      instanceType: t3.small
status:
  conditions:
    - lastTransitionTime: "2024-07-22T09:25:49Z"
      message: Template is valid
      reason: Succeeded
      status: "True"
      type: TemplateReady
    - lastTransitionTime: "2024-07-22T09:25:49Z"
      message: Helm chart is valid
      reason: Succeeded
      status: "True"
      type: HelmChartReady
    - lastTransitionTime: "2024-07-22T09:25:49Z"
      message: ClusterDeployment is ready
      reason: Succeeded
      status: "True"
      type: Ready
  observedGeneration: 1
```

- Remove the `Management` object:
```bash
kubectl delete management.k0rdent kcm
```

Note

Make sure you have no KCM `ClusterDeployment` objects left in the cluster prior to `Management` deletion.
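A quick way to verify this with plain `kubectl` across all namespaces:

```bash
kubectl get clusterdeployments -A
```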
- Remove the `kcm` Helm release:

```bash
helm uninstall kcm -n kcm-system
```

- Remove the `kcm-system` namespace:

```bash
kubectl delete ns kcm-system
```