Failed to start ContainerManager failed to initialise top level QOS containers #43856

@zetaab

Description

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-1+c0b74ebf3ce26e-dirty", GitCommit:"c0b74ebf3ce26e46b5397ffb5ce71cd02f951130", GitTreeState:"dirty", BuildDate:"2017-03-30T08:19:21Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: openstack
  • OS (e.g. from /etc/os-release): NAME="CentOS Linux"
    VERSION="7 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="7"
    PRETTY_NAME="CentOS Linux 7 (Core)"
  • Kernel (e.g. uname -a): Linux kube-dev-master-2-0 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: ansible
  • Others:

What happened: When creating a new cluster with Kubernetes 1.6.0, two of the three masters always go to NotReady state.

kubectl get nodes
NAME STATUS AGE VERSION
kube-dev-master-1-0 Ready 2m v1.6.0
kube-dev-master-1-1 NotReady 2m v1.6.0
kube-dev-master-2-0 NotReady 2m v1.6.0
kube-dev-node-1-0 Ready 1m v1.6.0
kube-dev-node-2-0 Ready 1m v1.6.0

The kubelet log keeps spamming:
Mar 30 15:20:44 centos7 kubelet: I0330 12:20:44.297149 8724 kubelet.go:1752] skipping pod synchronization - [Failed to start ContainerManager failed to initialise top level QOS containers: failed to create top level Burstable QOS cgroup : Unit kubepods-burstable.slice already exists.]

What you expected to happen: I expect the masters to join the cluster.

How to reproduce it (as minimally and precisely as possible): kubelet args:

ExecStart=/usr/bin/kubelet \
  --kubeconfig=/tmp/kubeconfig \
  --require-kubeconfig \
  --register-node=true \
  --hostname-override=kube-dev-master-2-0 \
  --allow-privileged=true \
  --cgroup-driver=systemd \
  --cluster-dns=10.254.0.253 \
  --cluster-domain=cluster.local \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --v=4 \
  --cloud-provider=openstack \
  --cloud-config=/etc/kubernetes/cloud-config

After restarting the machines everything works normally, but a restart after installation was not needed before 1.6.0.
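A hypothetical workaround sketch (these commands are not from the report): since the error says the kubepods-burstable.slice unit already exists, one could try stopping the stale slice so the kubelet can recreate it, instead of rebooting. This assumes --cgroup-driver=systemd (as in the args above) and a systemd-managed kubelet unit named kubelet.service.

```shell
# Hypothetical workaround, not verified in the report: clear the stale
# QoS slice left over from a previous kubelet run, then restart kubelet.

# Check whether the conflicting slice already exists:
systemctl status kubepods-burstable.slice

# Stop the stale slices so the kubelet can recreate them on startup:
systemctl stop kubepods-burstable.slice
systemctl stop kubepods-besteffort.slice

# Restart the kubelet and watch the log for the same error:
systemctl restart kubelet
journalctl -u kubelet -f
```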

Metadata

Labels

kind/bug: Categorizes issue or PR as related to a bug.
sig/node: Categorizes an issue or PR as relevant to SIG Node.