Closed
Labels
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
Description
CRI-O: 1.17.2
K8s: 1.16.8
Cgroup driver: systemd
We found that kubectl top pods --containers returns the same CPU/memory usage for every container in a pod. I verified this by running crictl stats for two containers in the same pod:
root@K8S-N1:~# crictl ps | grep f432ef16bf3ca
538fbb2396de1 013718a8b3d932355c501429b8e58d0e27597070371ca5a0308ca1af24af0f01 About an hour ago Running nginx 0 f432ef16bf3ca
168141c76d71c 9f1e244751343bbb999ac715eea9c7cd946f532b19e45097c4a86951f4757cff About an hour ago Running kube-proxy 0 f432ef16bf3ca
root@K8S-N1:~# crictl stats 538fbb2396de1
CONTAINER CPU % MEM DISK INODES
538fbb2396de1 5.49 30.65MB 8.194kB 3
root@K8S-N1:~# crictl stats 168141c76d71c
CONTAINER CPU % MEM DISK INODES
168141c76d71c 0.15 30.58MB 32.77kB 10
root@K8S-N1:~# crictl stats --pod f432ef16bf3ca
CONTAINER CPU % MEM DISK INODES
Note that crictl stats --pod returns only the header row, with no per-container rows. So this looks like an internal CRI-O error in collecting/storing metrics.
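For anyone debugging this, a way to cross-check crictl's numbers is to read the container's cgroup directly and compute the CPU percentage from two samples of cumulative usage, the same general approach crictl uses. This is a hedged sketch: the cpu_percent helper and the cgroup path shown in the comments are illustrative assumptions (the exact path under the systemd cgroup driver depends on the pod's slice hierarchy), not CRI-O's actual code.

```shell
#!/bin/sh
# Hypothetical helper: compute CPU % from two samples of cumulative CPU
# usage (in nanoseconds) taken interval_s seconds apart.
cpu_percent() {
  usage1=$1; usage2=$2; interval_s=$3
  # (delta in ns) / (interval in ns) * 100
  awk -v u1="$usage1" -v u2="$usage2" -v s="$interval_s" \
      'BEGIN { printf "%.2f\n", (u2 - u1) / (s * 1e9) * 100 }'
}

# Example with synthetic numbers: 1 second of CPU time over a 2 s window.
cpu_percent 0 1000000000 2

# To sample a real container with the systemd cgroup driver (cgroup v1),
# the scope is typically named crio-<full-container-id>.scope somewhere
# under /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/ -- path is an assumption:
#   CG=$(find /sys/fs/cgroup/cpu,cpuacct -type d -name "crio-<id>.scope")
#   u1=$(cat "$CG/cpuacct.usage"); sleep 2; u2=$(cat "$CG/cpuacct.usage")
#   cpu_percent "$u1" "$u2" 2
```

If the per-container cpuacct.usage values differ but crictl stats --pod comes back empty, that points at the stats aggregation in CRI-O rather than the kernel accounting.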
Thanks