Hi All,
Some tests were run to find the point where the memory is allocated.
Just after `config_controller`, the system is using only a few GB:
controller-0:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:            93G        3.2G         84G         47M        5.5G         88G
Swap:            0B          0B          0B
controller-0:~$
Right after the unlock, when the system passes from the "offline" state to "intest", used memory jumps from 5.1 GB to 71 GB.
Commands were run in parallel; `system host-list`:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | disabled    | intest       |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | disabled    | intest       |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
In parallel with free -h:
              total        used        free      shared  buff/cache   available
Mem:            93G        4.9G         86G         45M        1.7G         87G
Swap:            0B          0B          0B
              total        used        free      shared  buff/cache   available
Mem:            93G        5.1G         86G         45M        1.9G         86G
Swap:            0B          0B          0B
              total        used        free      shared  buff/cache   available
Mem:            93G        5.1G         85G         45M        1.9G         86G
Swap:            0B          0B          0B
              total        used        free      shared  buff/cache   available
Mem:            93G         71G         19G         45M        1.9G         20G
Swap:            0B          0B          0B
              total        used        free      shared  buff/cache   available
Mem:            93G         71G         19G         46M        1.9G         20G
Swap:            0B          0B          0B
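If it helps anyone reproduce this, the jump between samples can be spotted automatically by parsing the "used" column of the `free` output. A minimal sketch (the `detect_jump` helper and the 10 GB threshold are my own, not part of any tool):

```shell
# detect_jump: read `free` output lines on stdin and report when the
# "used" column (field 3 of the Mem: line) grows by more than 10 GB
# between consecutive samples. Assumes values are reported in G, as in
# `free -h` on a machine this size.
detect_jump() {
    awk '/^Mem:/ {
        used = $3; sub(/G$/, "", used)
        if (prev != "" && used - prev > 10)
            printf "jump: %sG -> %sG\n", prev, used
        prev = used
    }'
}

# Example with two of the samples captured above:
printf 'Mem: 93G 5.1G 86G\nMem: 93G 71G 19G\n' | detect_jump
```

In practice one would pipe a polling loop into it, e.g. `while true; do free -h; sleep 10; done | detect_jump`.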
This is just with kube-system pods:
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-84cdb6bd7c-w75rk   1/1     Running   1          36m
calico-node-zp8xv                          1/1     Running   1          36m
coredns-84bb87857f-lp8sl                   1/1     Running   1          36m
coredns-84bb87857f-r6mdf                   0/1     Pending   0          36m
kube-apiserver-controller-0                1/1     Running   1          35m
kube-controller-manager-controller-0       1/1     Running   2          35m
kube-proxy-w7sfq                           1/1     Running   1          36m
kube-scheduler-controller-0                1/1     Running   2          35m
tiller-deploy-d87d7bd75-hjb7w              1/1     Running   1          36m