We tried to reproduce the bug, but we did not observe the same behavior when using the .iso file that came out on 6.7.
In our test, the ovs and libvirtd pods seem to work properly when controller-0 is rebooted. Here is the log information:
controller-0:~$ openstack hypervisor list
+----+---------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+---------------+-------+
| 4 | controller-1 | QEMU | 192.168.206.4 | up |
| 6 | controller-0 | QEMU | 192.168.206.3 | up |
+----+---------------------+-----------------+---------------+-------+
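To confirm recovery after the reboot, a small script can poll for the state shown in the table above. This is a minimal sketch, not part of the original report: it assumes the one-state-per-line output of `openstack hypervisor list -f value -c State`, and the `all_up` helper is hypothetical. For clarity the sample below uses canned states matching the table rather than a live CLI call.

```shell
#!/usr/bin/env bash
# Hypothetical helper: succeeds only when every line of stdin is exactly "up".
all_up() { ! grep -qv '^up$'; }

# Sample states matching the hypervisor table above; in practice this would
# come from: openstack hypervisor list -f value -c State
sample='up
up'

if printf '%s\n' "$sample" | all_up; then
  state="all-up"
else
  state="degraded"
fi
echo "$state"
```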
controller-0:/home/wrsroot# kubectl logs -n openstack libvirt-libvirt-default-69mlr
+ '[' -n '' ']'
+ rm -f /var/run/libvirtd.pid
+ [[ -c /dev/kvm ]]
+ chmod 660 /dev/kvm
+ chown root:kvm /dev/kvm
+ CGROUPS=
+ for CGROUP in cpu rdma hugetlb
+ '[' -d /sys/fs/cgroup/cpu ']'
+ CGROUPS+=cpu,
+ for CGROUP in cpu rdma hugetlb
+ '[' -d /sys/fs/cgroup/rdma ']'
+ for CGROUP in cpu rdma hugetlb
+ '[' -d /sys/fs/cgroup/hugetlb ']'
+ CGROUPS+=hugetlb,
+ cgcreate -g cpu,hugetlb:/osh-libvirt
++ cat /proc/meminfo
++ grep HugePages_Total
++ tr -cd '[:digit:]'
+ hp_count=78225
+ '[' 078225 -gt 0 ']'
+ echo 'INFO: Detected hugepage count of '\''78225'\''. Enabling hugepage settings for libvirt/qemu.'
INFO: Detected hugepage count of '78225'. Enabling hugepage settings for libvirt/qemu.
++ grep KVM_HUGEPAGES=0 /etc/default/qemu-kvm
grep: /etc/default/qemu-kvm: No such file or directory
+ '[' -n '' ']'
+ echo KVM_HUGEPAGES=1
+ '[' '!' -d /dev/hugepages ']'
+ '[' -d /sys/fs/cgroup/hugetlb ']'
++ ls /sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.1GB.limit_in_bytes /sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.2MB.limit_in_bytes
+ limits='/sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.1GB.limit_in_bytes
/sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.2MB.limit_in_bytes'
+ for limit in '$limits'
+++ awk -F: '($2~/hugetlb/){print $3}' /proc/self/cgroup
++ dirname /k8s-infra/kubepods/besteffort/pod23696212-9be3-11e9-b11d-3cfdfed21024/ecce25603440da89fe3b6d5539f051e6b47943ac196f73e9afdc694a1113f38c
++ basename /sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.1GB.limit_in_bytes
+ target=/sys/fs/cgroup/hugetlb//k8s-infra/kubepods/besteffort/pod23696212-9be3-11e9-b11d-3cfdfed21024/hugetlb.1GB.limit_in_bytes
+ '[' '!' -f /sys/fs/cgroup/hugetlb//k8s-infra/kubepods/besteffort/pod23696212-9be3-11e9-b11d-3cfdfed21024/hugetlb.1GB.limit_in_bytes ']'
++ cat /sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.1GB.limit_in_bytes
+ echo 9223372036854771712
+ for limit in '$limits'
+++ awk -F: '($2~/hugetlb/){print $3}' /proc/self/cgroup
++ dirname /k8s-infra/kubepods/besteffort/pod23696212-9be3-11e9-b11d-3cfdfed21024/ecce25603440da89fe3b6d5539f051e6b47943ac196f73e9afdc694a1113f38c
++ basename /sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.2MB.limit_in_bytes
+ target=/sys/fs/cgroup/hugetlb//k8s-infra/kubepods/besteffort/pod23696212-9be3-11e9-b11d-3cfdfed21024/hugetlb.2MB.limit_in_bytes
+ '[' '!' -f /sys/fs/cgroup/hugetlb//k8s-infra/kubepods/besteffort/pod23696212-9be3-11e9-b11d-3cfdfed21024/hugetlb.2MB.limit_in_bytes ']'
++ cat /sys/fs/cgroup/hugetlb/k8s-infra/hugetlb.2MB.limit_in_bytes
+ echo 9223372036854771712
++ cat /proc/meminfo
++ grep Hugepagesize
++ tr -cd '[:digit:]'
+ default_hp_kb=2048
++ cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
++ tr -cd '[:digit:]'
+ num_free_pages=78225
+ echo 'INFO: '\''78225'\'' free hugepages of size 2048kB'
+ '[' 078225 -gt 0 ']'
INFO: '78225' free hugepages of size 2048kB
+ fallocate -o0 -l 2048 /dev/hugepages/foo
+ rm /dev/hugepages/foo
+ '[' -n '' ']'
+ exec cgexec -g cpu,hugetlb:/osh-libvirt systemd-run --scope --slice=system libvirtd --listen
Running scope as unit run-715348.scope.
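For reference, the hugepage-count check visible in the trace (`grep HugePages_Total` piped through `tr -cd '[:digit:]'`, then the `'[' 078225 -gt 0 ']'` guard) can be reproduced in isolation. This is a minimal sketch only: it reuses the pipeline from the trace but reads sample text standing in for /proc/meminfo, and the `count_hugepages` function name is ours, not the entrypoint's.

```shell
#!/usr/bin/env bash
# Mirrors the pipeline in the trace: keep only the digits from the
# HugePages_Total line of /proc/meminfo-style input.
count_hugepages() { grep HugePages_Total | tr -cd '[:digit:]'; }

# Sample text standing in for /proc/meminfo on the affected host.
sample='HugePages_Total:   78225
HugePages_Free:    78225'

hp_count="$(printf '%s\n' "$sample" | count_hugepages)"
# The leading 0 guards against an empty hp_count, matching the trace's
# '[' 078225 -gt 0 ']' test.
if [ "0${hp_count}" -gt 0 ]; then
  echo "INFO: Detected hugepage count of '${hp_count}'."
fi
```

With a nonzero count, this prints the same INFO line the entrypoint logs before enabling hugepage settings for libvirt/qemu.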