Scenario 010 failing in load_balancer tests

Bug #1979546 reported by Arx Cruz
Affects: tripleo
Status: Triaged
Importance: Critical
Assigned to: Unassigned

Bug Description

The octavia tempest tests are failing because the load balancer's provisioning_status goes to ERROR:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/octavia_tempest_plugin/tests/scenario/v2/test_load_balancer.py", line 89, in test_load_balancer_ipv4_CRUD
    self._test_load_balancer_CRUD(4)
  File "/usr/lib/python3.6/site-packages/octavia_tempest_plugin/tests/scenario/v2/test_load_balancer.py", line 128, in _test_load_balancer_CRUD
    CONF.load_balancer.lb_build_timeout)
  File "/usr/lib/python3.6/site-packages/octavia_tempest_plugin/tests/waiters.py", line 80, in wait_for_status
    raise exceptions.UnexpectedResponseCode(message)
tempest.lib.exceptions.UnexpectedResponseCode: Unexpected response code received
Details: (LoadBalancerScenarioTest:test_load_balancer_ipv4_CRUD) show_loadbalancer provisioning_status updated to an invalid state of ERROR

Example: https://sf.hosted.upshift.rdu2.redhat.com/logs/14/416314/1/check/periodic-tripleo-ci-centos-8-scenario010-kvm-internal-standalone-train/034f211/logs/undercloud/var/log/tempest/stestr_results.html
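For context, the waiter that raises this exception polls show_loadbalancer until provisioning_status reaches the expected value, and fails fast as soon as the status flips to ERROR. A simplified sketch of that polling logic (names and signature approximate, not the exact octavia_tempest_plugin code):

```python
import time


class UnexpectedResponseCode(Exception):
    """Stand-in for tempest.lib.exceptions.UnexpectedResponseCode."""


def wait_for_status(show_func, object_id, status_key, expected_status,
                    check_interval=1, check_timeout=300):
    """Poll show_func(object_id) until status_key equals expected_status.

    Raises UnexpectedResponseCode immediately if the object enters ERROR,
    which is what produces the traceback above.
    """
    start = time.time()
    while time.time() - start < check_timeout:
        obj = show_func(object_id)
        status = obj[status_key]
        if status == expected_status:
            return obj
        if status == 'ERROR':
            # Mirrors the "updated to an invalid state of ERROR" message.
            raise UnexpectedResponseCode(
                '%s %s updated to an invalid state of ERROR'
                % (show_func.__name__, status_key))
        time.sleep(check_interval)
    raise TimeoutError(
        '%s %s did not reach %s within %ss'
        % (show_func.__name__, status_key, expected_status, check_timeout))
```

So the ERROR here is reported by Octavia itself; the waiter is only surfacing it, which is why the root cause has to be found in the Octavia worker/nova/libvirt logs rather than in tempest.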

Talking with gthiemonge on irc:

[16:53:24] <gthiemonge> arxcruz|rover: hmmm there are timeouts while waiting for a VM to be active: ERROR oslo_messaging.rpc.server octavia.common.exceptions.ComputeWaitTimeoutException: Waiting for compute id c3ba8aa3-eff4-44ae-98a9-fa5f720298c6 to go active timeout
[16:54:29] <gthiemonge> arxcruz|rover: maybe it's related to https://sf.hosted.upshift.rdu2.redhat.com/logs/14/416314/1/check/periodic-tripleo-ci-centos-8-scenario010-kvm-internal-standalone-train/034f211/logs/undercloud/var/log/containers/libvirt/qemu/instance-00000001.log

instance log:

2022-06-22 11:57:15.415+0000: starting up libvirt version: 8.0.0, package: 6.module_el8.7.0+1140+ff0772f9 (CentOS Buildsys <email address hidden>, 2022-05-03-16:43:36, ), qemu version: 6.2.0qemu-kvm-6.2.0-12.module_el8.7.0+1140+ff0772f9, kernel: 4.18.0-394.el8.x86_64, hostname: standalone.localdomain
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
HOME=/var/lib/libvirt/qemu/domain-2-instance-00000001 \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-2-instance-00000001/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-2-instance-00000001/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-2-instance-00000001/.config \
/usr/libexec/qemu-kvm \
-name guest=instance-00000001,debug-threads=on \
-S \
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-instance-00000001/master-key.aes"}' \
-machine pc-i440fx-rhel7.6.0,usb=off,dump-guest-core=off,memory-backend=pc.ram \
-accel kvm \
-cpu host,migratable=on \
-m 1024 \
-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":1073741824}' \
-overcommit mem-lock=off \
-smp 1,sockets=1,dies=1,cores=1,threads=1 \
-uuid 7cce6d14-77c4-499f-9cea-27390e41f677 \
-smbios 'type=1,manufacturer=RDO,product=OpenStack Compute,version=20.6.2-0.20220615154551.e4f8dec.el8,serial=7cce6d14-77c4-499f-9cea-27390e41f677,uuid=7cce6d14-77c4-499f-9cea-27390e41f677,family=Virtual Machine' \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=38,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-object '{"qom-type":"secret","id":"libvirt-2-storage-auth-secret0","data":"nth5AOadju6wxxz0u7HLZyBQeBW/edy4ezH8rlvXbDM=","keyid":"masterKey0","iv":"UZ8yKY2BUy0iB8l+YZ9GNw==","format":"base64"}' \
-blockdev '{"driver":"rbd","pool":"vms","image":"7cce6d14-77c4-499f-9cea-27390e41f677_disk","server":[{"host":"192.168.24.1","port":"3300"}],"user":"openstack","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-auth-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-2-format,id=virtio-disk0,bootindex=1,write-cache=on \
-object '{"qom-type":"secret","id":"libvirt-1-storage-auth-secret0","data":"mld+trPp012U7twJFT98//ytn4mcl6WEElxsm+h1h1A=","keyid":"masterKey0","iv":"A9uAdG50hQ0qUUbsYHecpQ==","format":"base64"}' \
-blockdev '{"driver":"rbd","pool":"vms","image":"7cce6d14-77c4-499f-9cea-27390e41f677_disk.config","server":[{"host":"192.168.24.1","port":"3300"}],"user":"openstack","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-auth-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
-netdev tap,fd=40,id=hostnet0,vhost=on,vhostfd=43 \
-device virtio-net-pci,rx_queue_size=512,host_mtu=1442,netdev=hostnet0,id=net0,mac=fa:16:3e:d7:d4:f2,bus=pci.0,addr=0x3 \
-add-fd set=3,fd=39 \
-chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on \
-device isa-serial,chardev=charserial0,id=serial0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-audiodev '{"id":"audio1","driver":"none"}' \
-vnc 192.168.24.1:1,audiodev=audio1 \
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/2 (label charserial0)
2022-06-22T11:57:15.765015Z qemu-kvm: -device cirrus-vga,id=video0,bus=pci.0,addr=0x2: warning: 'cirrus-vga' is deprecated, please use a different VGA card instead
KVM: entry failed, hardware error 0x8
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00080660
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT= 00000000 0000ffff
IDT= 00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=04 66 41 eb f1 66 83 c9 ff 66 89 c8 66 5b 66 5e 66 5f 66 c3 <ea> 5b e0 00 f0 30 36 2f 32 33 2f 39 39 00 fc 00 ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
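"KVM: entry failed, hardware error 0x8" means the CPU refused the VM entry. On a job like this one, where the standalone node is itself a VM running KVM guests, this is commonly a symptom of nested virtualization being unavailable or misconfigured on the hypervisor chain. A hypothetical sanity check to run on the standalone host (paths are the standard kvm module parameters, not something specific to this job):

```shell
#!/bin/sh
# Sanity checks for nested KVM on the standalone host (illustrative only).

# Does the CPU expose virtualization extensions (vmx = Intel, svm = AMD)?
grep -c -E 'vmx|svm' /proc/cpuinfo || echo "no vmx/svm flags visible to this host"

# Is nested virtualization enabled in the loaded kvm module?
for f in /sys/module/kvm_intel/parameters/nested \
         /sys/module/kvm_amd/parameters/nested; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    fi
done
```

If the flag count is 0 or the `nested` parameter reads `0`/`N`, the L1 guest cannot reliably run KVM guests of its own, which would match the intermittent failure rlandy notes below.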

Arx Cruz (arxcruz)
tags: added: alert

Ronelle Landy (rlandy) wrote:

This failure is not consistent - the job passed today.
