neutron-haproxy-ovnmeta containers are not up after a compute node is restarted

Bug #1862010 reported by Jakub Libosvar
Affects: tripleo
Status: Fix Released
Importance: Undecided
Assigned to: Jakub Libosvar
Milestone: (none)

Bug Description

After a compute node is powered off and then powered on, the neutron-haproxy-ovnmeta containers are left in Created status, which causes the metadata service to be unavailable. The wrapper for the side-car haproxy container doesn't clean up the orphaned container because it expects the "Exited" status only.

2020-02-04 07:31:41.625 4565 ERROR ovsdbapp.event + nsenter --net=/run/netns/ovnmeta-f1af5172-627c-4e51-b1fd-5f6524e2876c --preserve-credentials -m -t 1 podman run --detach --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/neutron-haproxy-ovnmeta-f1af5172-627c-4e51-b1fd-5f6524e2876c.log -v /var/lib/config-data/puppet-generated/neutron/etc/neutron:/etc/neutron:ro -v /run/netns:/run/netns:shared -v /var/lib/neutron:/var/lib/neutron:z,shared -v /dev/log:/dev/log --net host --pid host --privileged -u root --name neutron-haproxy-ovnmeta-f1af5172-627c-4e51-b1fd-5f6524e2876c undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-neutron-metadata-agent-ovn:20200124.1 /bin/bash -c 'HAPROXY="$(if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then echo "/usr/sbin/haproxy -Ds"; else echo "/usr/sbin/haproxy -Ws"; fi)"; exec $HAPROXY -f /var/lib/neutron/ovn-metadata-proxy/f1af5172-627c-4e51-b1fd-5f6524e2876c.conf'
2020-02-04 07:31:41.625 4565 ERROR ovsdbapp.event Error: error creating container storage: the container name "neutron-haproxy-ovnmeta-f1af5172-627c-4e51-b1fd-5f6524e2876c" is already in use by "d2eaaa321e37a377e6c550204fb7823204f4438b55822b618510323b4f8f726f". You have to remove that container to be able to reuse that name.: that name is already in use
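
A minimal sketch of the cleanup-then-run pattern the wrapper follows (this is not the actual puppet-tripleo wrapper template; NETWORK_ID, IMAGE and the simplified podman invocation are placeholders):

    #!/bin/bash
    # Sketch only: simplified side-car wrapper logic for one network.
    NETWORK_ID="$1"
    IMAGE="$2"
    NAME="neutron-haproxy-ovnmeta-${NETWORK_ID}"

    # Only containers in Exited status are treated as orphaned and removed.
    ORPHANT=$(podman ps -a --filter "name=${NAME}" --filter status=exited --format '{{.ID}}')
    if [ -n "${ORPHANT}" ]; then
        podman rm -f "${ORPHANT}"
    fi

    # A container left in Created status by an ungraceful shutdown is not
    # matched above, so this run fails with the "name is already in use"
    # error shown in the log.
    podman run --detach --name "${NAME}" "${IMAGE}" \
        /usr/sbin/haproxy -Ws -f "/var/lib/neutron/ovn-metadata-proxy/${NETWORK_ID}.conf"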

To reproduce this:

1. Create a VM on the compute node - this causes the metadata agent to spawn a haproxy container.
2. Power off the compute node.
3. Power on the compute node.
4. Start the VM.

The haproxy container doesn't respawn.
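
On the affected compute node, the stuck side-car can be confirmed with a podman query along these lines (the name filter is a substring match):

    podman ps -a --filter name=neutron-haproxy-ovnmeta --format '{{.Names}} {{.Status}}'

In the broken case this shows the haproxy container for the network with a "Created" status instead of "Up".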

This doesn't happen on the master (Ussuri) branch, only on Train and Stein.

Changed in tripleo:
status: New → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to puppet-tripleo (stable/stein)

Fix proposed to branch: stable/stein
Review: https://review.opendev.org/705937

Revision history for this message
Jakub Libosvar (libosvar) wrote :
Changed in tripleo:
assignee: nobody → Jakub Libosvar (libosvar)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to puppet-tripleo (stable/stein)

Reviewed: https://review.opendev.org/705937
Committed: https://git.openstack.org/cgit/openstack/puppet-tripleo/commit/?id=5863f5ff63ee1b95fc3a723d75d7de569358e057
Submitter: Zuul
Branch: stable/stein

commit 5863f5ff63ee1b95fc3a723d75d7de569358e057
Author: Jakub Libosvar <email address hidden>
Date: Tue Feb 4 18:18:58 2020 +0100

    Remove side-car containers in Created status

    The change Ib3c41a8bee349856d21f360595e41a9eafd79323 added a mechanism
    to remove side-car containers before spawning a new one for the same
    network.

    For the dnsmasq side-car, it matches on the container statuses Exited
    and Created, while all remaining containers match on Exited only.
    However, if a node running a side-car is shut down ungracefully, the
    containers end up in Created status and the wrapper script won't be
    able to start them because they already exist.

    This change goes only to Train because in the current master (Ussuri)
    these wrappers were replaced by Ansible.

    Closes-bug: #1862010

    Change-Id: I7909cd18c7a123d64d24ebc33167d415a8cfb228
    (cherry picked from commit 378580d3d8b3909c113e8d5c9bcae1fbf3315376)
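
As a rough sketch of the behaviour after the fix (reusing the placeholder NAME from the sketch in the description, not the merged wrapper template itself), the orphan check now covers both statuses before a new side-car container is started:

    # Sketch only: remove orphaned side-car containers in either Exited or
    # Created status before starting the replacement.
    for STATUS in exited created; do
        ORPHANT=$(podman ps -a --filter "name=${NAME}" --filter "status=${STATUS}" --format '{{.ID}}')
        if [ -n "${ORPHANT}" ]; then
            podman rm -f "${ORPHANT}"
        fi
    done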

tags: added: in-stable-stein
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/puppet-tripleo stein-eol

This issue was fixed in the openstack/puppet-tripleo stein-eol release.
