HAProxy wrong configuration during upgrade to 2023.2

Bug #2060823 reported by Przemysław Kuczyński
Affects: kolla-ansible
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

During the upgrade, at the step where the HAProxy containers are restarted, the old configuration for Grafana and OpenSearch is still in place.

We had to manually change
"dest": "/etc/haproxy/haproxy.pem",
to
"dest": "/etc/haproxy/certificates/haproxy.pem",

Upgrade from 2023.1

[NOTICE] (753) : haproxy version is 2.4.24-0ubuntu0.22.04.1
[NOTICE] (753) : path to executable is /usr/sbin/haproxy
[ALERT] (753) : parsing [/etc/haproxy/services.d/grafana.cfg:7] : 'bind 10.0.xxx.xxx:3000' : unable to stat SSL certificate from file '/etc/haproxy/haproxy-internal.pem' : No such file or directory.
[ALERT] (753) : Error(s) found in configuration file : /etc/haproxy/services.d/grafana.cfg
[ALERT] (753) : parsing [/etc/haproxy/services.d/opensearch-dashboards.cfg:10] : 'bind 10.0.xxx.xxx:5601' : unable to stat SSL certificate from file '/etc/haproxy/haproxy-internal.pem' : No such file or directory.
[ALERT] (753) : Error(s) found in configuration file : /etc/haproxy/services.d/opensearch-dashboards.cfg
[ALERT] (753) : parsing [/etc/haproxy/services.d/opensearch.cfg:8] : 'bind 10.0.xxx.xx:9200' : unable to stat SSL certificate from file '/etc/haproxy/haproxy-internal.pem' : No such file or directory.
[ALERT] (753) : Error(s) found in configuration file : /etc/haproxy/services.d/opensearch.cfg
[WARNING] (753) : parsing [/etc/haproxy/services.d/horizon.cfg:7] : a 'http-request' rule placed after a 'use_backend' rule will still be processed before.
[ALERT] (753) : Fatal errors found in configuration.
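All of the alerts above point at the same missing file, so one way to surface every stale certificate reference before HAProxy hits a fatal error is to scan the service configs for crt/ca-file paths that no longer exist. A minimal sketch, using a temporary directory and sample file to stand in for /etc/haproxy/services.d/:

```python
import os
import re
import tempfile

def stale_cert_paths(cfg_dir):
    """Return (file, path) pairs for crt/ca-file arguments that do not exist."""
    stale = []
    pat = re.compile(r"(?:crt|ca-file)\s+(\S+)")
    for name in sorted(os.listdir(cfg_dir)):
        if not name.endswith(".cfg"):
            continue
        with open(os.path.join(cfg_dir, name)) as f:
            for match in pat.finditer(f.read()):
                if not os.path.exists(match.group(1)):
                    stale.append((name, match.group(1)))
    return stale

# Sample config standing in for /etc/haproxy/services.d/grafana.cfg;
# the stale path is the one from the error log above.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "grafana.cfg"), "w") as f:
        f.write("bind 10.0.0.10:3000 ssl crt /etc/haproxy/haproxy-internal.pem\n")
    print(stale_cert_paths(d))
```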

Revision history for this message
Michal Arbet (michalarbet) wrote :

Can you share the configuration? Especially the variables below from globals.yml:

  kolla_enable_tls_internal:
  kolla_enable_tls_external:
  kolla_enable_tls_backend:
  kolla_copy_ca_into_containers:
  enable_letsencrypt:

It's weird that the old path is there, as I can't see it anywhere in the code.

Revision history for this message
Przemysław Kuczyński (przemekkuczynski) wrote :

This is globals.yml:

---
workaround_ansible_issue_8743: yes
kolla_base_distro: "ubuntu"
openstack_release: "2023.2"
kolla_internal_vip_address: "10.xxx.xxx.xxx"
kolla_internal_fqdn: "xxx.xxx.xxx"
kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
kolla_external_fqdn: "xxx.xxx.xxx"
docker_registry: "xxx:4000"
docker_registry_insecure: "yes"
network_address_family: "ipv4"
neutron_plugin_agent: "ovn"
enable_neutron_packet_logging: "yes"
kolla_enable_tls_internal: "yes"
kolla_enable_tls_external: "yes"
kolla_copy_ca_into_containers: "yes"
haproxy_backend_cacert_dir: "/etc/ssl/certs"
openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"
enable_openstack_core: "yes"
enable_mariadb: "no"
use_preconfigured_databases: "no"
database_address: "externaldbserver1.xxxx.xxx"
database_password: xxxx
enable_external_mariadb_load_balancer: "no"
enable_memcached: "yes"
enable_rabbitmq: "no"
rpc_transport_url: xxxx
notify_transport_url: "{{ rpc_transport_url }}"
nova_cell_rpc_transport_url: xxx
nova_cell_notify_transport_url: "{{ nova_cell_rpc_transport_url }}"
enable_barbican: "yes"
enable_blazar: "yes"
enable_ceilometer: "yes"
enable_central_logging: "yes"
enable_cinder: "yes"
enable_fluentd: "yes"
enable_gnocchi: "yes"
enable_gnocchi_statsd: "yes"
enable_grafana: "yes"
enable_masakari: "yes"
enable_neutron_agent_ha: "yes"
enable_neutron_port_forwarding: "yes"
enable_octavia: "no"
enable_prometheus: "yes"
enable_redis: "yes"
enable_skyline: "yes"
ceph_glance_pool_name: "Images"
##
## Moved to "cinder_ceph_backends"
## ceph_cinder_pool_name: "VolumesStandardW1"
ceph_cinder_backup_pool_name: "Backups"
cinder_ceph_backends:
  - name: "rbd-1"
    cluster: "ceph"
    pool: "VolumesStandardW1"
    availability_zone: "W1-az"
    enabled: "{{ cinder_backend_ceph | bool }}"
  - name: "rbd-2"
    cluster: "rbd2"
    pool: "VolumesStandardW2"
    availability_zone: "W2-az"
    enabled: "{{ cinder_backend_ceph | bool }}"
ceph_nova_pool_name: "Vms"
ceph_gnocchi_pool_name: "Gnocchi"
glance_backend_ceph: "yes"
gnocchi_backend_storage: "ceph"
cinder_backend_ceph: "yes"
cinder_backup_driver: "ceph"
nova_backend_ceph: "yes"
nova_compute_virt_type: "kvm"
prometheus_cmdline_extras: "--query.timeout=1m --storage.tsdb.retention.size=1GB"

Revision history for this message
Przemysław Kuczyński (przemekkuczynski) wrote :

Maybe someone else has this issue?

Revision history for this message
Debasis (debamondal) wrote :

I also had this issue while upgrading from 2023.2 to the latest version.
