When the overcloud is deployed with the ceph-ansible ceph-rgw.yaml environment, the public Swift endpoint is not accessible, apparently due to a missing iptables rule on the controller node.
I have deployed the overcloud using:
openstack overcloud deploy \
--deployed-server \
--disable-validations \
--networks-file /vagrant/data/osconfigs/network_data_osdevlocal.yaml \
--roles-file /vagrant/data/osconfigs/roles_data_osdevlocal.yaml \
--templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
-e ~/overcloud-baremetal-deployed.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /vagrant/data/osconfigs/net_bond_with_vlans_osdevlocal.yaml \
-e /vagrant/data/osconfigs/network_isolation_osdevlocal.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml \
-e /vagrant/data/osconfigs/ssl/enable_tls_osdevlocal.yaml \
-e /vagrant/data/osconfigs/ssl/inject_trust_anchor_osdevlocal.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e /vagrant/data/osconfigs/ceph-config-hci.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml
After the successful deployment, I cannot connect to the public Swift endpoint:
openstack endpoint list | grep swift
| 8098eada90714d59a7466704fd95b4b7 | regionOne | swift | object-store | True | internal | http://192.168.3.89:8080/swift/v1/AUTH_%(project_id)s |
| 8d79c0b295064e27bd720e50c06f53bd | regionOne | swift | object-store | True | public | https://10.181.11.4:13808/swift/v1/AUTH_%(project_id)s |
| dce721014c38432f93724a98673f6bff | regionOne | swift | object-store | True | admin | http://192.168.3.89:8080/swift/v1/AUTH_%(project_id)s |
telnet 10.181.11.4 13808
Trying 10.181.11.4...
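The same can be seen with curl (AUTH_test below is just a placeholder project; the request only needs to reach the port):
curl -kv --max-time 5 https://10.181.11.4:13808/swift/v1/AUTH_test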
It looks like this is due to a missing iptables rule on the controller node:
[heat-admin@osd-control0 ~]$ sudo iptables -L | grep 13808
[heat-admin@osd-control0 ~]$
Manually inserting the rule fixes the problem. I noticed the issue whilst testing S3 functionality.
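For reference, the rule I insert is roughly the following (modelled on the rules TripleO generates for the other HAProxy-terminated public endpoints; the exact position in the INPUT chain may vary):
sudo iptables -I INPUT -p tcp -m tcp --dport 13808 -m state --state NEW -j ACCEPT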
I'm using the Victoria release with the following packages on the undercloud:
[vagrant@undercloud ~]$ rpm -qa | grep tripleo
openstack-tripleo-common-13.0.1-0.20201110120403.0c781f8.el8.noarch
ansible-tripleo-ipa-0.2.1-0.20201102132151.c77c8d3.el8.noarch
openstack-tripleo-validations-13.0.1-0.20201103050752.da5d5a6.el8.noarch
python3-tripleo-repos-0.1.1-0.20201103094904.3820739.el8.noarch
openstack-tripleo-common-containers-13.0.1-0.20201110120403.0c781f8.el8.noarch
ansible-tripleo-ipsec-9.3.0-0.20201102122836.0c8693c.el8.noarch
openstack-tripleo-heat-templates-13.0.1-0.20201107080412.2363387.el8.noarch
python3-tripleoclient-14.0.1-0.20201109223918.191b80c.el8.noarch
openstack-tripleo-puppet-elements-13.0.1-0.20201107030509.0c080b2.el8.noarch
tripleo-ansible-2.0.1-0.20201109190405.c5cd09e.el8.noarch
puppet-tripleo-13.4.1-0.20201110122515.6176bd3.el8.noarch
openstack-tripleo-image-elements-12.1.0-0.20201102143922.66d1d0b.el8.noarch
ansible-role-tripleo-modify-image-1.2.1-0.20201102130312.9961cf8.el8.noarch
python3-tripleo-common-13.0.1-0.20201110120403.0c781f8.el8.noarch
[vagrant@undercloud ~]$ rpm -qa | grep ceph
ceph-ansible-4.0.25-1.el8.noarch
puppet-ceph-3.1.2-0.20200929061527.105b71e.el8.noarch
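As a workaround until the template is fixed, the rule can be persisted across deploys with an extra environment file along these lines (a sketch using the generic tripleo::firewall::firewall_rules hiera hook; the rule name/number is arbitrary):
parameter_defaults:
  ExtraConfig:
    tripleo::firewall::firewall_rules:
      '100 ceph rgw public haproxy':
        dport: 13808
        proto: tcp
        action: accept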