Custom roles nodes don't get iptables rules configured

Bug #1630761 reported by Marius Cornea
Affects: tripleo
Status: Fix Released
Importance: High
Assigned to: Emilien Macchi
Milestone: ocata-1

Bug Description

Nodes in custom roles don't get iptables rules configured. I am doing a deployment with a custom ServiceApi role that contains services moved out of the Controller role.

This is the role_data.yaml: http://paste.openstack.org/show/584561/
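
In case the paste expires, here is a minimal sketch of what such a roles_data.yaml entry looks like; the concrete service list is an assumption inferred from the iptables output below, not copied from the paste.

# Illustrative sketch only; service names are inferred from the
# rules visible below, not quoted from the reporter's definition.
- name: ServiceApi
  ServicesDefault:
    - OS::TripleO::Services::GlanceApi
    - OS::TripleO::Services::NeutronApi
    - OS::TripleO::Services::NovaApi
    - OS::TripleO::Services::NovaMetadata
    # The firewall "service" has to be part of the role's service
    # list, otherwise no iptables rules are managed on its nodes.
    - OS::TripleO::Services::TripleoFirewall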

As a result, the controller nodes get the service-specific iptables rules even though the services are not running on them, while the serviceapi nodes don't get any service-specific iptables rules at all.

Expected result: the service-specific iptables rules are set on the nodes where the service is actually running.

[root@overcloud-serviceapi-0 heat-admin]# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
neutron-openvswi-INPUT all -- 0.0.0.0/0 0.0.0.0/0
nova-api-INPUT all -- 0.0.0.0/0 0.0.0.0/0

Chain FORWARD (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- 0.0.0.0/0 0.0.0.0/0
neutron-openvswi-FORWARD all -- 0.0.0.0/0 0.0.0.0/0
nova-filter-top all -- 0.0.0.0/0 0.0.0.0/0
nova-api-FORWARD all -- 0.0.0.0/0 0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- 0.0.0.0/0 0.0.0.0/0
neutron-openvswi-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
nova-filter-top all -- 0.0.0.0/0 0.0.0.0/0
nova-api-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0

Chain neutron-filter-top (2 references)
target prot opt source destination
neutron-openvswi-local all -- 0.0.0.0/0 0.0.0.0/0

Chain neutron-openvswi-FORWARD (1 references)
target prot opt source destination

Chain neutron-openvswi-INPUT (1 references)
target prot opt source destination

Chain neutron-openvswi-OUTPUT (1 references)
target prot opt source destination

Chain neutron-openvswi-local (1 references)
target prot opt source destination

Chain neutron-openvswi-sg-chain (0 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0

Chain neutron-openvswi-sg-fallback (0 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* Default drop rule for unmatched traffic. */

Chain nova-api-FORWARD (1 references)
target prot opt source destination

Chain nova-api-INPUT (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.16.17.191 tcp dpt:8775

Chain nova-api-OUTPUT (1 references)
target prot opt source destination

Chain nova-api-local (1 references)
target prot opt source destination

Chain nova-filter-top (2 references)
target prot opt source destination
nova-api-local all -- 0.0.0.0/0 0.0.0.0/0

[root@overcloud-controller-0 heat-admin]# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8042 /* 100 aodh_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13042 /* 100 aodh_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8777 /* 100 ceilometer_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13777 /* 100 ceilometer_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8776 /* 100 cinder_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13776 /* 100 cinder_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 9292 /* 100 glance_api_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13292 /* 100 glance_api_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 9191 /* 100 glance_registry_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* 100 glance_registry_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8041 /* 100 gnocchi_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13041 /* 100 gnocchi_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8004 /* 100 heat_api_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13004 /* 100 heat_api_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8000 /* 100 heat_cfn_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13005 /* 100 heat_cfn_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8003 /* 100 heat_cloudwatch_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13003 /* 100 heat_cloudwatch_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 35357 /* 100 keystone_admin_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13357 /* 100 keystone_admin_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 5000 /* 100 keystone_public_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13000 /* 100 keystone_public_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 9696 /* 100 neutron_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13696 /* 100 neutron_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8775 /* 100 nova_metadata_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* 100 nova_metadata_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 6080 /* 100 nova_novncproxy_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13080 /* 100 nova_novncproxy_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8774 /* 100 nova_osapi_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13774 /* 100 nova_osapi_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8080 /* 100 swift_proxy_server_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13808 /* 100 swift_proxy_server_haproxy_ssl */ state NEW

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Revision history for this message
Steven Hardy (shardy) wrote:

OK, this should be handled via the OS::TripleO::Services::TripleoFirewall "service" and its associated Puppet profile; it sounds like we're not wiring the data from that into the custom roles correctly.
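
For context, each composable service template advertises its firewall rules through the config_settings in its role_data output, and the Puppet profile behind TripleoFirewall turns those hiera keys into iptables rules. A rough sketch of the shape follows; the exact key and rule names are assumptions based on the tripleo.<service>.firewall_rules convention, not quotes from tripleo-heat-templates.

# Sketch of a service template's role_data output; the rule name
# mirrors the "100 ..." comments seen in the iptables output above.
outputs:
  role_data:
    value:
      service_name: nova_metadata
      config_settings:
        tripleo.nova_metadata.firewall_rules:
          '100 nova_metadata':
            dport: 8775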

Changed in tripleo:
status: New → Triaged
importance: Undecided → High
milestone: none → ocata-1
Changed in tripleo:
assignee: nobody → Emilien Macchi (emilienm)
Revision history for this message
Emilien Macchi (emilienm) wrote:

Fixing the compute nodes: https://review.openstack.org/#/c/383029/
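
Until a deployment picks up the fix, a hypothetical stopgap is to inject the missing rules by hand through role-specific hieradata. The parameter name below follows the <RoleName>ExtraConfig convention and the hiera key targets the tripleo::firewall class; both are assumptions, not something taken from the review above.

# Hypothetical workaround environment file; parameter and key names
# are assumptions based on TripleO conventions, verify before use.
parameter_defaults:
  ServiceApiExtraConfig:
    tripleo::firewall::firewall_rules:
      '100 nova_metadata':
        dport: 8775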

Changed in tripleo:
status: Triaged → In Progress
Changed in tripleo:
status: In Progress → Fix Released