Custom roles nodes don't get iptables rules configured
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
tripleo | Fix Released | High | Emilien Macchi |
Bug Description
Custom roles nodes don't get iptables rules configured. I am doing a deployment with a custom ServiceApi role that contains services moved out of the Controller role.
This is the role_data.yaml: http://
As a result, the controller nodes get service-specific iptables rules even though the services are not running on them, while the serviceapi nodes don't get any service-specific iptables rules at all.
Expected result: the service-specific iptables rules are set on the nodes where the service is actually running.
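For reference, a custom role of this kind is defined in roles_data.yaml by moving service entries out of the Controller role's list into the new role's list. The reporter's actual role_data.yaml link above is truncated, so the role contents below are purely illustrative; this is only a sketch of the shape such a definition takes:

```yaml
# Hypothetical roles_data.yaml excerpt; service lists are illustrative,
# not the reporter's actual file.
- name: ServiceApi
  ServicesDefault:
    # API services moved here from the Controller role
    - OS::TripleO::Services::NovaApi
    - OS::TripleO::Services::NeutronApi
    - OS::TripleO::Services::GlanceApi
- name: Controller
  ServicesDefault:
    # remaining controller services, with the API services above removed
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::HAproxy
```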
[root@overcloud
Chain INPUT (policy ACCEPT)
target prot opt source destination
neutron-
nova-api-INPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- 0.0.0.0/0 0.0.0.0/0
neutron-
nova-filter-top all -- 0.0.0.0/0 0.0.0.0/0
nova-api-FORWARD all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- 0.0.0.0/0 0.0.0.0/0
neutron-
nova-filter-top all -- 0.0.0.0/0 0.0.0.0/0
nova-api-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain neutron-filter-top (2 references)
target prot opt source destination
neutron-
Chain neutron-
target prot opt source destination
Chain neutron-
target prot opt source destination
Chain neutron-
target prot opt source destination
Chain neutron-
target prot opt source destination
Chain neutron-
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain neutron-
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* Default drop rule for unmatched traffic. */
Chain nova-api-FORWARD (1 references)
target prot opt source destination
Chain nova-api-INPUT (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.16.17.191 tcp dpt:8775
Chain nova-api-OUTPUT (1 references)
target prot opt source destination
Chain nova-api-local (1 references)
target prot opt source destination
Chain nova-filter-top (2 references)
target prot opt source destination
nova-api-local all -- 0.0.0.0/0 0.0.0.0/0
[root@overcloud
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8042 /* 100 aodh_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13042 /* 100 aodh_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8777 /* 100 ceilometer_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13777 /* 100 ceilometer_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8776 /* 100 cinder_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13776 /* 100 cinder_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 9292 /* 100 glance_api_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13292 /* 100 glance_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 9191 /* 100 glance_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* 100 glance_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8041 /* 100 gnocchi_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13041 /* 100 gnocchi_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8004 /* 100 heat_api_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13004 /* 100 heat_api_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8000 /* 100 heat_cfn_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13005 /* 100 heat_cfn_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8003 /* 100 heat_cloudwatch
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13003 /* 100 heat_cloudwatch
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 35357 /* 100 keystone_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13357 /* 100 keystone_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 5000 /* 100 keystone_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13000 /* 100 keystone_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 9696 /* 100 neutron_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13696 /* 100 neutron_haproxy_ssl */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8775 /* 100 nova_metadata_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* 100 nova_metadata_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 6080 /* 100 nova_novncproxy
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13080 /* 100 nova_novncproxy
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8774 /* 100 nova_osapi_haproxy */ state NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13774 /* 100 nova_osapi_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 8080 /* 100 swift_proxy_
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 13808 /* 100 swift_proxy_
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Changed in tripleo:
assignee: nobody → Emilien Macchi (emilienm)
Changed in tripleo:
status: In Progress → Fix Released
Ok, this should be handled via the OS::TripleO::Services::TripleoFirewall "service" and its associated puppet profile; it sounds like we're not wiring the data from that into the roles correctly.
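If that diagnosis is right, the symptom matches a role definition that omits the firewall service: the per-service firewall rules are collected from each role's own service list, so a role without TripleoFirewall never applies them. A minimal sketch of the expected wiring, assuming the standard service name (the ServiceApi role name is the reporter's custom one; the other entries are illustrative):

```yaml
# Sketch: each custom role needs the firewall service in its
# ServicesDefault list so the firewall rules declared by that
# role's services get applied on that role's nodes.
- name: ServiceApi
  ServicesDefault:
    - OS::TripleO::Services::TripleoFirewall  # without this, no rules are written
    - OS::TripleO::Services::NovaApi
    - OS::TripleO::Services::NeutronApi
```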