nova-api crashes when using ipv6-address for metadata API

Bug #1340641 reported by Ville Salmela
This bug affects 5 people
Affects: OpenStack Compute (nova)
Status: Opinion
Importance: Low
Assigned to: Unassigned

Bug Description

I'm doing an OpenStack Icehouse controller installation inside VirtualBox with an IPv6 configuration, and I hit this while installing nova.

When I use an IPv6 address for the metadata API (metadata_listen = 2001:db8:0::1, metadata_host = 2001:db8:0::1), nova-api crashes soon after launching, while with IPv4 everything seems to run like a charm (metadata_listen = 198.168.0.1, metadata_host = 198.168.0.1).

For example, when I restart my nova processes and run the 'nova list' command twice as root, the following occurs:

# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

# nova list
ERROR: HTTPConnectionPool(host='ctrl', port=8774): Max retries exceeded with url: /v2/d117e271b78248de8a26e572197fd149/servers/detail (Caused by <class 'socket.error'>: [Errno 111] Connection refused)

Here is the trace from nova-api.log:

2014-05-16 20:41:28.602 22728 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-05-16 20:41:28.646 22728 DEBUG nova.openstack.common.processutils [-] Result was 2 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-05-16 20:41:28.646 22728 DEBUG nova.openstack.common.processutils [-] ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'iptables-restore', '-c'] failed. Retrying. execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:199
2014-05-16 20:41:30.278 22728 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-05-16 20:41:30.348 22728 DEBUG nova.openstack.common.processutils [-] Result was 2 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-05-16 20:41:30.348 22728 DEBUG nova.openstack.common.lockutils [-] Released file lock "iptables" at /run/lock/nova/nova-iptables lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:210
2014-05-16 20:41:30.349 22728 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "_apply" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:252
2014-05-16 20:41:30.349 22728 CRITICAL nova [-] ProcessExecutionError: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c
Exit code: 2
Stdout: ''
Stderr: "iptables-restore v1.4.21: host/network `::1' not found\nError occurred at line: 17\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n"
2014-05-16 20:41:30.349 22728 TRACE nova Traceback (most recent call last):
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/bin/nova-api", line 10, in <module>
2014-05-16 20:41:30.349 22728 TRACE nova sys.exit(main())
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/cmd/api.py", line 53, in main
2014-05-16 20:41:30.349 22728 TRACE nova server = service.WSGIService(api, use_ssl=should_use_ssl)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 329, in __init__
2014-05-16 20:41:30.349 22728 TRACE nova self.manager = self._get_manager()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 373, in _get_manager
2014-05-16 20:41:30.349 22728 TRACE nova return manager_class()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/api/manager.py", line 30, in __init__
2014-05-16 20:41:30.349 22728 TRACE nova self.network_driver.metadata_accept()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 660, in metadata_accept
2014-05-16 20:41:30.349 22728 TRACE nova iptables_manager.apply()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 428, in apply
2014-05-16 20:41:30.349 22728 TRACE nova self._apply()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 249, in inner
2014-05-16 20:41:30.349 22728 TRACE nova return f(*args, **kwargs)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 457, in _apply
2014-05-16 20:41:30.349 22728 TRACE nova attempts=5)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1205, in _execute
2014-05-16 20:41:30.349 22728 TRACE nova return utils.execute(*cmd, **kwargs)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 164, in execute
2014-05-16 20:41:30.349 22728 TRACE nova return processutils.execute(*cmd, **kwargs)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 193, in execute
2014-05-16 20:41:30.349 22728 TRACE nova cmd=' '.join(cmd))
2014-05-16 20:41:30.349 22728 TRACE nova ProcessExecutionError: Unexpected error while running command.
2014-05-16 20:41:30.349 22728 TRACE nova Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c
2014-05-16 20:41:30.349 22728 TRACE nova Exit code: 2
2014-05-16 20:41:30.349 22728 TRACE nova Stdout: ''
2014-05-16 20:41:30.349 22728 TRACE nova Stderr: "iptables-restore v1.4.21: host/network `::1' not found\nError occurred at line: 17\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n"
2014-05-16 20:41:30.349 22728 TRACE nova
2014-05-16 20:41:30.496 22854 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-05-16 20:41:30.501 22854 INFO nova.wsgi [-] Stopping WSGI server.
2014-05-16 20:41:30.498 22828 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-05-16 20:41:30.502 22828 INFO nova.wsgi [-] Stopping WSGI server.

My nova.conf file is following:

[DEFAULT]

use_ipv6 = True
my_ip = 2001:db8:0::1

rpc_backend = rabbit
rabbit_host = ctrl

# verbose = True
debug = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /run/lock/nova

s3_host = ctrl
ec2_host = ctrl
ec2_dmz_host = ctrl
cc_host = ctrl

ec2_url = http://ctrl:8773/services/Cloud
nova_url = http://ctrl:8774/v1.1/

api_paste_config = /etc/nova/api-paste.ini

root_helper = sudo nova-rootwrap /etc/nova/rootwrap.conf

resume_guests_state_on_host_boot = True

osapi_compute_listen = 2001:db8:0::1
osapi_compute_listen_port = 8774

# Scheduler
# scheduler_driver = nova.scheduler.simple.SimpleScheduler
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler

# Metadata stuff
metadata_listen = 2001:db8:0::1
metadata_host = 2001:db8:0::1
metadata_port = 8775
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = metasecret13

# Auth
use_deprecated_auth = false
auth_strategy = keystone
keystone_ec2_url = http://ctrl:5000/v2.0/ec2tokens

# Imaging service
glance_api_servers = ctrl:9292
image_service = nova.image.glance.GlanceImageService

# INSTANCE DISK BACKEND
libvirt_images_type = lvm
libvirt_images_volume_group = nova-local
libvirt_sparse_logical_volumes = false

# VNC configuration - Dual-Stacked - DISABLED, go for SPICE instead!
vnc_enabled = False
novnc_enabled = False
# novncproxy_base_url = http://ctrl:6080/vnc_auto.html
# novncproxy_host = ::
# novncproxy_port = 6080

# NETWORK - NEUTRON
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://ctrl:9696/
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = service_pass
neutron_admin_auth_url = http://ctrl:35357/v2.0/
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

# firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver

libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
# libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
# libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver

# Cinder
volume_api_class = nova.volume.cinder.API
osapi_volume_listen_port = 5900

# SPICE configuration - Dual-Stacked
[spice]
enabled = True
spicehtml5proxy_host = ::
html5proxy_base_url = http://ctrl:6082/spice_auto.html
keymap = en-us

[database]
connection = mysql://novaUser:novaPass@ctrl/nova

[keystone_authtoken]
auth_uri = http://ctrl:5000
auth_host = ctrl
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass

Tags: ipv6 nova-api
Ville Salmela (viossalm)
description: updated
Nadja Deininger (nadja)
Changed in nova:
assignee: nobody → Nadja Deininger (nadja)
Revision history for this message
Dan Albu (danalbu85) wrote :

i am using the same guide as Ville Salmela : https://gist.github.com/tmartinx/9177697

A workaround is to use 127.0.0.1 (aka localhost); for me that is the solution.
Now I have other errors regarding this guide, so maybe Salmela you can get in touch with me and let's figure out the problems :)

Revision history for this message
Ville Salmela (viossalm) wrote :

The problem seems to be that the command "sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c" is trying to load IPv6 rules through iptables, while it should be using ip6tables.
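
To illustrate the distinction (this is not nova code; the helper name is hypothetical), picking the restore tool by address family is straightforward in modern Python:

```python
import ipaddress

def restore_tool_for(addr):
    """Hypothetical helper: return the restore binary matching the
    address family of addr. IPv6 rules must go through ip6tables-restore;
    iptables-restore rejects them with "host/network `::1' not found"."""
    version = ipaddress.ip_address(addr).version
    return "ip6tables-restore" if version == 6 else "iptables-restore"
```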

Revision history for this message
Thiago Martins (martinx) wrote :

As soon as this bug is fixed, I'll kiss IPv4 goodbye! :-D

The OpenStack metadata service needs support for IPv6. I don't think that just replacing iptables with ip6tables will fix it.

Probably the metadata service will have to work within the "fe80::" network, and that will require more code refactoring (I believe; I'm not a coder)...

Cheers!

Sean Dague (sdague)
Changed in nova:
importance: Undecided → Wishlist
status: New → Confirmed
Revision history for this message
Alexandre Ferland (admiralobvious) wrote :

Just hit this bug too. The bug is in nova/network/linux_net.py lines 447-449.

        s = [('iptables', self.ipv4)]
        if CONF.use_ipv6:
            s += [('ip6tables', self.ipv6)]

Basically, the 'iptables' command is used by default, and when the use_ipv6 flag is True, 'ip6tables' is only appended to the s list, while I think it should actually replace the 'iptables' command.

I have thus fixed it by changing the line:
s += [('ip6tables', self.ipv6)]
to:
s = [('ip6tables', self.ipv6)]

Not sure if my fix has any other impact, so YMMV.
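
A gentler variant, sketched below as hypothetical code (not the actual nova patch), would keep both tables and instead route each rule to the set matching its address family, so an IPv6 metadata_host never lands in the IPv4 ruleset:

```python
import ipaddress

def add_metadata_accept(metadata_host, metadata_port, ipv4_rules, ipv6_rules):
    """Hypothetical sketch of a family-aware metadata_accept():
    append the ACCEPT rule to whichever ruleset matches metadata_host,
    instead of unconditionally appending to the IPv4 set."""
    rule = '-d %s -p tcp -m tcp --dport %s -j ACCEPT' % (
        metadata_host, metadata_port)
    family = ipaddress.ip_address(metadata_host).version
    target = ipv6_rules if family == 6 else ipv4_rules
    target.append(rule)
```

This keeps IPv4-only deployments untouched, which replacing the list outright would not.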

Revision history for this message
Kurt Sussman (kls-2) wrote :

Back in the original log, there is this line:

Stderr: "iptables-restore v1.4.21: host/network `::1' not found\nError occurred at line: 17\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n"

It took me a while when I hit pretty much the same error, but the real problem is that an error message was appended to an iptables command. The reason I've found every time is an unguarded config var that is missing but used anyway (that's where the 'not found' part comes from).

I'm new to OpenStack, so maybe this is a dumb question, but I have searched and didn't find an answer. Is there a style guide that suggests how to guard configuration variables? I'd love to fix at least the ones that have bitten me, but since this problem is so pervasive, it seems reasonable to have a policy for using config vars.

Maybe "at the point of use, if the var doesn't exist, log an error", or "on config load, verify that all required vars are set and fall back to a default (?) if not."
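
As a sketch of the first policy (illustrative only; `guarded_get` is a hypothetical helper, and in nova's case oslo.config already addresses this with declared option defaults):

```python
import logging

LOG = logging.getLogger(__name__)

def guarded_get(conf, name, default):
    """Hypothetical point-of-use guard: log and fall back to a default
    when a config var is missing, rather than interpolating an empty
    value into a command line."""
    value = conf.get(name)
    if value is None:
        LOG.error("config option %s is unset; using default %r", name, default)
        return default
    return value
```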

Revision history for this message
Lukas Bezdicka (social-b) wrote :

This one is more interesting:

The proposed solution will break the user-data URL. For example, there can be a setup with IPv6 for infra and IPv4 for guests, with only one IPv4 gateway. In that case we wouldn't have 169.254.169.254. But overall, with neutron, nova should not touch iptables at all :/

Revision history for this message
Markus Zoeller (markus_z) (mzoeller) wrote :

This bug report is pretty old. If you can reproduce it on a currently supported release [1], please reopen this report and add the steps to reproduce.

References:
[1] http://releases.openstack.org/

Changed in nova:
assignee: Nadja Deininger (nadja) → nobody
status: Confirmed → Opinion
importance: Wishlist → Low