nova network-vif-plugged event fails with 400 from n-api

Bug #1298640 reported by Matt Riedemann
This bug affects 18 people
Affects: OpenStack Compute (nova)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Running Tempest in our internal CI, we hit 3 test failures out of 1,971 tests; one was:

tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_snapshot_create_with_volume_in_use

This is running x86_64 RHEL 6.5 with the nova libvirt driver and neutron openvswitch.

We have neutron.conf configured for nova events:

[root@rhel62 ~]# cat /etc/neutron/neutron.conf | grep nova
# being used in conjunction with nova security groups
# allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception
nova_url = http://192.168.4.7:8774/v2
nova_region_name = RegionOne
nova_admin_username = nova
nova_admin_tenant_id = 11b12b24a2bd4672b6332c41b654a438
nova_admin_password = nova
nova_admin_auth_url = http://192.168.4.7:5000/v2.0/
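
As a quick sanity check of that URL (just a sketch, not from the logs): a GET on the compute API root lists the versions nova exposes, so whatever path nova_url points at should correspond to one of them. nova typically answers this with 300 Multiple Choices, which urllib2 surfaces as an HTTPError:

import json
import urllib2

# nova answers GET / on the compute endpoint with 300 Multiple Choices
# and a JSON document listing the supported API versions
try:
    resp = urllib2.urlopen('http://192.168.4.7:8774/')
except urllib2.HTTPError as err:
    resp = err
print json.dumps(json.loads(resp.read()), indent=2)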

For one failure I'm seeing this in the neutron server log:

2014-03-16 05:57:36.619 8462 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'7ecd451e-2f14-4155-a237-3231f3770f1d', 'name': 'network-vif-plugged', 'server_uuid': u'2c368278-e9f4-4458-abdc-b158c5a079ea'}]
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova Traceback (most recent call last):
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/neutron/notifiers/nova.py", line 186, in send_events
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova batched_events)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/v1_1/contrib/server_external_events.py", line 39, in create
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova return_raw=True)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 152, in _create
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova _resp, body = self.api.client.post(url, body=body)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 286, in post
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova return self._cs_request(url, 'POST', **kwargs)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 260, in _cs_request
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova **kwargs)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 242, in _time_request
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova resp, body = self.request(url, method, **kwargs)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 236, in request
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova raise exceptions.from_response(resp, body, url, method)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova BadRequest: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 400)
2014-03-16 05:57:36.619 8462 TRACE neutron.notifiers.nova

I correlated that server uuid to the nova-api log error here:

2014-03-16 05:57:27.770 12207 INFO nova.osapi_compute.wsgi.server [req-56dabfa1-9abe-4301-91bc-8093ded6c29c cc61e644b9224dc6918241663551996b 2a33892b41b34180ab23151dfe610990] 192.168.4.9 "GET /v2/2a33892b41b34180ab23151dfe610990/servers/2c368278-e9f4-4458-abdc-b158c5a079ea HTTP/1.1" status: 200 len: 2211 time: 0.1646261
2014-03-16 05:57:36.615 12204 ERROR nova.api.openstack.wsgi [-] Exception handling resource: multi() got an unexpected keyword argument 'body'
2014-03-16 05:57:36.615 12204 TRACE nova.api.openstack.wsgi Traceback (most recent call last):
2014-03-16 05:57:36.615 12204 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 983, in _process_stack
2014-03-16 05:57:36.615 12204 TRACE nova.api.openstack.wsgi action_result = self.dispatch(meth, request, action_args)
2014-03-16 05:57:36.615 12204 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 1070, in dispatch
2014-03-16 05:57:36.615 12204 TRACE nova.api.openstack.wsgi return method(req=request, **action_args)
2014-03-16 05:57:36.615 12204 TRACE nova.api.openstack.wsgi TypeError: multi() got an unexpected keyword argument 'body'
2014-03-16 05:57:36.615 12204 TRACE nova.api.openstack.wsgi
2014-03-16 05:57:36.616 12204 INFO nova.osapi_compute.wsgi.server [-] 127.0.0.1 "POST /None/os-server-external-events HTTP/1.1" status: 400 len: 274 time: 0.0410700

Talking with Dan Smith in IRC, it sounds like maybe neutron isn't putting the nova tenant ID into the request for some reason:

(4:15:57 PM) dansmith: mriedem: I think neutron sometimes gets the nova url from the context you pass it, so maybe it's related to that
...
(4:17:20 PM) dansmith: mriedem: right, but arosen did some work to have neutron determine the proper nova endpoint to call back to based on the context, in the case you've got two novas sharing a neutron
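
For reference, the notification neutron sends is just a POST to nova's os-server-external-events API, with the tenant ID in the URL path — which is why a missing tenant shows up as the literal "POST /None/os-server-external-events" above. A rough standalone reproduction with the standard library (the token is a placeholder; the tenant and event values are taken from the logs above):

import json
import urllib2

TOKEN = 'replace-with-a-valid-admin-token'   # placeholder
TENANT = '11b12b24a2bd4672b6332c41b654a438'  # nova_admin_tenant_id from neutron.conf

body = json.dumps({'events': [{'name': 'network-vif-plugged',
                               'server_uuid': '2c368278-e9f4-4458-abdc-b158c5a079ea',
                               'tag': '7ecd451e-2f14-4155-a237-3231f3770f1d',
                               'status': 'completed'}]})
req = urllib2.Request(
    'http://192.168.4.7:8774/v2/%s/os-server-external-events' % TENANT,
    data=body,
    headers={'Content-Type': 'application/json', 'X-Auth-Token': TOKEN})
print urllib2.urlopen(req).read()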

This is intermittent; I do see the neutron server error showing up in community logstash, but at only a 7% failure rate:

http://goo.gl/up9utR

I'm not seeing the same error in the nova-api logs in logstash, but that search only went back the last 48 hours.

Tags: network
Revision history for this message
Matt Riedemann (mriedem) wrote :

Oh, and this is nova/neutron code from master as of 3/26; novaclient is 2.17 and neutronclient is 2.3.4.

tags: added: network
Revision history for this message
Michael Kazakov (gnomino) wrote :

I have the same bug, and instances fail to run:
2014-04-07 14:16:08.829 28157 WARNING nova.virt.libvirt.driver [req-e2fa4e58-ccf8-452b-85df-8b3cf5843fa6 15ae9f4e6a6e4e4cbd51805501d13a31 e9dfbc896c244a29946f3c0171377548] Timeout waiting for vif plugging callback for instance 7de2e921-d367-4c35-8154-d4dbcc617da5
2014-04-07 14:16:09.817 28157 INFO nova.virt.libvirt.driver [req-e2fa4e58-ccf8-452b-85df-8b3cf5843fa6 15ae9f4e6a6e4e4cbd51805501d13a31 e9dfbc896c244a29946f3c0171377548] [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] Deleting instance files /var/lib/nova/instances/7de2e921-d367-4c35-8154-d4dbcc617da5
2014-04-07 14:16:09.818 28157 INFO nova.virt.libvirt.driver [req-e2fa4e58-ccf8-452b-85df-8b3cf5843fa6 15ae9f4e6a6e4e4cbd51805501d13a31 e9dfbc896c244a29946f3c0171377548] [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] Deletion of /var/lib/nova/instances/7de2e921-d367-4c35-8154-d4dbcc617da5 complete
2014-04-07 14:16:09.912 28157 ERROR nova.compute.manager [req-e2fa4e58-ccf8-452b-85df-8b3cf5843fa6 15ae9f4e6a6e4e4cbd51805501d13a31 e9dfbc896c244a29946f3c0171377548] [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] Instance failed to spawn
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] Traceback (most recent call last):
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1718, in _spawn
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] block_device_info)
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2251, in spawn
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] block_device_info)
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3654, in _create_domain_and_network
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] raise exception.VirtualInterfaceCreateException()
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5] VirtualInterfaceCreateException: Virtual Interface creation failed
2014-04-07 14:16:09.912 28157 TRACE nova.compute.manager [instance: 7de2e921-d367-4c35-8154-d4dbcc617da5]

Revision history for this message
Michael Kazakov (gnomino) wrote :

DEBUG nova.api.openstack.wsgi [-] Action: 'multi', body: {"events": [{"status": "completed", "tag": "b79acdb0-711f-4df6-8d0b-c75e5f2d66da", "name": "network-vif-plugged", "server_uuid": "c7a6f6bf-cfe5-476e-beee-9c63c380f7dc"}]
......

TypeError: multi() got an unexpected keyword argument 'body'

Revision history for this message
Matt Riedemann (mriedem) wrote :

@gnomino - are you running with the latest nova code on milestone-proposed? I haven't seen this in our internal CI since it was reported.

Revision history for this message
Michael Kazakov (gnomino) wrote :

I am running the latest nova code in the Ubuntu 14.04 repo:

ii nova-api 1:2014.1~rc1-0ubuntu1 all OpenStack Compute - API frontend
ii nova-cert 1:2014.1~rc1-0ubuntu1 all OpenStack Compute - certificate management
ii nova-common 1:2014.1~rc1-0ubuntu1 all OpenStack Compute - common files
ii nova-conductor 1:2014.1~rc1-0ubuntu1 all OpenStack Compute - conductor service
ii nova-consoleauth 1:2014.1~rc1-0ubuntu1 all OpenStack Compute - Console Authenticator
ii nova-novncproxy 1:2014.1~rc1-0ubuntu1 all OpenStack Compute - NoVNC proxy
ii nova-scheduler 1:2014.1~rc1-0ubuntu1 all OpenStack Compute - virtual machine scheduler
ii python-nova 1:2014.1~rc1-0ubuntu1 all OpenStack Compute Python libraries
ii python-novaclient 1:2.17.0-0ubuntu1 all client library for OpenStack Compute API

Revision history for this message
Hirofumi Ichihara (ichihara-hirofumi) wrote :

I have the same bug too. I confirmed it with nova and neutron RC1.

Revision history for this message
Michael Kazakov (gnomino) wrote :

Still present after updating nova to version 1:2014.1~rc2-0ubuntu1.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Is everyone hitting this intermittently or are you able to reproduce easily?

Revision history for this message
Matt Riedemann (mriedem) wrote :

Also, what version of novaclient/neutronclient/keystoneclient is everyone using?

Revision history for this message
Michael Kazakov (gnomino) wrote :

On every instance creation; creation fails with an error.

Revision history for this message
Michael Kazakov (gnomino) wrote :

Ubuntu 14.04
python-novaclient 1:2.17.0-0ubuntu1
python-neutronclient 1:2.3.4-0ubuntu1
python-keystoneclient 1:0.7.1-ubuntu1

Revision history for this message
Matt Riedemann (mriedem) wrote :

Michael, just to confirm, you're getting this on every boot request?

"BadRequest: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 400)"

Are you using devstack?

Revision history for this message
Michael Kazakov (gnomino) wrote :

Yes, on every boot request. I don't use devstack.

Revision history for this message
Arun Thulasi (arunuke) wrote :

I am running into the same issue as well (RDO/Fedora 20/Icehouse). I am also unable to delete older instances that have been marked as "ERROR" (i.e., nova delete <instance name> does not remove them).

Revision history for this message
Matt Riedemann (mriedem) wrote :

Arun, you're looking for bug 1299139 for the problem with not being able to delete instances in error state. You have to restart the compute service to clean those up unfortunately.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Michael/Arun, what does this look like?

cat /etc/neutron/neutron.conf | grep nova

Need to make sure you have the nova config options set correctly in neutron.conf for callbacks.
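
It's also worth verifying that the nova admin credentials in that file can actually get a token from nova_admin_auth_url. A quick sketch against keystone's v2.0 tokens API (substitute your own values; these are placeholders):

import json
import urllib2

body = json.dumps({'auth': {'passwordCredentials': {'username': 'nova',
                                                    'password': 'secret'},
                            'tenantId': 'your-nova-admin-tenant-id'}})
req = urllib2.Request('http://controller:5000/v2.0/tokens',
                      data=body,
                      headers={'Content-Type': 'application/json'})
# a successful response includes a token under access.token.id
print json.loads(urllib2.urlopen(req).read())['access']['token']['id']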

Revision history for this message
Arun Thulasi (arunuke) wrote :

Matt's message #15 just needed an "elementary, my dear Watson" to be complete. A nova-compute restart fixed the issue. I will rebuild the environment and try to get an answer for message #16 later tonight.

From what I remember, I have not specified any nova-specific options in my neutron.conf file on the cloud or network controller nodes, since I am not using the ML2 plugin. The custom-set values are for neutron plugins, keystone credentials, and databases, and that could very well be the issue.

Do I need to set nova callbacks even if I am not using ML2, but using only openvswitch?

Revision history for this message
August Simonelli (augustsimonelli) wrote :

I'm seeing this on every boot as well: Ubuntu 14.04, same client versions as Michael above. None of my neutron.conf's (i.e., compute, controller, or network) have any uncommented nova options. I see things in there like

# nova_url = http://127.0.0.1:8774

which, if that's the default, could be "bad", right?

What nova options should be enabled, and on which types of hosts?

Revision history for this message
Matt Riedemann (mriedem) wrote :

This is going to be a problem whether you're using ML2 or not; this is something that needs to be configured in neutron.conf for boots to work with nova if you're using neutron at all. See the upstream devstack patch that configures this:

https://review.openstack.org/#/c/78044/

If you're using RDO, you need to get a bug report opened against RDO. I know there was an upstream patch in the chef cookbooks on stackforge; I'd have to dig that up now. But basically, anyone writing their own install scripts is going to have to adjust for this change, like devstack did above.

Revision history for this message
Arun Thulasi (arunuke) wrote :

Enabling "debug" for neutron on the cloud controller node shows an attempt to communicate with 127.0.0.1 for notifications if it's not specifically set to the cloud controller's IP as the newer version of the guide recommends. However, I was under the assumption that it needs to be set only if neutron.conf sets nova notification changes to "True". I explicitly set both notifications to False on all my nodes' neutron.conf files, and yet I hit a failure when I try to boot with no new notifications on server.log for neutron on the cloud controller.

As a next step, I will try and set all the nova parameters within neutron.conf on all the nodes to see if that helps.

Revision history for this message
August Simonelli (augustsimonelli) wrote :

The install guide only documents these steps for the ML2 plugin. That might be the source of some confusion. I'm testing the steps in that section for OVS, and if all goes well I will log a bug / fix for the guides.

Revision history for this message
August Simonelli (augustsimonelli) wrote :

When setting this on the controller:
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774
nova_admin_username = nova
nova_admin_tenant_id = d7544deb4812454ca634ecee078b4073
nova_admin_password = novapass
nova_admin_auth_url = http://controller:35357/v2.0

I still see errors. Should it be set on the controller, network, and compute nodes?

Revision history for this message
Michael Kazakov (gnomino) wrote :

My nova settings in neutron.conf are:

notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.215.1:8774
nova_region_name = RegionOne
nova_admin_username = nova
nova_admin_tenant_id = e9dfbc896c244a29946f3c0171377548
nova_admin_password = openstack
nova_admin_auth_url = http://192.168.215.1:5000/v2.0/

I am using OVS via the ML2 core plugin. I have tried the OVS neutron core plugin but received the same error.

Revision history for this message
August Simonelli (augustsimonelli) wrote :

I've set all the nova values in neutron.conf on all node types as:

notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = d7544deb4812454ca634ecee078b4073
nova_admin_password = novapass
nova_admin_auth_url = http://controller:35357/v2.0

using the "services" tenant as per the install guide notes for ml2.
I'm using OVS NOT ml2.

It still fails with:

2014-04-11 14:39:09.258 928 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'da8f1f36-df7a-4d78-aa14-4dbfd263a0b2', 'name': 'network-vif-unplugged', 'server_uuid': u'652de88a-fc31-4071-9ff8-f6213efd1225'}]
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova Traceback (most recent call last):
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py", line 186, in send_events
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova batched_events)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/contrib/server_external_events.py", line 39, in create
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova return_raw=True)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 152, in _create
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova _resp, body = self.api.client.post(url, body=body)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 286, in post
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova return self._cs_request(url, 'POST', **kwargs)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 260, in _cs_request
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova **kwargs)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 242, in _time_request
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova resp, body = self.request(url, method, **kwargs)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 236, in request
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova raise exceptions.from_response(resp, body, url, method)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova BadRequest: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 400)
2014-04-11 14:39:09.258 928 TRACE neutron.notifiers.nova

Revision history for this message
August Simonelli (augustsimonelli) wrote :

Sorry - the corresponding API log output is:

2014-04-11 14:39:09.256 1391 ERROR nova.api.openstack.wsgi [-] Exception handling resource: multi() got an unexpected keyword argument 'body'
2014-04-11 14:39:09.256 1391 TRACE nova.api.openstack.wsgi Traceback (most recent call last):
2014-04-11 14:39:09.256 1391 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 983, in _process_stack
2014-04-11 14:39:09.256 1391 TRACE nova.api.openstack.wsgi action_result = self.dispatch(meth, request, action_args)
2014-04-11 14:39:09.256 1391 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 1070, in dispatch
2014-04-11 14:39:09.256 1391 TRACE nova.api.openstack.wsgi return method(req=request, **action_args)
2014-04-11 14:39:09.256 1391 TRACE nova.api.openstack.wsgi TypeError: multi() got an unexpected keyword argument 'body'
2014-04-11 14:39:09.256 1391 TRACE nova.api.openstack.wsgi

Revision history for this message
Henry Gessau (gessau) wrote :

Check if specifying the API version works.
See https://review.openstack.org/86646
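
That would also line up with the traceback: without a version prefix the POST falls through to nova's version-listing application, and its 'multi' action (the one visible in the debug log above) takes no body argument, hence the TypeError and the 400. The failure itself is ordinary Python; a toy stand-in (multi() here is illustrative, not nova's actual code):

def multi(req):
    # stand-in for the versions controller action; note it accepts no 'body'
    return 'version choices'

# the wsgi dispatcher passes the deserialized body as a keyword argument:
multi(req='<request>', body={'events': []})
# raises: TypeError: multi() got an unexpected keyword argument 'body'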

Revision history for this message
Thiago Martins (martinx) wrote :

Henry,

Now it is working!

I replaced this line:

nova_url = http://dsuaa-1.quilombas.com:8774

With this one:

nova_url = http://dsuaa-1.quilombas.com:8774/v2

Now it works!

First Instance created using IceHouse! YAY! :-P

I'm using ML2 with Flat network, Ubuntu 14.04 upgraded today.

Revision history for this message
Michael Kazakov (gnomino) wrote :

After changing nova_url to http://192.168.215.1:8774/v2/ the error disappeared, but instances still fail to run with the same error:

VirtualInterfaceCreateException: Virtual Interface creation failed

I think this problem is solved; the problem with running instances is rooted in something else.

Revision history for this message
Michael Kazakov (gnomino) wrote :

Instances run well, many thanks!

Revision history for this message
August Simonelli (augustsimonelli) wrote :

Me too. Things look better with /v2/ and with me properly setting the nova values in the neutron configs on the controller.
thanks for the help!

Revision history for this message
Matt Riedemann (mriedem) wrote :

If you're using puppet, track bug 1306694.

Revision history for this message
Arun Thulasi (arunuke) wrote :

After adding the nova notification changes to all the neutron.conf files (controller, network, and compute), I am able to successfully boot instances as well - thanks to all the folks who helped.

I am still having some connectivity issues where DHCP requests from the compute nodes are not reaching the network node. I see my bootp requests going out of br-eth1 on the compute node, but I don't see them showing up on br-eth1 on the network node. I will keep this list posted on what I find out.

Revision history for this message
Matt Riedemann (mriedem) wrote :

For other network issues, make sure you're not hitting bug 1284718 or bug 1302796.

Revision history for this message
Arun Thulasi (arunuke) wrote :

Thanks for the pointers, Matt. I am afraid I am not seeing either of these issues. The trouble I am having is that packets are not going out of the compute host. I see the packets on br-eth1 of the compute node, but not on br-eth1 of the network node.

When I try to ping from a VM on compute host "A" to another VM on compute host "B", I see the ICMP packets on br-eth1 for host "A", but not on host "B".

I am still using VLAN (not GRE) and OpenVSwitch (not ML2) on Icehouse/Fedora 20. The guide clearly mentions that ML2 is tested while OpenVSwitch is not, so I am not sure if it's related to that.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Arun, that sounds like a neutron issue; there was a change related to vif and firewalls on icehouse milestone-proposed, so maybe you're having issues there? I thought neutron was tested with the openvswitch ML2 mechanism driver in the gate?

Revision history for this message
Matt Riedemann (mriedem) wrote :

Arun, bug 1297469 is the vif and firewall related issue I was thinking of.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.openstack.org/86646
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=c09a14089a5ca7cd3093895ee0248876499a6d06
Submitter: Jenkins
Branch: master

commit c09a14089a5ca7cd3093895ee0248876499a6d06
Author: Dan Prince <email address hidden>
Date: Thu Apr 10 12:40:13 2014 -0400

    Make default nova_url use a version

    The default nova_url for neutron is missing an API
    version number. This can cause requests to fail
    because the Nova /versions API cannot respond
    to Neutron notification requests.

    It seems reasonable for the default value to
    at least have a chance at being correct so
    this patch upgrades the default Nova API url to
    use the Nova 'v2' API.

    Related-bug: #1298640
    Change-Id: Ib1449de84fbc01fb704ebfe4a016ac8f4932be96

Revision history for this message
Arun Thulasi (arunuke) wrote :

Matt - thanks again. A quick update from me: I was able to resolve one issue when I noticed my physical interface (eth1, on top of which I have my bridge br-eth1) was marked as down. I was able to address that, and now I see packets moving between hosts.
On system boot, DHCP is able to assign an address, and it is configured as expected on the compute VMs. Checking the interface status using ovs-ofctl helped me find that the link was down.

However, when I try to ping the address from my network node, it does not get past int-br-eth1 on the compute host. I will keep this list posted on what I find.

Revision history for this message
Openstack Gerrit (openstack-gerrit) wrote : Related fix proposed to neutron (stable/icehouse)

Related fix proposed to branch: stable/icehouse
Review: https://review.openstack.org/92387

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/icehouse)

Reviewed: https://review.openstack.org/92387
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d1ab56d8c4e19c2b4b4bf643386136b10ef5bcd7
Submitter: Jenkins
Branch: stable/icehouse

commit d1ab56d8c4e19c2b4b4bf643386136b10ef5bcd7
Author: Dan Prince <email address hidden>
Date: Thu Apr 10 12:40:13 2014 -0400

    Make default nova_url use a version

    The default nova_url for neutron is missing an API
    version number. This can cause requests to fail
    because the Nova /versions API cannot respond
    to Neutron notification requests.

    It seems reasonable for the default value to
    at least have a chance at being correct so
    this patch upgrades the default Nova API url to
    use the Nova 'v2' API.

    Related-bug: #1298640
    Change-Id: Ib1449de84fbc01fb704ebfe4a016ac8f4932be96
    (cherry picked from commit c09a14089a5ca7cd3093895ee0248876499a6d06)

tags: added: in-stable-icehouse
Alan Pevec (apevec)
tags: removed: in-stable-icehouse
Revision history for this message
Matt Riedemann (mriedem) wrote :

Looks like bug 1306727 has a fix for the "TypeError: multi() got an unexpected keyword argument 'body'" issue, so between that and the config issues for neutron/nova events, I think we can probably just close this out as fixed by other patches.

no longer affects: neutron
Revision history for this message
hugo deng (dengqm) wrote :

Hi everybody, when I use Icehouse I get the same error:
HTTP/1.1 400 Bad Request; content: [{"badRequest": {"message": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "code": 400}}]

And my configuration is like this:
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 9ef1294ad7f049218276b90cefa0efb2
nova_admin_password = 111111
nova_admin_auth_url = http://controller:35357/v2.0

the "nova_url" config is right,why got this error?and what i need do?

Revision history for this message
Baohua Yang (yangbaohua) wrote :

The same error happens with devstack + stable/juno branch.

The nova_url is correct, but the VM cannot spawn successfully.
