[2.20-45~juno] PortInUse error seen during SVM launch

Bug #1463745 reported by Ganesha HV
This bug affects 2 people
Affects: Juniper Openstack (status tracked in Trunk)

  Milestone  Status         Importance  Assigned to
  R2.0       Fix Committed  Critical    Rudra Rugge
  R2.1       Fix Committed  Critical    Rudra Rugge
  R2.20      Fix Committed  Critical    Rudra Rugge
  Trunk      Fix Committed  Critical    Rudra Rugge

Bug Description

Topology :

Config Nodes : [u'nodea35', u'nodea34']
Control Nodes : [u'nodea35', u'nodec53']
Compute Nodes : [u'nodec54', u'nodec55', u'nodec56']
Openstack Node : nodea34
WebUI Node : nodec53
Analytics Nodes : [u'nodea35', u'nodec53']

Steps:

1]. Tried launching 3 Transparent Service instances.
2]. The following error was seen on the compute node (nodec54) and the SVM didn't spawn:

2015-06-09 23:28:22.204 1929 AUDIT nova.compute.manager [req-30e8c9c2-3f02-4d10-874c-29c772886f33 None] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Starting instance...
2015-06-09 23:28:22.220 1929 INFO nova.scheduler.client.report [-] Compute_service record updated for ('nodec54', 'nodec54')
2015-06-09 23:28:22.267 1929 WARNING nova.compute.resource_tracker [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Host field should not be set on the instance until resources have been claimed.
2015-06-09 23:28:22.267 1929 WARNING nova.compute.resource_tracker [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Node field should not be set on the instance until resources have been claimed.
2015-06-09 23:28:22.268 1929 AUDIT nova.compute.claims [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Attempting claim: memory 2048 MB, disk 40 GB
2015-06-09 23:28:22.268 1929 AUDIT nova.compute.claims [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Total memory: 31893 MB, used: 3584.00 MB
2015-06-09 23:28:22.268 1929 AUDIT nova.compute.claims [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] memory limit: 47839.50 MB, free: 44255.50 MB
2015-06-09 23:28:22.269 1929 AUDIT nova.compute.claims [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Total disk: 426 GB, used: 50.00 GB
2015-06-09 23:28:22.269 1929 AUDIT nova.compute.claims [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] disk limit not specified, defaulting to unlimited
2015-06-09 23:28:22.282 1929 AUDIT nova.compute.claims [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Claim successful
2015-06-09 23:28:22.397 1929 INFO nova.scheduler.client.report [-] Compute_service record updated for ('nodec54', 'nodec54')
2015-06-09 23:28:22.507 1929 INFO nova.scheduler.client.report [-] Compute_service record updated for ('nodec54', 'nodec54')
2015-06-09 23:28:22.576 1929 ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
2015-06-09 23:28:22.576 1929 TRACE nova.compute.manager Traceback (most recent call last):
2015-06-09 23:28:22.576 1929 TRACE nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1664, in _allocate_network_async
2015-06-09 23:28:22.576 1929 TRACE nova.compute.manager dhcp_options=dhcp_options)
2015-06-09 23:28:22.576 1929 TRACE nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 295, in allocate_for_instance
2015-06-09 23:28:22.576 1929 TRACE nova.compute.manager raise exception.PortInUse(port_id=request.port_id)
2015-06-09 23:28:22.576 1929 TRACE nova.compute.manager PortInUse: Port 98b05a08-01ad-4f1e-9588-182f02b7783b is still in use.
2015-06-09 23:28:22.576 1929 TRACE nova.compute.manager
2015-06-09 23:28:22.769 1929 INFO nova.virt.libvirt.driver [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Creating image
2015-06-09 23:28:23.203 1929 ERROR nova.compute.manager [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Instance failed to spawn
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Traceback (most recent call last):
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2264, in _build_resources
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] yield resources
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2134, in _build_and_run_instance
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] block_device_info=block_device_info)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2621, in spawn
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] write_to_disk=True)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4179, in _get_guest_xml
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] context)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3954, in _get_guest_config
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] for vif in network_info:
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 460, in __iter__
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] return self._sync_wrapper(fn, *args, **kwargs)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 451, in _sync_wrapper
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] self.wait()
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 489, in wait
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] self[:] = self._gt.wait()
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] return self._exit_event.wait()
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 120, in wait
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] current.throw(*self._exc)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] result = function(*args, **kwargs)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1664, in _allocate_network_async
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] dhcp_options=dhcp_options)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 295, in allocate_for_instance
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] raise exception.PortInUse(port_id=request.port_id)
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] PortInUse: Port 98b05a08-01ad-4f1e-9588-182f02b7783b is still in use.
2015-06-09 23:28:23.203 1929 TRACE nova.compute.manager [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23]
2015-06-09 23:28:23.204 1929 AUDIT nova.compute.manager [req-30e8c9c2-3f02-4d10-874c-29c772886f33 None] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Terminating instance
2015-06-09 23:28:23.386 1929 WARNING nova.virt.libvirt.driver [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] During wait destroy, instance disappeared.
2015-06-09 23:28:23.549 1929 INFO nova.virt.libvirt.driver [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Deleting instance files /var/lib/nova/instances/93bedf2d-0dfe-432e-8002-6958d2f61d23_del
2015-06-09 23:28:23.549 1929 INFO nova.virt.libvirt.driver [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Deletion of /var/lib/nova/instances/93bedf2d-0dfe-432e-8002-6958d2f61d23_del complete
2015-06-09 23:28:23.668 1929 INFO nova.scheduler.client.report [-] Compute_service record updated for ('nodec54', 'nodec54')
2015-06-09 23:28:23.722 1929 INFO nova.compute.manager [-] [instance: b9e158a1-782a-4af2-a2a6-3196ed81ce86] VM Started (Lifecycle Event)
2015-06-09 23:28:23.726 1929 INFO nova.virt.libvirt.driver [-] [instance: b9e158a1-782a-4af2-a2a6-3196ed81ce86] Instance spawned successfully.
2015-06-09 23:28:23.920 1929 INFO nova.compute.manager [-] [instance: b9e158a1-782a-4af2-a2a6-3196ed81ce86] During sync_power_state the instance has a pending task (spawning). Skip.
2015-06-09 23:28:49.557 1929 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-06-09 23:28:49.796 1929 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 31893, total allocated virtual ram (MB): 5632
2015-06-09 23:28:49.797 1929 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 336
2015-06-09 23:28:49.797 1929 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 4, total allocated vcpus: 3
2015-06-09 23:28:49.797 1929 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-06-09 23:28:49.814 1929 INFO nova.scheduler.client.report [-] Compute_service record updated for ('nodec54', 'nodec54')
2015-06-09 23:28:49.815 1929 INFO nova.compute.resource_tracker [-] Compute_service record updated for nodec54:nodec54
2015-06-09 23:28:50.237 1929 AUDIT nova.compute.manager [req-8aa925be-f817-4e8e-b82f-1bf9b3ff8f73 None] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Terminating instance
2015-06-09 23:28:50.240 1929 WARNING nova.virt.libvirt.driver [-] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] During wait destroy, instance disappeared.
2015-06-09 23:28:50.372 1929 INFO nova.virt.libvirt.driver [req-8aa925be-f817-4e8e-b82f-1bf9b3ff8f73 None] [instance: 93bedf2d-0dfe-432e-8002-6958d2f61d23] Deletion of /var/lib/nova/instances/93bedf2d-0dfe-432e-8002-6958d2f61d23_del complete
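The PortInUse failure above comes from Nova's check on pre-created Neutron ports: a port whose device_id is already set is considered owned by another instance (here, a previously launched SVM). A minimal sketch of that check, with an illustrative `check_port_free` helper and dict-based port model rather than Nova's actual code:

```python
# Hypothetical sketch of the check behind "PortInUse: Port ... is still in use."
# The port dict mirrors the Neutron port model; names are illustrative.
class PortInUse(Exception):
    def __init__(self, port_id):
        super(PortInUse, self).__init__("Port %s is still in use." % port_id)
        self.port_id = port_id

def check_port_free(port):
    """Reject a pre-created port whose device_id is already set, i.e. some
    other instance already owns it."""
    if port.get("device_id"):
        raise PortInUse(port_id=port["id"])

# The stale SVM port from the trace above:
stale_port = {"id": "98b05a08-01ad-4f1e-9588-182f02b7783b",
              "device_id": "93bedf2d-0dfe-432e-8002-6958d2f61d23"}
try:
    check_port_free(stale_port)
except PortInUse as e:
    print(e)  # → Port 98b05a08-01ad-4f1e-9588-182f02b7783b is still in use.
```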

3]. The API server still shows the VM, though it is deleted from nova:

root@nodea35:~# curl -u admin:contrail123 http://127.0.0.1:8095/virtual-machines | python -m json.tool
{
    "virtual-machines": [
        {
            "fq_name": [
                "05364816-1118-499d-afee-da93f86d02fb"
            ],
            "href": "http://127.0.0.1:8095/virtual-machine/05364816-1118-499d-afee-da93f86d02fb",
            "uuid": "05364816-1118-499d-afee-da93f86d02fb"
        },
        {
            "fq_name": [
                "93bedf2d-0dfe-432e-8002-6958d2f61d23"
            ],
            "href": "http://127.0.0.1:8095/virtual-machine/93bedf2d-0dfe-432e-8002-6958d2f61d23",
            "uuid": "93bedf2d-0dfe-432e-8002-6958d2f61d23"
        }
    ]
}

root@nodea35:~# neutron port-list

root@nodea35:~# neutron port-list --all_tenants

The setup is intact.

Logs:

/cs-shared/test_runs/nodea35/2015_06_09_14_53_40

Ganesha HV (ganeshahv)
tags: added: blocker regression
Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] master

Review in progress for https://review.opencontrail.org/11465
Submitter: Rudra Rugge (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R2.20

Review in progress for https://review.opencontrail.org/11466
Submitter: Rudra Rugge (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] master

Review in progress for https://review.opencontrail.org/11465
Submitter: Rudra Rugge (<email address hidden>)

information type: Proprietary → Public
Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/11466
Committed: http://github.com/Juniper/contrail-controller/commit/4073bd0ffc1a1fdcb2a46fc8bef1be59f6f57025
Submitter: Zuul
Branch: R2.20

commit 4073bd0ffc1a1fdcb2a46fc8bef1be59f6f57025
Author: Rudra Rugge <email address hidden>
Date: Wed Jun 10 11:10:22 2015 -0700

Reauthenticate nova token and cleanup service vm

Nova reauthenticate needs to be done when Unauthorized
exception is raised. This exception needs to be raised
to the caller.

Cleanup of service VM object from VNC database in case
of nova error conditions.

Change-Id: I6030b48441eee0a511f61839af913a5d63893d81
Closes-Bug: #1463745
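The retry pattern this commit describes can be sketched as follows: on an Unauthorized error, re-authenticate with nova once and retry the call; a second failure propagates to the caller. The wrapper class and callback names here are illustrative, not the actual svc_monitor code:

```python
# Hypothetical sketch of reauthenticate-on-Unauthorized; not the real API.
class Unauthorized(Exception):
    pass

class NovaClientWrapper(object):
    def __init__(self, auth_func):
        # auth_func fetches a fresh keystone token for nova calls.
        self._auth = auth_func
        self._token = auth_func()

    def call(self, op, *args, **kwargs):
        try:
            return op(self._token, *args, **kwargs)
        except Unauthorized:
            # Token expired: re-authenticate and retry exactly once;
            # a second Unauthorized is raised to the caller.
            self._token = self._auth()
            return op(self._token, *args, **kwargs)
```

A stale token then heals transparently: the first nova call raises Unauthorized, the wrapper fetches a new token, and the retried call succeeds.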

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote :

Reviewed: https://review.opencontrail.org/11465
Committed: http://github.com/Juniper/contrail-controller/commit/a6f101fd1bf92d9a38f936e95236ae82e44539d0
Submitter: Zuul
Branch: master

commit a6f101fd1bf92d9a38f936e95236ae82e44539d0
Author: Rudra Rugge <email address hidden>
Date: Wed Jun 10 11:10:22 2015 -0700

Reauthenticate nova token and cleanup service vm

Nova reauthenticate needs to be done when Unauthorized
exception is raised. This exception needs to be raised
to the caller.

Cleanup of service VM object from VNC database in case
of nova error conditions.

Change-Id: I6030b48441eee0a511f61839af913a5d63893d81
Closes-Bug: #1463745

Revision history for this message
Ganesha HV (ganeshahv) wrote :

Seeing the same issue in 2.20-55~icehouse

Setup
=====
Login to 10.84.34.6 (root/c0ntrail123)
Then login to 192.168.0.97 (single-node setup)

Traceback-logs
==============
2015-06-18 00:56:49.403 2141 AUDIT nova.compute.claims [req-19cf4c93-2ce2-4a80-8782-916627977825 6ccc4bf14a7642eb9db11a1e63e696f3 ef6912a479b44df3824914f8668ba70b] [instance: 9a52be1e-5fdc-4980-b773-a8b3d9ae676b] Claim successful
2015-06-18 00:56:49.841 2141 ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
2015-06-18 00:56:49.841 2141 TRACE nova.compute.manager Traceback (most recent call last):
2015-06-18 00:56:49.841 2141 TRACE nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1530, in _allocate_network_async
2015-06-18 00:56:49.841 2141 TRACE nova.compute.manager dhcp_options=dhcp_options)
2015-06-18 00:56:49.841 2141 TRACE nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 241, in allocate_for_instance
2015-06-18 00:56:49.841 2141 TRACE nova.compute.manager raise exception.PortInUse(port_id=port_id)
2015-06-18 00:56:49.841 2141 TRACE nova.compute.manager PortInUse: Port 3a47c037-77c2-46e8-9cb0-833fdfb2880d is still in use.
2015-06-18 00:56:49.841 2141 TRACE nova.compute.manager
2015-06-18 00:56:50.347 2141 INFO nova.virt.libvirt.driver [req-e6cee510-bf62-4492-8ab7-473a4535dcab 0688e1f4f3a04ed7a0d9e26d1057391e ef6912a479b44df3824914f8668ba70b] [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] Creating image
2015-06-18 00:56:50.500 2141 INFO nova.virt.libvirt.driver [req-19cf4c93-2ce2-4a80-8782-916627977825 6ccc4bf14a7642eb9db11a1e63e696f3 ef6912a479b44df3824914f8668ba70b] [instance: 9a52be1e-5fdc-4980-b773-a8b3d9ae676b] Creating image
2015-06-18 00:56:50.616 2141 ERROR nova.compute.manager [req-e6cee510-bf62-4492-8ab7-473a4535dcab 0688e1f4f3a04ed7a0d9e26d1057391e ef6912a479b44df3824914f8668ba70b] [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] Instance failed to spawn
2015-06-18 00:56:50.616 2141 TRACE nova.compute.manager [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] Traceback (most recent call last):
2015-06-18 00:56:50.616 2141 TRACE nova.compute.manager [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1740, in _spawn
2015-06-18 00:56:50.616 2141 TRACE nova.compute.manager [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] block_device_info)
2015-06-18 00:56:50.616 2141 TRACE nova.compute.manager [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2290, in spawn
2015-06-18 00:56:50.616 2141 TRACE nova.compute.manager [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] admin_pass=admin_password)
2015-06-18 00:56:50.616 2141 TRACE nova.compute.manager [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2760, in _create_image
2015-06-18 00:56:50.616 2141 TRACE nova.compute.manager [instance: 555f0d28-d55d-4d57-9466-1563f5a0aafb] instance, network_info, admin_pass, files, suffix)
2015-06-18 00:56:50.616 2141 TRACE nova.com...


tags: added: releasenote
tags: removed: blocker
Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] master

Review in progress for https://review.opencontrail.org/11977
Submitter: Rudra Rugge (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R2.20

Review in progress for https://review.opencontrail.org/11978
Submitter: Rudra Rugge (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/11978
Committed: http://github.com/Juniper/contrail-controller/commit/25b696df08757b71e75288d37dfe78cae5abd571
Submitter: Zuul
Branch: R2.20

commit 25b696df08757b71e75288d37dfe78cae5abd571
Author: Rudra Rugge <email address hidden>
Date: Tue Jun 23 12:26:03 2015 -0700

Port in use error caused by gevent threads

If the first launch of a service VM coincides with the
timer for service instance check then there is a possibility
that the same port might be used to launch service VMs.
Added mutual exclusion to ensure that the timer check only
happens after the first launch.

Change-Id: Ieef15683f20e08cd1df221ff17a0c9fed4cf6b89
Closes-Bug: #1463745
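The race this commit fixes is two green threads (the first SVM launch and the periodic service-instance check) picking the same free port. A minimal sketch of the mutual exclusion it describes, using the stdlib threading.Lock (gevent's lock primitives such as gevent.lock.Semaphore expose the same acquire/release interface); the launcher class and method names are illustrative:

```python
import threading

# Hypothetical sketch: both the first launch and the timer-driven check
# must take the same lock before claiming a port, so they can never
# both claim the same one.
class ServiceVMLauncher(object):
    def __init__(self, free_ports):
        self._free_ports = list(free_ports)
        self._lock = threading.Lock()

    def _take_port(self):
        # Pop under the lock, so the free list is read and updated atomically.
        return self._free_ports.pop(0) if self._free_ports else None

    def launch(self):
        with self._lock:
            return self._take_port()

    def timer_check(self):
        # The periodic check serializes against launch() on the same lock,
        # instead of racing it for the head of the free-port list.
        with self._lock:
            return self._take_port()
```

Without the shared lock, both paths could read the same head of the free-port list before either removes it, producing exactly the PortInUse error reported here.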

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote :

Reviewed: https://review.opencontrail.org/11977
Committed: http://github.com/Juniper/contrail-controller/commit/c65a703f3e677accff3ad4f44444581d064b9fc5
Submitter: Zuul
Branch: master

commit c65a703f3e677accff3ad4f44444581d064b9fc5
Author: Rudra Rugge <email address hidden>
Date: Tue Jun 23 12:26:03 2015 -0700

Port in use error caused by gevent threads

If the first launch of a service VM coincides with the
timer for service instance check then there is a possibility
that the same port might be used to launch service VMs.
Added mutual exclusion to ensure that the timer check only
happens after the first launch.

Change-Id: Ieef15683f20e08cd1df221ff17a0c9fed4cf6b89
Closes-Bug: #1463745

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R2.0

Review in progress for https://review.opencontrail.org/12489
Submitter: Rudra Rugge (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R2.1

Review in progress for https://review.opencontrail.org/12490
Submitter: Rudra Rugge (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/12489
Committed: http://github.com/Juniper/contrail-controller/commit/015f061e70315a7b178f52980bcc2ea1518e0d56
Submitter: Zuul
Branch: R2.0

commit 015f061e70315a7b178f52980bcc2ea1518e0d56
Author: Rudra Rugge <email address hidden>
Date: Mon Jul 20 10:26:09 2015 -0700

Reauthenticate nova token

Nova reauthenticate needs to be done when Unauthorized
exception is raised. This exception needs to be raised
to the caller.

Change-Id: Ie7f94e53c8b22b5880b15b2856781a49b27bdd5f
Closes-Bug: #1463745

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote :

Reviewed: https://review.opencontrail.org/12490
Committed: http://github.com/Juniper/contrail-controller/commit/6e885f996e02c39e4333b00aac09a264c5986ffb
Submitter: Zuul
Branch: R2.1

commit 6e885f996e02c39e4333b00aac09a264c5986ffb
Author: Rudra Rugge <email address hidden>
Date: Mon Jul 20 10:26:09 2015 -0700

Reauthenticate nova token

Nova reauthenticate needs to be done when Unauthorized
exception is raised. This exception needs to be raised
to the caller.

Change-Id: Ie7f94e53c8b22b5880b15b2856781a49b27bdd5f
Closes-Bug: #1463745
