Getting "Block Device Mapping is invalid" error while creating VM through horizon in newton

Bug #1708647 reported by musharani
Affects            Status         Importance  Assigned to  Milestone
Juniper Openstack: status tracked in Trunk
  R4.0             Fix Committed  Medium      musharani
  Trunk            Fix Committed  Medium      musharani

Bug Description

While creating a VM through Horizon, the build fails with the error "Error: Build of instance 2dec7bc3-0038-4ac0-8aa9-660b7986ab53 aborted: Block Device Mapping is Invalid".

The issue is seen only when VMs are created through Horizon.

VMs are created without any issues through the CLI.

BUILD -> 4.0-27 Newton build on a single-node setup.

vm-test below was created through Horizon; vm_cli was created through the CLI.

root@nodek12:~# nova list
+--------------------------------------+---------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+------------------+
| ab3278c3-c16a-4958-a377-0f5a62308f6c | host1 | ACTIVE | - | Running | test_vn=10.1.1.4 |
| aad17e67-48d0-422e-a4f7-23758eacea04 | vm-test | ERROR | - | NOSTATE | |
| 14e89aa9-accf-4f6f-8f2a-5a43e76272f9 | vm_cli | ACTIVE | - | Running | test_vn=10.1.1.3 |
+--------------------------------------+---------+--------+------------+-------------+------------------+
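For comparison, vm_cli above was booted from the CLI without requesting a new volume. A command along the following lines succeeds (the flavor and image names are assumptions, not taken from this setup):

    nova boot --flavor m1.small --image cirros \
        --nic net-id=<test_vn-uuid> vm_cli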

nova-compute log:
-----------------
2017-08-03 17:46:42.649 4764 WARNING nova.compute.manager [req-aba9bcbb-edc6-4a8b-8d9c-c09464ecd3d2 5fa4df51d9f44642a31c981426057141 e282a793402c4b44a31f37de0532d11c - - -] Volume id: 3ae85a4d-41f9-4e31-84f4-c3f3d80cb80e finished being created but its status is error.
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [req-aba9bcbb-edc6-4a8b-8d9c-c09464ecd3d2 5fa4df51d9f44642a31c981426057141 e282a793402c4b44a31f37de0532d11c - - -] [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] Instance failed block device setup
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] Traceback (most recent call last):
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1582, in _prep_block_device
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] wait_func=self._await_block_device_map_created)
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 514, in attach_block_devices
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] map(_log_and_attach, block_device_mapping)
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 512, in _log_and_attach
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] bdm.attach(*attach_args, **attach_kwargs)
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 404, in attach
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] self._call_wait_func(context, wait_func, volume_api, vol['id'])
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 363, in _call_wait_func
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] {'volume_id': volume_id, 'exc': exc})
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] self.force_reraise()
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] six.reraise(self.type_, self.value, self.tb)
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 353, in _call_wait_func
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] wait_func(context, volume_id)
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1254, in _await_block_device_map_created
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] volume_status=volume_status)
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] VolumeNotCreated: Volume 3ae85a4d-41f9-4e31-84f4-c3f3d80cb80e did not finish being created even after we waited 3 seconds or 2 attempts. And its status is error.
2017-08-03 17:46:42.650 4764 ERROR nova.compute.manager [instance: 480d0a35-5843-498e-a418-9e08a47e0eef]
2017-08-03 17:46:42.671 4764 DEBUG nova.compute.claims [req-aba9bcbb-edc6-4a8b-8d9c-c09464ecd3d2 5fa4df51d9f44642a31c981426057141 e282a793402c4b44a31f37de0532d11c - - -] [instance: 480d0a35-5843-498e-a418-9e08a47e0eef] Aborting claim: [Claim: 512 MB memory, 1 GB disk] abort /usr/lib/python2.7/dist-packages/nova/compute/claims.py:123
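The failed volume can be inspected directly to confirm it went to the error state before Nova gave up waiting (standard cinderclient commands):

    # 'status' should read 'error'; the cause will be in the cinder logs
    cinder show 3ae85a4d-41f9-4e31-84f4-c3f3d80cb80e
    cinder list

The "3 seconds or 2 attempts" wait is governed by the block_device_allocate_retries and block_device_allocate_retries_interval options in nova.conf; raising them only helps if the volume would eventually become available, which is not the case here.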

cinder-scheduler.log:
====================
2017-08-03 11:37:13.265 20114 ERROR oslo_service.service return amqp_method(self, args)
2017-08-03 11:37:13.265 20114 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 243, in _close
2017-08-03 11:37:13.265 20114 ERROR oslo_service.service reply_code, reply_text, (class_id, method_id), ChannelError,
2017-08-03 11:37:13.265 20114 ERROR oslo_service.service PreconditionFailed: Exchange.declare: (406) PRECONDITION_FAILED - inequivalent arg 'durable' for exchange 'openstack' in vhost '/': received 'false' but current is 'true'
2017-08-03 11:37:13.265 20114 ERROR oslo_service.service
2017-08-03 11:37:13.271 20114 CRITICAL cinder [req-fedfebc6-d7a1-44a0-8d24-0e815e0fa7b3 - - - - -] AttributeError: 'NoneType' object has no attribute 'cleanup'
2017-08-03 11:37:13.271 20114 ERROR cinder Traceback (most recent call last):
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/bin/cinder-scheduler", line 10, in <module>
2017-08-03 11:37:13.271 20114 ERROR cinder sys.exit(main())
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/cmd/scheduler.py", line 56, in main
2017-08-03 11:37:13.271 20114 ERROR cinder service.wait()
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 613, in wait
2017-08-03 11:37:13.271 20114 ERROR cinder _launcher.wait()
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 333, in wait
2017-08-03 11:37:13.271 20114 ERROR cinder super(ServiceLauncher, self).wait()
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 236, in wait
2017-08-03 11:37:13.271 20114 ERROR cinder self.services.wait()
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 687, in wait
2017-08-03 11:37:13.271 20114 ERROR cinder service.wait()
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 419, in wait
2017-08-03 11:37:13.271 20114 ERROR cinder self.rpcserver.wait()
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 268, in wrapper
2017-08-03 11:37:13.271 20114 ERROR cinder log_after, timeout_timer)
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 188, in run_once
2017-08-03 11:37:13.271 20114 ERROR cinder post_fn = fn()
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 267, in <lambda>
2017-08-03 11:37:13.271 20114 ERROR cinder states[state].run_once(lambda: fn(self, *args, **kwargs),
2017-08-03 11:37:13.271 20114 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 452, in wait
2017-08-03 11:37:13.271 20114 ERROR cinder self.listener.cleanup()
2017-08-03 11:37:13.271 20114 ERROR cinder AttributeError: 'NoneType' object has no attribute 'cleanup'
2017-08-03 11:37:13.271 20114 ERROR cinder
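The PRECONDITION_FAILED error above means cinder-scheduler tried to declare the 'openstack' exchange as non-durable while RabbitMQ already holds it as durable, so the broker refused the declaration and the service died. One way to make the declaration consistent (a sketch; verify it matches what the other services on this broker use) is the oslo.messaging durable-queues option in cinder.conf:

    [oslo_messaging_rabbit]
    amqp_durable_queues = true

Alternatively, the stale 'openstack' exchange can be deleted on the broker so it is re-declared with the new arguments.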

Tags: horizon
musharani (musharani): description updated

musharani (musharani) changed in juniperopenstack:
milestone: none → r4.0.1.0

Rudra Rugge (rrugge) wrote:

Can you try with the latest build?

Changed in juniperopenstack:
status: New → Incomplete
assignee: nobody → musharani (musharani)

Vedamurthy Joshi (vedujoshi) wrote:

In Horizon, while launching VMs, creating a new volume is enabled by default, so users may end up hitting this when no storage is provisioned.

This bug can be closed
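For reference, on Horizon releases that support it, the launch-instance default can be flipped in local_settings.py. A sketch; the create_volume key is an assumption to verify against the settings reference for the deployed release:

    LAUNCH_INSTANCE_DEFAULTS = {
        'create_volume': False,  # do not create a new volume by default
    }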


musharani (musharani) wrote:

As per Vedu's comment, I disabled the volume option while creating the VM in the script itself. Hence I closed the bug.

https://review.opencontrail.org/#/c/35339/ - master
https://review.opencontrail.org/#/c/35338/ - R4.0
