Booting a VM with --block-device fails because the volume attach fails during boot

Bug #1416269 reported by Jerry Cai
Affects: OpenStack Compute (nova)
Status: Confirmed
Importance: Undecided
Assigned to: Prachi Khatri

Bug Description

When attaching an existing volume while booting a VM with the following command:
nova boot --flavor small --image c7e8738b-c2c6-4365-a305-040bfbd1b514 --nic net-id=abfe3157-d23c-4d15-a7ff-80429a7d9b27 --block-device source=volume,dest=volume,bootindex=1,shutdown=remove,id=ca383135-d619-43c2-8826-95ae4d475581 test11

It fails in the "block device mapping" phase; the error from nova is:
2015-01-30 01:59:14.030 28957 ERROR nova.compute.manager [-] [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] Instance failed block device setup
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] Traceback (most recent call last):
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1856, in _prep_block_device
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] do_check_attach=do_check_attach)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 407, in attach_block_devices
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] map(_log_and_attach, block_device_mapping)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 405, in _log_and_attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] bdm.attach(*attach_args, **attach_kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 48, in wrapped
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] ret_val = method(obj, context, *args, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 272, in attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] self['mount_device'], mode=mode)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 213, in wrapper
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] res = method(self, ctx, volume_id, *args, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 359, in attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] mountpoint, mode=mode)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 326, in attach
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] 'mode': mode})
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 311, in _action
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] return self.api.client.post(url, body=body)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 91, in post
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] return self._cs_request(url, 'POST', **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 85, in _cs_request
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] return self.request(url, method, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 80, in request
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] return super(SessionClient, self).request(*args, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 158, in request
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 88, in request
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] return self.session.request(url, method, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] return func(*args, **kwargs)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 354, in request
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] raise exceptions.from_response(resp, method, url)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] InternalServerError: Internal Server Error (HTTP 500) (Request-ID: req-5247adeb-1053-4b62-972a-84b40c66c455)
2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c]
2015-01-30 01:59:14.053 28957 ERROR nova.compute.manager [-] [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] Failure prepping block device
2015-01-30 01:59:14.053 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] Traceback (most recent call last):
2015-01-30 01:59:14.053 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2324, in _build_resources
2015-01-30 01:59:14.053 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] block_device_mapping)
2015-01-30 01:59:14.053 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1884, in _prep_block_device
2015-01-30 01:59:14.053 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] raise exception.InvalidBDM()
2015-01-30 01:59:14.053 28957 TRACE nova.compute.manager [instance: 5456c257-9dda-4ce3-b16d-112ac55e498c] InvalidBDM: Block Device Mapping is Invalid.

The error from cinder is:
2015-01-30 01:39:23.456 17583 ERROR cinder.api.middleware.fault [req-5247adeb-1053-4b62-972a-84b40c66c455 039935838396418e9182b3e48360f6df 952d9c8024594f48baca9b24cb2562a6 - - -] Caught error: Timed out waiting for a reply to message ID 68416fdbc88e4ccf894ee6f3c5d2ad5b
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault Traceback (most recent call last):
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/middleware/fault.py", line 76, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return req.get_response(self.application)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault application, catch_exc_info=False)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in call_application
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault app_iter = application(self.environ, start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return resp(environ, start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 748, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return self._call_app(env, start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 684, in _call_app
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return self._app(env, _fake_start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return resp(environ, start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return resp(environ, start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/routes/middleware.py", line 131, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault response = self.app(environ, start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return resp(environ, start_response)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault resp = self.call_func(req, *args, **self.kwargs)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return self.func(req, *args, **kwargs)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py", line 978, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault content_type, body, accept)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py", line 1026, in _process_stack
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault action_result = self.dispatch(meth, request, action_args)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py", line 1106, in dispatch
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py", line 978, in __call__
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return method(req=request, **action_args)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/contrib/volume_actions.py", line 118, in _attach
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault instance_uuid, host_name, mountpoint, mode)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/volume/api.py", line 87, in wrapped
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault return func(self, context, target_obj, *args, **kwargs)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/volume/api.py", line 500, in attach
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault mode)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/volume/rpcapi.py", line 147, in attach_volume
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault mode=mode)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault retry=self.retry)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault timeout=timeout, retry=retry)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 404, in send
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault retry=retry)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 393, in _send
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault result = self._waiter.wait(msg_id, timeout)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 281, in wait
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault reply, ending = self._poll_connection(msg_id, timeout)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 231, in _poll_connection
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault % msg_id)
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault MessagingTimeout: Timed out waiting for a reply to message ID 68416fdbc88e4ccf894ee6f3c5d2ad5b
2015-01-30 01:39:23.456 17583 TRACE cinder.api.middleware.fault
2015-01-30 01:39:23.460 17583 INFO cinder.api.middleware.fault [req-5247adeb-1053-4b62-972a-84b40c66c455 039935838396418e9182b3e48360f6df 952d9c8024594f48baca9b24cb2562a6 - - -] http://9.114.192.160:8776/v2/952d9c8024594f48baca9b24cb2562a6/volumes/ca383135-d619-43c2-8826-95ae4d475581/action returned with HTTP 500

I guess the problem may be here:
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 270:
        if volume['attach_status'] == "detached":
            volume_api.attach(context, volume_id, instance.uuid,
                              self['mount_device'], mode=mode)
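
To make the failure chain in the two tracebacks easier to follow, here is a minimal sketch, not the real nova or cinder code; every class and function name below is a stand-in for illustration only. The idea: cinder-api times out waiting for cinder-volume over RPC, replies with HTTP 500 to the cinderclient call made from the code above, and nova's block-device preparation then converts that failure into InvalidBDM, aborting the boot.

class InternalServerError(Exception):
    """Stand-in for the HTTP 500 exception raised on the cinderclient side."""


class InvalidBDM(Exception):
    """Stand-in for nova.exception.InvalidBDM."""


def cinder_attach(volume_id, instance_uuid, mountpoint, mode='rw'):
    """Stand-in for volume_api.attach(): assume cinder-volume never answers
    the RPC call, so cinder-api returns HTTP 500 (as in the cinder log)."""
    raise InternalServerError(
        'Internal Server Error (HTTP 500): timed out waiting for cinder-volume')


def prep_block_device(instance_uuid, block_device_mapping):
    """Stand-in for ComputeManager._prep_block_device(): any failure while
    attaching the mapped volumes is re-raised as InvalidBDM, which is why the
    boot aborts with "Block Device Mapping is Invalid." in the nova log."""
    try:
        for bdm in block_device_mapping:
            if bdm.get('attach_status') == 'detached':
                cinder_attach(bdm['volume_id'], instance_uuid,
                              bdm['mount_device'], mode=bdm.get('mode', 'rw'))
    except Exception:
        # nova logs "Instance failed block device setup" here
        raise InvalidBDM('Block Device Mapping is Invalid.')


if __name__ == '__main__':
    # Illustrative BDM; the mount_device value is invented for the example.
    bdm = [{'volume_id': 'ca383135-d619-43c2-8826-95ae4d475581',
            'attach_status': 'detached', 'mount_device': '/dev/vdb'}]
    try:
        prep_block_device('5456c257-9dda-4ce3-b16d-112ac55e498c', bdm)
    except InvalidBDM as exc:
        print(exc)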

I also tried preparing a VM and a volume separately and attaching the volume with the nova volume-attach command; that path worked, but attaching during boot with nova boot --block-device failed.
Please help check this case.

Changed in nova:
status: New → Confirmed
Changed in nova:
assignee: nobody → Prachi Khatri (prachikhatri-18)
Prachi Khatri (prachikhatri-18) wrote :

I am trying to reproduce the bug; however, the VM boots with the volume attached and is up and running.

Below is the output of the executed commands:

root@node-6:~# cinder list
+--------------------------------------+-----------+---------------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Display Name        | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+---------------------+------+-------------+----------+--------------------------------------+
| 04f70fa5-1e3b-440e-be12-96ec6be7264d | in-use    | my-new-volume       | 8    | None        | false    | bc48b561-600f-4c15-93b2-99965255d577 |
| 4ea7f762-be57-4059-a292-05dcd1ae07f2 | available | test_vol_1          | 20   | None        | false    |                                      |
| 4ee418b3-9c17-4ac2-a667-dea10da9ab0a | in-use    | test_vol            | 20   | None        | false    | 6598e002-15cb-4c08-8ae1-62b9ef1e2789 |
| a23962b6-96f9-4e89-9b15-0953867e6a26 | available | hai-volume          | 1    | None        | false    |                                      |
| cc8a57f2-e4eb-4252-8395-38c61fe0eb68 | available | ecs-atlas-data-sp09 | 20   | None        | false    |                                      |
+--------------------------------------+-----------+---------------------+------+-------------+----------+--------------------------------------+
root@node-6:~# nova boot --flavor m1.small --image f3e0b962-f5c7-401a-8933-1dc6044620e9 --nic net-id=4b55a222-7167-4e21-8dbc-a8f2a7a2d139 --block-device source=volume,dest=volume,bootindex=1,shutdown=remove,id=4ea7f762-be57-4059-a292-05dcd1ae07f2 test_1
+--------------------------------------+-------------------------------------------------------+
| Property | Value |
+--------------------------------------+-------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000074 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
|...


Jerry Cai (caimin) wrote :

Hi Prachi, thanks for the feedback. I'm using the latest nova release (Kilo) to test this case. Could you verify it on the Kilo release? Thanks.

Prachi Khatri (prachikhatri-18) wrote :

Hi Jerry, thanks for your input. I have tried on the Kilo version and am able to see the "block device mapping is invalid" error while booting a VM with an attached cinder volume. I am copying the log obtained from cinder; however, there are no error logs from nova.

ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ nova boot --flavor m1.small --image 7146174f-9d25-45e7-a646-ff74b23e7ead --nic net-id=4545a373-43ff-4991-b5cd-d0ba5c352ab5 --block-device source=volume,dest=volume,bootindex=1,shutdown=remove,id=084eedd4-421d-4dbc-a674-1e2405953b58 test_with_cinder --poll
+--------------------------------------+------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | c8dPc3NzmBPf |
| config_drive | |
| created | 2015-02-09T08:48:15Z |
| flavor | m1.small (2) |
| hostId | |
| id | dd1907b7-d8b5-4a38-94d3-01ab05bb89af |
| image | cirros-0.3.2-x86_64-disk1 (7146174f-9d25-45e7-a646-ff74b23e7ead) |
| key_name | - |
| metadata | {} |
| name | test_with_cinder |
| os-extended-volumes:volumes_attached | [{"id": "084eedd4-421d-4dbc-a674-1e2405953b58"}] |
| progress | 0 |
| security_groups | default ...


Mark Goddard (mgoddard) wrote :

Also seen on a Kilo install using a modified nova & ironic with block device support (which admittedly may be interfering, I'm not sure).

I found that using the legacy --block-device-mapping worked, but --block-device did not.

nova boot --image img --block-device source=volume,dest=volume,id=23d3fd8d-7c54-4c23-afe0-b3361c9c5f74,shutdown=preserve,bootindex=1 --key-name default --flavor ironic_flavor instance
ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the instance and image/block device mapping combination is not valid. (HTTP 400)

I tried bootindex as -1, 0, 1 and absent with the same result.

The exception seems to be thrown at https://github.com/openstack/nova/blob/stable/kilo/nova/compute/api.py#L1278. It's checking that at least one block device has boot index 0 and that all block devices are strictly ascending in boot index.
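
For illustration, here is a minimal sketch of that kind of check (not the actual nova code; the function name and the plain-dict BDMs are invented for this example): at least one device must have boot index 0, and the boot indexes that are set must ascend without gaps.

def validate_boot_sequence(block_device_mappings):
    """Raise ValueError unless the boot indexes that are set form the
    sequence 0, 1, 2, ... with no gaps (negative or absent indexes are
    treated as "not part of the boot sequence")."""
    boot_indexes = sorted(
        bdm['boot_index'] for bdm in block_device_mappings
        if bdm.get('boot_index') is not None and bdm['boot_index'] >= 0)
    if not boot_indexes or boot_indexes != list(range(len(boot_indexes))):
        raise ValueError(
            'Boot sequence for the instance and image/block device '
            'mapping combination is not valid.')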

Adding some debug logging to that method to print the BDMs, I found the following results (illustrated by the sketch after this list):

• If you create an instance with just an image, you get one block device, source=image,dest=local,image_id=<ID>,bootindex=0. This is not passed by the nova client (verified with --debug), so it is being generated within Nova.
• If you create an instance with an image and --block-device-mapping, you get the above block device, plus another for the volume.
• If you create an instance with an image and --block-device, you just get the volume block-device. This causes the check for a bootindex 0 device to fail.
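
Feeding those three observed cases through a check of that shape (again a sketch with invented dicts, not the real BDM objects nova builds) shows why only the --block-device case trips it:

# Same check as in the sketch above, repeated so this snippet runs on its own.
def validate_boot_sequence(bdms):
    idx = sorted(b['boot_index'] for b in bdms
                 if b.get('boot_index') is not None and b['boot_index'] >= 0)
    if not idx or idx != list(range(len(idx))):
        raise ValueError('Boot sequence for the instance and image/block '
                         'device mapping combination is not valid.')

# Image only: nova itself generates an image BDM with boot_index 0 -> passes.
image_only = [{'source_type': 'image', 'destination_type': 'local',
               'boot_index': 0}]

# Image plus legacy --block-device-mapping: the generated image BDM is kept
# and the volume is appended -> indexes [0, 1] -> passes.
legacy_style = image_only + [{'source_type': 'volume',
                              'destination_type': 'volume', 'boot_index': 1}]

# Image plus --block-device: only the volume BDM arrives, so nothing has
# boot_index 0 -> the check raises.
new_style = [{'source_type': 'volume', 'destination_type': 'volume',
              'boot_index': 1}]

for name, case in [('image only', image_only),
                   ('--block-device-mapping', legacy_style),
                   ('--block-device', new_style)]:
    try:
        validate_boot_sequence(case)
        print('%s -> valid' % name)
    except ValueError as exc:
        print('%s -> invalid: %s' % (name, exc))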

Ladislav Smola (lsmola) wrote :

On a Kilo environment I am seeing:

nova boot --flavor m1.small --image 0569ca47-5988-4639-b45d-9bec9822d89f --nic net-id=5990be97-e21d-4b4e-b2b3-465062703aaa --block-device source=volume,dest=volume,bootindex=1,shutdown=remove,id=901761be-d3c5-4110-9310-75463a04ccc6 test_1

ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the instance and image/block device mapping combination is not valid. (HTTP 400) (Request-ID: req-00c627a7-b7bf-48ee-bd9d-7303653b39e1)

Sujitha (sujitha-neti) wrote :

This was the kilo fix:

https://review.openstack.org/#/c/174060/ for bug 1433609. I'm not able to reproduce this bug on stable/kilo.

I'm going to mark this bug as a duplicate of bug 1433609 - if it's not the same issue, please re-open and explain why.
