Cannot launch an instance via Horizon, but can via the microstack command

Bug #1951958 reported by David mourereau
This bug affects 4 people
Affects Status Importance Assigned to Milestone
MicroStack
In Progress
Medium
Billy Olsen

Bug Description

After installing MicroStack with --devmode and running init:

I can easily create an instance via microstack launch cirros.

But when using Horizon, I get the following error:

Error: Failed to perform requested operation on instance "test2", the instance has an error status: Please try again later [Error: Build of instance 3548b840-fa73-430b-85f7-c965364ad2fc aborted: Invalid input received: Invalid image identifier or unable to access requested image. (HTTP 400) (Request-ID: req-421c545d-59ea-4f00-8fa1-ec375962b86c)].

Thank you for your support,

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Not sure what's going on here. Can you provide some of the nova and glance logs involved here?

journalctl -u snap.microstack.nova-compute -u snap.microstack.nova-api -u snap.microstack.glance-api

Additionally, the output of microstack.openstack image list would be helpful.

Changed in microstack:
status: New → Incomplete
Revision history for this message
Lasse Gustafsson (klicken) wrote :

I have the exact same problem.
These are the logs from when I try to launch an instance from the dashboard.
It works fine with the CLI.

+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| a866d0d6-52d7-4cc2-b67d-f2659d9527a3 | cirros | active |
| 615ed9b1-a17c-4d16-bb7c-c6cdc29f0e15 | focal | active |
+--------------------------------------+--------+--------+

Dec 12 18:06:24 nuc-1 nova-api-os-compute[71719]: 2021-12-12 18:06:24.742 71719 INFO nova.osapi_compute.wsgi.server [-] 192.168.72.14,127.0.0.1 "GET /v2.1 HTTP/1.0" status: 200 len: 777 time: 0.0009298
Dec 12 18:06:25 nuc-1 nova-api-os-compute[71719]: 2021-12-12 18:06:25.498 71719 INFO nova.osapi_compute.wsgi.server [req-5c7112ba-b3a3-4d0a-974a-795b9bcebf4b a96ca30e7a794c62a4da99373b6ad0e6 16169bd039ec4df3b8ed9174936df683 - default default] 192.168.72.14,127.0.0.1 "GET /v2.1/os-simple-tenant-usage/16169bd039ec4df3b8ed9174936df683?start=2021-12-11T00:00:00&end=2021-12-12T23:59:59 HTTP/1.0" status: 200 len: 1353 time: 0.6528597
Dec 12 18:06:25 nuc-1 nova-api-os-compute[71719]: 2021-12-12 18:06:25.570 71719 INFO nova.osapi_compute.wsgi.server [req-a8153e70-220d-4b58-99c2-e15103ca57e5 a96ca30e7a794c62a4da99373b6ad0e6 16169bd039ec4df3b8ed9174936df683 - default default] 192.168.72.14,127.0.0.1 "GET /v2.1/os-simple-tenant-usage/16169bd039ec4df3b8ed9174936df683?start=2021-12-11T00:00:00&end=2021-12-12T23:59:59&marker=bdc4b42f-c152-414f-8a87-29aa85f85dce HTTP/1.0" status: 200 len: 415 time: 0.0686216
Dec 12 18:06:26 nuc-1 nova-api-os-compute[71720]: 2021-12-12 18:06:26.346 71720 INFO nova.osapi_compute.wsgi.server [req-8de5f6e2-6dee-4694-a5aa-406acdb6a95f a96ca30e7a794c62a4da99373b6ad0e6 16169bd039ec4df3b8ed9174936df683 - default default] 192.168.72.14,127.0.0.1 "GET /v2.1/limits?reserved=1 HTTP/1.0" status: 200 len: 908 time: 0.5313892
Dec 12 18:06:33 nuc-1 glance-api[71696]: 2021-12-12 18:06:33.424 71696 INFO eventlet.wsgi.server [req-ed41e6ed-e8d6-45f3-8dda-2455ba661db3 a96ca30e7a794c62a4da99373b6ad0e6 16169bd039ec4df3b8ed9174936df683 - default default] 192.168.72.14,127.0.0.1 - - [12/Dec/2021 18:06:33] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.0" 200 2095 0.023446
Dec 12 18:06:33 nuc-1 glance-api[71699]: 2021-12-12 18:06:33.431 71699 INFO eventlet.wsgi.server [req-83dd5fa2-73e8-458f-9fe2-36536da092f5 a96ca30e7a794c62a4da99373b6ad0e6 16169bd039ec4df3b8ed9174936df683 - default default] 192.168.72.14,127.0.0.1 - - [12/Dec/2021 18:06:33] "GET /v2/schemas/image HTTP/1.0" 200 6278 0.003382
Dec 12 18:06:33 nuc-1 glance-api[71696]: 2021-12-12 18:06:33.454 71696 INFO eventlet.wsgi.server [req-60bb9f0f-445f-4051-ba67-233b98037958 a96ca30e7a794c62a4da99373b6ad0e6 16169bd039ec4df3b8ed9174936df683 - default default] 192.168.72.14,127.0.0.1 - - [12/Dec/2021 18:06:33] "GET /v2/images?visibility=community&limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.0" 200 329 0.014500
Dec 12 18:06:33 nuc-1 nova-api-os-compute[71720]: 2021-12-12 18:06:33.805 71720 INFO nova.osapi_compute.wsgi.server [req-88bfcabc-0c4c-40b7-ab85-b243500a58b3 a96ca30e7a794c62a4da99373b6ad0e6 ...

Revision history for this message
Bibmaster (bibmaster) wrote :

Getting the exact same issue: I couldn't launch an instance or create a volume from the Horizon web UI, but I could do both via the CLI.

Error: Failed to perform requested operation on instance "12", the instance has an error status: Please try again later [Error: Build of instance 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47 aborted: Invalid input received: Invalid image identifier or unable to access requested image. (HTTP 400) (Request-ID: req-01a25693-8cf4-4ce5-b56b-bcef896088fe)].

CLI:

microstack launch ubuntu_20.04.2 -n ubuntu -f m1.medium
Launching server ...
Allocating floating ip ...
Server ubuntu launched! (status is BUILD)

Access it with `ssh -i /home/bibmaster/snap/microstack/common/.ssh/id_microstack ubuntu@10.20.20.202`
You can also visit the OpenStack dashboard at https://10.20.20.1:443

I had the same problem about a year ago when I installed OpenStack; I found a solution and fixed it then, but now I can't remember what that solution was.

Revision history for this message
Bibmaster (bibmaster) wrote :

2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] res = method(self, ctx, *args, **kwargs)
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] res = method(self, ctx, size, *args, **kwargs)
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] item = client.volumes.create(size, **kwargs)
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] return self._create('/volumes', body, 'volume')
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] resp, body = self.api.client.post(url, body=body)
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] return self._cs_request(url, 'POST', **kwargs)
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] return self.request(url, method, **kwargs)
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] File "/snap/microstack/244/lib/python3.8/site-packag>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] raise exceptions.from_response(resp, body)
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] cinderclient.exceptions.BadRequest: Invalid image iden>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47]
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] During handling of the above exception, another except>
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47]
2022-01-13 00:33:15.119 1016 ERROR nova.compute.manager [instance: 6233e0ae-37a2-4bc2-bd9d-c47dff9cce47] Traceback (most recent call last):
2022...


Revision history for this message
Billy Olsen (billy-olsen) wrote :

@Bibmaster - when you are creating an instance via Horizon, are you creating the instance with a volume attached? The default option is to create a volume and boot the instance from it. When creating the instance through the command line, however, the instance is not set up to boot from a volume.
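For comparison, Horizon's default boot-from-volume behaviour can be reproduced from the CLI with openstackclient's --boot-from-volume flag. This is a sketch against a live deployment; the image, flavor, network, and server names here are examples, not values from this report:

```shell
# Boot from a newly created 8G volume, mirroring Horizon's default.
# "cirros", "m1.small", and "test" are example names; substitute your own.
microstack.openstack server create \
    --image cirros \
    --flavor m1.small \
    --network test \
    --boot-from-volume 8 \
    test-bfv
```

If the CLI variant fails the same way, that points at the volume path rather than Horizon itself.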

Revision history for this message
Bibmaster (bibmaster) wrote :

@Billy If I try to create a volume from the existing image I'm getting an error too (in the web UI). There is just the string "Unable to create the volume", but it does allow me to create an empty volume.

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Sorry, I don't think I was clear. When using the Horizon web interface to launch an instance, on the Source tab there's a dropdown that says "Select Boot Source". Next to it is a toggle that says "Create New Volume". When you launch the instance, is "Create New Volume" toggled to Yes or No? You should select No and try again.

Revision history for this message
Bibmaster (bibmaster) wrote :

Thank you, I tried this option, but in that case the instance is unreachable from any device. I think it launches an empty instance; I couldn't reach it via ping or SSH. BTW, I am able to log in to instances created from the CLI.
 On the MicroStack install I unfortunately deleted today without saving the configs, I was able to launch volumes from images, but as I said, it didn't work by default there either. There was an issue whose fix I can't remember, and it seems it's still not fixed.

Revision history for this message
Bibmaster (bibmaster) wrote :

@Billy if you have a chance, I could provide you direct access to the device via SSH and port 443 to see what's going on there.

Revision history for this message
Billy Olsen (billy-olsen) wrote :

I won't access your device, but you are welcome to reach out in the #openstack-snaps channel on oftc irc for some live chat if that helps.

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Another thing to note is that the microstack.launch command will allocate and assign a floating IP to the instance. If the instance is launched via the Horizon GUI, you'll need to make sure you assign a floating IP address yourself and use that for access, since the default tenant network it launches on is a private network.
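Assigning a floating IP by hand might look like this. These are commands against a live MicroStack deployment; the network name "external", the server name "test2", and the address are assumptions based on MicroStack defaults:

```shell
# Allocate a floating IP from the external network...
microstack.openstack floating ip create external

# ...then attach it to the instance
# (substitute the address returned by the command above)
microstack.openstack server add floating ip test2 10.20.20.150
```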

Changed in microstack:
status: Incomplete → Triaged
importance: Undecided → Medium
Revision history for this message
Billy Olsen (billy-olsen) wrote :

I was able to recreate the same symptoms as reported in the description by launching an instance and choosing to create a volume. However, I did not install Microstack with the experimental volume support and so the creation of the volume actually fails.

It turns out that the cinder services are enabled and configured within the Keystone catalog, which causes Horizon to offer options around booting from a volume. I think the crux of the problem is that the cinder service is enabled by default, when it should only be enabled when the experimental volume support is turned on.
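You can confirm whether cinder is registered by inspecting the catalog on a live deployment. If the cinderv2/cinderv3 entries appear, Horizon will offer volume-backed boot options:

```shell
# List registered services; cinderv2/cinderv3 here means Horizon
# will show "Create New Volume" and related options.
microstack.openstack service list

# Show the full catalog, including endpoints
microstack.openstack catalog list
```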

Alternatively, one could change the default value of LAUNCH_INSTANCE_DEFAULTS to hide the create-volume option and set create_volume to False by default (note that REST_API_REQUIRED_SETTINGS must include LAUNCH_INSTANCE_DEFAULTS for it to be exposed). However, it's better not to configure the cinder service at all when it's guaranteed to lead to problems such as this.
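For illustration, the Horizon settings change described above would look roughly like this in Horizon's local_settings.py. This is a sketch based on stock Horizon settings, not a MicroStack-specific file, and the REST_API_REQUIRED_SETTINGS entries beyond LAUNCH_INSTANCE_DEFAULTS are examples of commonly listed values:

```python
# local_settings.py -- hide the "Create New Volume" toggle and
# default it to off in the Launch Instance dialog.
LAUNCH_INSTANCE_DEFAULTS = {
    'create_volume': False,
    'hide_create_volume': True,
}

# LAUNCH_INSTANCE_DEFAULTS is only exposed to the launch dialog
# if it is listed here.
REST_API_REQUIRED_SETTINGS = [
    'LAUNCH_INSTANCE_DEFAULTS',
    'OPENSTACK_KEYSTONE_BACKEND',
    'OPENSTACK_HYPERVISOR_FEATURES',
]
```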

Revision history for this message
Bibmaster (bibmaster) wrote :

I reinstalled the snap and ran
microstack init --auto --control. The symptoms are the same.
Here is a summary of the issue:
  1. Unable to create an instance from an image using the create-volume option in the GUI (getting a 400 error during the block device mapping)
  2. Unable to create a volume from an image using the GUI (getting an error)

Revision history for this message
Billy Olsen (billy-olsen) wrote :

@Bibmaster you do *not* want to use volumes, as volume support is not enabled. The parameters you provided do not include backing storage support, which means the cinder volumes that would be provisioned have no backend to provision from.

To make it a bit easier, let's just remove the cinder service from your service catalog. To do this, run:

microstack.openstack service delete cinderv2
microstack.openstack service delete cinderv3

Log out of the Horizon interface and restart it:

sudo snap restart microstack.horizon-uwsgi

Log back into Horizon. There should no longer be any volume options available.

Now you can create an instance and it should spawn. If you can't access it, check that you provided SSH keys when you launched it and that it has a floating IP assigned. You access it via the floating IP.

Revision history for this message
Bibmaster (bibmaster) wrote :

@Billy Thank you for the detailed answer! But my question is: how did I manage to create volumes from .iso images on previous releases of MicroStack? Was that option enabled then, and is it deprecated now? What do I need to do to enable it, and are there any plans to enable it in the future?
Thank you!

Revision history for this message
Billy Olsen (billy-olsen) wrote :

@Bibmaster - you must have run microstack.init with --setup-loop-based-cinder-lvm-backend, optionally specifying --loop-device-file-size to set the size.

E.g. the following creates a 50G loop device and configures the Cinder LVM backend:

sudo microstack.init --auto --control --setup-loop-based-cinder-lvm-backend --loop-device-file-size 50

Revision history for this message
Bibmaster (bibmaster) wrote :

@Billy I did it with the exact command you mentioned (I had googled it before), but it still fails during the block device mapping with HTTP 400.

Here is the cinder-snap.conf:

[DEFAULT]
# Set state path to writable directory
state_path = /var/snap/microstack/common/lib

resource_query_filters_file = /snap/microstack/244/etc/cinder/resource_filters.json

# Set volume configuration file storage directory
volumes_dir = /var/snap/microstack/common/lib/volumes

my_ip = 192.168.11.33

rootwrap_config = /var/snap/microstack/common/etc/cinder/rootwrap.conf

enabled_backends = lvm-loop-based-backend
[lvm-loop-based-backend]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi
target_helper = lioadm
volume_group = cinder-volumes
volume_backend_name=lvm-loop-based

log_file = /var/snap/microstack/common/log/cinder.log
debug = False

[oslo_concurrency]
# Oslo Concurrency lock path
lock_path = /var/snap/microstack/common/lock

Revision history for this message
Bibmaster (bibmaster) wrote (last edit ):

@Billy
BTW, the config file contains the lines

# Set volume configuration file storage directory
volumes_dir = /var/snap/microstack/common/lib/volumes

but I don't see that directory (I created it manually, but it didn't help).

root@openstack:/var/snap/microstack/common/lib# ls -la
total 68
drwxr-xr-x 17 root root 4096 Jan 13 04:06 .
drwxr-xr-x 11 root root 4096 Jan 13 03:55 ..
drwxr-xr-x 3 root root 4096 Jan 13 04:06 external
drwxr-xr-x 2 root root 4096 Jan 13 03:55 groups
drwxr-x--- 2 root root 4096 Jan 13 14:01 images
drwxr-x--- 7 root root 4096 Jan 13 14:02 instances
drwxr-xr-x 3 root root 4096 Jan 13 03:43 libvirt
srw-r--r-- 1 root root 0 Jan 13 03:57 metadata_proxy
drwxr-xr-x 14 root root 4096 Jan 13 03:56 mysql
drwx------ 2 root root 4096 Jan 13 03:45 mysql-files
drwx------ 2 snap_daemon root 4096 Jan 13 14:01 nginx_client_body
drwx------ 2 snap_daemon root 4096 Jan 13 03:46 nginx_fastcgi
drwx------ 2 snap_daemon root 4096 Jan 13 03:46 nginx_proxy
drwx------ 2 snap_daemon root 4096 Jan 13 03:46 nginx_scgi
drwx------ 7 snap_daemon root 4096 Jan 13 04:02 nginx_uwsgi
drwxr-xr-x 2 root root 4096 Jan 13 13:50 ovn-metadata-proxy
drwx------ 3 root root 4096 Jan 13 03:45 rabbitmq
drwxr-xr-x 2 root root 4096 Jan 13 03:55 tmp

Revision history for this message
Bibmaster (bibmaster) wrote :

That's what the instance shows if we don't create drives at launch.

Revision history for this message
Bibmaster (bibmaster) wrote :

Here is the debug output from the volume creation.

Revision history for this message
Billy Olsen (billy-olsen) wrote (last edit ):

With the experimental volume support enabled, this doesn't work because the certificate is not configured for glance in the cinder service. The following workaround should work for now:

$ sudo tee /var/snap/microstack/common/etc/cinder/cinder.conf.d/glance.conf <<EOF
[DEFAULT]
glance_ca_certificates_file = /var/snap/microstack/common/etc/ssl/certs/cacert.pem
EOF
$ sudo snap restart microstack.cinder-{uwsgi,scheduler,volume}
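To check that the workaround took effect, one can verify the drop-in file and then try creating a volume from an image. These are commands against a live deployment; the image and volume names are examples:

```shell
# Confirm the drop-in config is in place
cat /var/snap/microstack/common/etc/cinder/cinder.conf.d/glance.conf

# Creating a volume from an image should now succeed
microstack.openstack volume create --image cirros --size 1 test-vol
microstack.openstack volume show test-vol -f value -c status
```

The volume status should eventually reach "available" rather than "error".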

Changed in microstack:
status: Triaged → In Progress
assignee: nobody → Billy Olsen (billy-olsen)
Revision history for this message
Bibmaster (bibmaster) wrote :

Thank you @Billy! This fix works for me!

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to microstack (master)

Fix proposed to branch: master
Review: https://review.opendev.org/c/x/microstack/+/824836

Revision history for this message
Lucca Jiménez Könings (jimkoen) wrote :

For me the bug does not seem to be resolved.
I followed this report, setting up MicroStack with a Cinder loop device via

sudo microstack.init --auto --control --setup-loop-based-cinder-lvm-backend --loop-device-file-size 100

and have additionally added the certificates as described in #22.

However, I still cannot start instances with the "Create New Volume" slider enabled.

I can create volumes under Project -> Volumes just fine. But booting instances with an automatically created volume via the slider (see pic) fails with an error in the web UI:

Error: Failed to perform requested operation on instance "asd", the instance has an error status: Please try again later [Error: Build of instance 3bd9a8d4-31b5-4516-90e6-bdca98f4b6ce aborted: Invalid input received: Invalid image identifier or unable to access requested image. (HTTP 400) (Request-ID: req-9f773f7d-be0c-48a3-9782-96bde6708f42)].

It fails at the block device mapping step.

Any help would be appreciated. If you tell me where to look, I can provide log-output, screenshots etc.

Revision history for this message
Lucca Jiménez Könings (jimkoen) wrote :

To clarify: I can create volumes that aren't attached to an image. Creating volumes from images is not possible.

Revision history for this message
Lucca Jiménez Könings (jimkoen) wrote :

@billy-olsen I have no experience with Openstack, but are you sure the tee call should look like this?

$ sudo tee /var/snap/microstack/common/etc/cinder/cinder.conf.d/glance.conf.2 <<EOF
[DEFAULT]
glance_ca_certificates_file = /var/snap/microstack/common/etc/ssl/certs/cacert.pem
EOF

Removing the '2' from ".../cinder.conf.d/glance.conf.2" fixed the issue for me:

$ sudo tee /var/snap/microstack/common/etc/cinder/cinder.conf.d/glance.conf <<EOF
[DEFAULT]
glance_ca_certificates_file = /var/snap/microstack/common/etc/ssl/certs/cacert.pem
EOF
$ sudo snap restart microstack.cinder-{uwsgi,scheduler,volume}

Revision history for this message
Billy Olsen (billy-olsen) wrote :

@Lucca - yes, you are correct; apologies. I was running my tests against a secondary file and the .2 should not be there. I will adjust my comment to reflect the correct command.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on microstack (master)

Change abandoned by "Billy Olsen <email address hidden>" on branch: master
Review: https://review.opendev.org/c/x/microstack/+/824836
Reason: in favor of sunbeam microstack

