Zun

Mounting cinder storage: the dashboard shows that the mount was successful, but it turns out that the local store is the one actually mounted

Bug #1813459 reported by hezhiqiang
Affects: Zun
Status: Fix Released
Importance: Undecided
Assigned to: Unassigned

Bug Description

1. Dashboard reports the error: "Unable to retrieve attachment information."

2. My configuration:
vim /etc/zun/zun.conf
[volume]
driver = cinder
volume_dir = /var/lib/zun/mnt

3. Once the container is up, a directory is generated under /var/lib/zun/mnt/:
/var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7

4. After the container is deleted, /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7 is deleted as well.

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

Cinder shows mount success

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

The data exists in the mounted directory: /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7

Revision history for this message
hongbin (hongbin034) wrote :

Hi @mustang,

Two questions from me:

* How did you deploy Zun (e.g. kolla, devstack, or manual install)?
* Which version of Zun were you using (e.g. master, stable/rocky, or stable/queens)?

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

manual install && master
version:
zun 2.1.1.dev203
python-zunclient 3.0.0

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

The data is not stored in the cinder volume at all; it is stored in /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7

Revision history for this message
hongbin (hongbin034) wrote :

Could you check if '/var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7' is mounted to the cinder volume device?

In particular, if you type:

$ mount | grep 8bb97e08-dd28-43e8-a9df-cbad4cf924d7

You should see something like:

/dev/sdc on /opt/stack/data/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7 type ext4 (rw,relatime,data=ordered)

That means the behavior is correct. The cinder volume device '/dev/xxx' is mounted to the host path '/opt/stack/data/zun/mnt/xxxx', and that path is bind-mounted into a path inside the container. If things are correct, the data will be stored in the cinder volume.
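As a quick end-to-end check (a rough sketch only; the container ID and file name are placeholders, and it assumes the volume is mounted at /data inside the container), you can write a file in the container and look for it under the host mount point:

# write a marker file inside the container
$ docker exec <container-id> sh -c 'echo hello > /data/check.txt'

# read it back from the host-side mount point of the cinder device
$ cat /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7/check.txt
hello

If the file shows up there and that path is a mount point of the cinder device, the data is really on the volume.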

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

There is no mount entry for that directory.

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

Cinder volume attachment status

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

openstack appcontainer run --name container --net network=1a4de5c6-6e74-4d31-8c04-cf45f048ae84 --mount source=dev,destination=/data --image-pull-policy ifnotpresent --runtime kata-runtime nginx

Revision history for this message
hongbin (hongbin034) wrote :

Thanks for providing the information.

I saw you were using the kata runtime. To further locate the problem, could you run the same command with the default (runc) runtime? Or does the error only happen when using the kata runtime?

In addition, could you provide the zun-compute log?

Revision history for this message
hezhiqiang (hezhiqiang) wrote :
Download full text (6.2 KiB)

2019-01-28 14:10:40.875 72868 INFO zun.container.os_capability.linux.os_capability_linux [-] The program 'numactl' is not installed.
2019-01-28 14:11:15.483 72868 INFO zun.compute.manager [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Attaching volume a33190ea-7383-4534-8410-b6f5ccdde4ca to compute-3
2019-01-28 14:11:15.602 72868 INFO oslo.privsep.daemon [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Running privsep helper: ['sudo', 'zun-rootwrap', '/etc/zun/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/zun/zun.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpZA8k13/privsep.sock']
2019-01-28 14:11:16.873 72868 INFO oslo.privsep.daemon [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Spawned new privsep daemon via rootwrap
2019-01-28 14:11:16.804 69748 INFO oslo.privsep.daemon [-] privsep daemon starting
2019-01-28 14:11:16.809 69748 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2019-01-28 14:11:16.814 69748 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2019-01-28 14:11:16.814 69748 INFO oslo.privsep.daemon [-] privsep daemon running as pid 69748
2019-01-28 14:11:21.046 72868 INFO zun.volume.cinder_workflow [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Get connection information {u'driver_volume_type': u'iscsi', u'data': {u'access_mode': u'rw', u'target_discovered': False, u'encrypted': False, u'qos_specs': None, u'target_iqn': u'iqn.2010-10.org.openstack:volume-a33190ea-7383-4534-8410-b6f5ccdde4ca', u'target_portal': u'172.31.200.103:3260', u'volume_id': u'a33190ea-7383-4534-8410-b6f5ccdde4ca', u'target_lun': 0, u'auth_password': u'ZexejLsz4NPHFJ3d', u'auth_username': u'TacnHA6xPUn4aU4F6EGU', u'auth_method': u'CHAP'}}
2019-01-28 14:11:21.048 72868 INFO os_brick.initiator.connectors.iscsi [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Trying to connect to iSCSI portal 172.31.200.103:3260
2019-01-28 14:11:22.016 72868 WARNING os_brick.initiator.connectors.iscsi [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.

2019-01-28 14:11:26.646 72868 INFO zun.volume.cinder_workflow [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Get device_info after connect to volume {'path': u'/dev/sdd', 'scsi_wwn': '36001405a74165072b3f41668ebfa8e51', 'type': 'block'}
2019-01-28 14:11:26.898 72868 INFO zun.volume.cinder_workflow [req-aa3d6102-f450-4874-9f7f-46a4edbdd60f 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Attach volume to this server successfully
2019-01-28 14:11:26.948 72868 INFO oslo.privsep.d...


Revision history for this message
hezhiqiang (hezhiqiang) wrote :

Default (runc) runtime

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

Refreshing the browser reports an error; if I delete the container, I don't get an error.

Revision history for this message
hongbin (hongbin034) wrote :

@mustang,

I see the problem now. Since horizon assumes a cinder volume must be attached to a nova instance, it tried to locate the nova instance from the volume attachment and failed. The error you were seeing is due to a request to locate the nova instance, to which nova returned a 404 response.

For now, you can disregard the error since it is not a Zun issue (I will add horizon to this report so that someone from horizon might help fix the volume panel). I didn't see any error in the zun-compute log, so it seems the cinder volume was successfully attached to the container.

If you run a container with a cinder volume bind-mounted, write some data to the volume, then delete and re-create the container with the same volume, is the data still there? If it is, things are working as expected.

hongbin (hongbin034)
Changed in zun:
status: New → Won't Fix
Revision history for this message
lxm (lxm-xupt) wrote :

We did the following test:
1. Use the following command to create a container:
zun run -i --name container --net network=1a4de5c6-6e74-4d31-8c04-cf45f048ae84 --mount source=dev,destination=/data --image-pull-policy ifnotpresent nginx /bin/bash
2. Execute `echo "test" >> /data/test.txt` in the container, then delete the container.
3. Re-run the command from step 1 to create a new container and execute `ls /data` in it; there was nothing under the `/data` folder.

Revision history for this message
hongbin (hongbin034) wrote :

This is strange. I will take a closer look.

Changed in zun:
status: Won't Fix → New
Revision history for this message
hongbin (hongbin034) wrote :

This is exactly what we tested in the CI: https://github.com/openstack/zun-tempest-plugin/blob/master/zun_tempest_plugin/tests/tempest/api/test_containers.py#L357 , so the CI doesn't have this issue.

I tried to reproduce the issue locally but had no luck. Will give it more thought.

Revision history for this message
hongbin (hongbin034) wrote :

Hi @hzq,

The log you provided in comment #11 doesn't contain debug information. Would you turn on debugging (in /etc/zun/zun.conf, set debug = True under [DEFAULT]) and capture the log again?
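For reference, a minimal sketch of that change (it assumes zun-compute runs as a systemd service named zun-compute):

vim /etc/zun/zun.conf
[DEFAULT]
debug = True

Then restart the service so the new log level takes effect:

$ systemctl restart zun-compute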

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

ok

Revision history for this message
hezhiqiang (hezhiqiang) wrote :
Download full text (20.7 KiB)

compute node log:
2019-01-31 12:06:19.763 54425 DEBUG oslo_concurrency.lockutils [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Lock "compute_resources" released by "zun.compute.compute_node_tracker.container_claim" :: held 0.041s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:339
2019-01-31 12:06:19.775 54425 DEBUG zun.image.glance.driver [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Searching for image nginx locally _search_image_on_host /usr/lib/python2.7/site-packages/zun/image/glance/driver.py:39
2019-01-31 12:06:21.268 54425 DEBUG zun.image.glance.utils [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Found matches [{u'status': u'active', u'tags': [], u'container_format': u'docker', u'min_ram': 0, u'updated_at': u'2018-04-20T08:16:36Z', u'visibility': u'public', u'owner': u'9d854a5d11884c84bfd5fdce35eeaae4', u'file': u'/v2/images/a325694d-4df6-4505-bade-d8833a9c7d5e/file', u'min_disk': 0, u'virtual_size': None, u'id': u'a325694d-4df6-4505-bade-d8833a9c7d5e', u'size': 112667648, u'name': u'nginx', u'checksum': u'6173b60a2775bac70b241d9e22f94ca2', u'created_at': u'2018-04-20T08:16:35Z', u'disk_format': u'raw', u'protected': False, u'schema': u'/v2/schemas/image'}] find_image /usr/lib/python2.7/site-packages/zun/image/glance/utils.py:39
2019-01-31 12:06:21.281 54425 DEBUG docker.utils.config [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Trying paths: ['/var/lib/zun/.docker/config.json', '/var/lib/zun/.dockercfg'] find_config_file /usr/lib/python2.7/site-packages/docker/utils/config.py:21
2019-01-31 12:06:21.281 54425 DEBUG docker.utils.config [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] No config file found find_config_file /usr/lib/python2.7/site-packages/docker/utils/config.py:28
2019-01-31 12:06:21.282 54425 DEBUG docker.utils.config [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Trying paths: ['/var/lib/zun/.docker/config.json', '/var/lib/zun/.dockercfg'] find_config_file /usr/lib/python2.7/site-packages/docker/utils/config.py:21
2019-01-31 12:06:21.282 54425 DEBUG docker.utils.config [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] No config file found find_config_file /usr/lib/python2.7/site-packages/docker/utils/config.py:28
2019-01-31 12:06:21.283 54425 DEBUG zun.container.docker.driver [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] Reading local tar image /var/lib/zun/images/a325694d-4df6-4505-bade-d8833a9c7d5e.tar read_tar_image /usr/lib/python2.7/site-packages/zun/container/docker/driver.py:247
2019-01-31 12:06:21.286 54425 WARNING zun.compute.manager [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5...

Revision history for this message
hezhiqiang (hezhiqiang) wrote :
Download full text (7.1 KiB)

controller node error log:
2019-01-31 12:06:03.174 70855 DEBUG oslo_db.sqlalchemy.engines [req-46a0b894-19ce-47bb-8246-47892fa63af1 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-01-31 12:06:23.411 70846 DEBUG oslo_db.sqlalchemy.engines [req-9b243c7b-195c-469a-a694-e1921defaa10 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-01-31 12:06:24.724 70850 DEBUG oslo_db.sqlalchemy.engines [req-f9afb6c5-c4e5-4c3b-9bf2-31a70d5bfe3c 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-01-31 12:06:24.749 70847 DEBUG oslo_db.sqlalchemy.engines [req-db1e716c-f97e-41cf-b8e7-9ae2c79f3087 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-01-31 12:06:28.850 70862 DEBUG oslo_db.sqlalchemy.engines [req-cc230d2b-dcd2-4d5f-8b44-3946ff3c7050 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-01-31 12:07:25.580 70861 DEBUG oslo_db.sqlalchemy.engines [req-3881b224-9b0d-4f0f-9ad1-2de13f54494e 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-01-31 12:07:25.582 70858 DEBUG oslo_db.sqlalchemy.engines [req-f4df42f9-0561-4401-946b-2a8ca05a21ac 3740ad4ed7ed40d68719c94f26fd0c5e 9d854a5d11884c84bfd5fdce35eeaae4 default - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-01-31 12:0...


Revision history for this message
hongbin (hongbin034) wrote :

I couldn't find any information about the volume attachment in comment #20. It looks like the log is incomplete. Would you double-check?

Revision history for this message
hongbin (hongbin034) wrote :

For example, the log should contain a message like "Attach volume to this server successfully".
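Something like the following should surface the relevant lines (the log file path is an assumption; adjust it to wherever zun-compute writes its log):

$ grep -E 'Attach volume|cinder_workflow' /var/log/zun/zun-compute.log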

Revision history for this message
Akihiro Motoki (amotoki) wrote :

This looks unrelated to horizon. Marking the horizon task as Invalid.

Changed in horizon:
status: New → Invalid
Revision history for this message
hezhiqiang (hezhiqiang) wrote :

controller zun-api logs

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

compute zun-compute log

Revision history for this message
hongbin (hongbin034) wrote :

According to the logs, everything seems to be fine. If you run this command before and after creating the container, what do you see?

  $ mount | grep zun

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

Does integrating cinder require any special configuration?

Revision history for this message
hongbin (hongbin034) wrote :

Not sure what you mean by "special configuration", but we try to document the configuration steps in the installation guide. If you find anything that is missing, please let us know.

During the cinder volume attachment, what Zun does is call the cinder API to complete the attach workflow. Zun uses os-brick to connect to the cinder volume, mounts the volume to a path on the compute host, then bind-mounts that host path into the container. This is pretty much the same as how a cinder volume is attached to a VM.
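Roughly, for an iSCSI-backed volume like the one in your log, the sequence corresponds to something like the following. This is only an illustrative sketch of the workflow, not the exact commands Zun runs; the target, portal, device and paths are taken from the log in comment #11 and your zun.conf.

# os-brick logs in to the iSCSI target returned by cinder
$ iscsiadm -m node -T iqn.2010-10.org.openstack:volume-a33190ea-7383-4534-8410-b6f5ccdde4ca -p 172.31.200.103:3260 --login

# the volume appears as a block device, which Zun mounts under volume_dir
$ mount /dev/sdd /var/lib/zun/mnt/<volume-uuid>

# the host path is then bind-mounted into the container, e.g. via docker
$ docker run -v /var/lib/zun/mnt/<volume-uuid>:/data nginx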

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

Let me find out why

# pip list | grep -i os-brick
os-brick (2.3.3)

# rpm -qa | grep -i os-brick
python2-os-brick-2.3.3-1.el7.noarch

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

I had this problem with the Queens release of OpenStack, but I then deployed a Rocky environment that does not have this problem.

# mount | grep -i zun
/dev/sdb on /var/lib/zun/mnt/f3bb0e67-2eb0-406d-981f-2b66ea7a827f type ext4 (rw,relatime,stripe=16,data=ordered)

Revision history for this message
hezhiqiang (hezhiqiang) wrote :

The root cause of the problem was found:
I had added this setting to the /etc/systemd/system/zun-compute.service unit file:

vim /etc/systemd/system/zun-compute.service
[Service]
PrivateTmp=true
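PrivateTmp=true makes systemd run the service in its own mount namespace, so the mount of the cinder device onto /var/lib/zun/mnt/<uuid> is visible only to zun-compute itself; dockerd, running in the host mount namespace, bind-mounts the plain, empty host directory into the container instead. A sketch of the fix, assuming the unit file above, is to drop that option (or set it to false) and restart the service:

vim /etc/systemd/system/zun-compute.service
[Service]
PrivateTmp=false

$ systemctl daemon-reload
$ systemctl restart zun-compute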

Revision history for this message
Akihiro Motoki (amotoki) wrote :

Per comment #32, it turns out horizon is not involved. I am removing horizon from the affected projects to avoid bug mail that is uninteresting from a horizon developer's perspective.

no longer affects: horizon
Revision history for this message
hezhiqiang (hezhiqiang) wrote :

ok

Changed in zun:
status: New → Fix Released