Mounting cinder storage: the dashboard shows the mount was successful, but it turns out the local store is what actually gets mounted
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Zun | Fix Released | Undecided | Unassigned |
Bug Description
1. Dashboard: Error: Unable to retrieve attachment information.
2. My configuration:
vim /etc/zun/zun.conf
[volume]
driver = cinder
volume_dir = /var/lib/zun/mnt
3. Once the container is up, a directory is generated at /var/lib/zun/mnt/
/var/lib/
4. After deleting the container, the /var/
hezhiqiang (hezhiqiang) wrote : | #1 |
hezhiqiang (hezhiqiang) wrote : | #2 |
The data for the mounted directory exists:
hongbin (hongbin034) wrote : | #3 |
Hi @mustang,
Two questions from me:
* How did you deploy Zun (i.e. kolla, devstack, or manual install)?
* Which version of Zun were you using (i.e. master, stable/rocky, or stable/queens)?
hezhiqiang (hezhiqiang) wrote : | #4 |
manual install && master
version:
zun 2.1.1.dev203
python-zunclient 3.0.0
hezhiqiang (hezhiqiang) wrote : | #5 |
The data is not stored in the cinder volume at all; it is stored in /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7
hongbin (hongbin034) wrote : | #6 |
Could you check if '/var/lib/
In particular, if you type:
$ mount | grep 8bb97e08-
You should see something like:
/dev/sdc on /opt/stack/
That means the behavior is correct. The cinder volume device '/dev/xxx' is mounted to the host's path '/opt/stack/
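A quick way to double-check (the path below assumes the volume_dir from the zun.conf above plus the volume UUID from comment #5; adjust to your environment):
$ findmnt /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7
$ mount | grep 8bb97e08
# If neither command prints anything, the directory is just a plain local
# directory and writes end up on the host's root disk instead of the cinder volume.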
hezhiqiang (hezhiqiang) wrote : | #7 |
hezhiqiang (hezhiqiang) wrote : | #8 |
hezhiqiang (hezhiqiang) wrote : | #9 |
openstack appcontainer run --name container --net network=
hongbin (hongbin034) wrote : | #10 |
Thanks for providing the information.
I saw you were using the kata runtime. To further locate the problem, could you run the same command with the default (runc) runtime? Or does the error only happen when using the kata runtime?
In addition, could you provide the zun-compute log?
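For the runc-vs-kata comparison, something like the following could be used, assuming the client supports the --runtime and --mount options (verify against `openstack appcontainer run --help`; names and flags here are illustrative, not taken from the original report):
$ openstack appcontainer run --name test-runc --runtime runc \
    --mount source=<volume-name-or-id>,destination=/data cirros sleep 3600
$ openstack appcontainer run --name test-kata --runtime <kata-runtime-name> \
    --mount source=<volume-name-or-id>,destination=/data cirros sleep 3600
# Then check inside each container whether /data is backed by the cinder volume.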
hezhiqiang (hezhiqiang) wrote : | #11 |
2019-01-28 14:10:40.875 72868 INFO zun.container.
2019-01-28 14:11:15.483 72868 INFO zun.compute.manager [req-aa3d6102-
2019-01-28 14:11:15.602 72868 INFO oslo.privsep.daemon [req-aa3d6102-
2019-01-28 14:11:16.873 72868 INFO oslo.privsep.daemon [req-aa3d6102-
2019-01-28 14:11:16.804 69748 INFO oslo.privsep.daemon [-] privsep daemon starting
2019-01-28 14:11:16.809 69748 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2019-01-28 14:11:16.814 69748 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_
2019-01-28 14:11:16.814 69748 INFO oslo.privsep.daemon [-] privsep daemon running as pid 69748
2019-01-28 14:11:21.046 72868 INFO zun.volume.
2019-01-28 14:11:21.048 72868 INFO os_brick.
2019-01-28 14:11:22.016 72868 WARNING os_brick.
2019-01-28 14:11:26.646 72868 INFO zun.volume.
2019-01-28 14:11:26.898 72868 INFO zun.volume.
2019-01-28 14:11:26.948 72868 INFO oslo.privsep.d...
hezhiqiang (hezhiqiang) wrote : | #12 |
hezhiqiang (hezhiqiang) wrote : | #13 |
- 5BEBD41E-FADC-47d5-BED9-D5E9CA631BDF.png (35.1 KiB, image/png)
Refreshing the browser reports an error; if I delete the container, the error goes away.
hongbin (hongbin034) wrote : | #14 |
@mustang,
I see the problem now. Since horizon assumes a cinder volume must be attached to a nova instance, it tried to locate the nova instance from the volume attachment and failed. The error you were seeing comes from that request: nova returned a 404 response for the instance lookup.
For now, you can disregard the error since it is not a Zun issue (I will add horizon to this report so that someone from horizon might help fix the volume panel). I didn't see any error in the zun-compute log, so it seems the cinder volume was successfully attached to the container.
If you run a container with a cinder volume bind-mounted, write some data to the volume, then delete and re-create the container with the same volume, is the data still there? If it is, things are working as expected.
Changed in zun:
status: New → Won't Fix
lxm (lxm-xupt) wrote : | #15 |
We did the following test:
1. Use the following command to create a container:
zun run -i --name container --net network=
2. Execute `echo "test" >> /data/test.txt` in the container, then delete the container.
3. Reuse the command in step 1 to create a new container and execute `ls /data` in the new container; there was nothing under the `/data` folder (a scripted version of this whole check is sketched below).
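A minimal scripted version of the persistence check, assuming a cinder volume named test-vol and the cirros image (the names and the --mount syntax are placeholders; adjust to your environment):
$ openstack volume create --size 1 test-vol
$ zun run -i --name c1 --mount source=test-vol,destination=/data cirros sh
# inside the container: echo "test" >> /data/test.txt ; exit
$ zun delete --force c1
$ zun run -i --name c2 --mount source=test-vol,destination=/data cirros sh
# inside the new container: ls /data   -> test.txt should still be listed
# If test.txt is missing, the bind-mount went to a local directory rather than the cinder volume.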
hongbin (hongbin034) wrote : | #16 |
This is strange. I will take a closer look.
Changed in zun:
status: Won't Fix → New
hongbin (hongbin034) wrote : | #17 |
This is exactly what we tested in the CI: https:/
I tried to reproduce the issue locally but no luck. Will give it more thought.
hongbin (hongbin034) wrote : | #18 |
Hi @hzq,
The log you provided in comment #11 doesn't contain debug information. Would you turn on debugging (in /etc/zun/zun.conf, set debug = True under [DEFAULT]) and capture the log again?
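The change would look like this in the config file, followed by a restart of the zun-compute service so the new log level takes effect:
vim /etc/zun/zun.conf
[DEFAULT]
debug = True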
hezhiqiang (hezhiqiang) wrote : | #19 |
ok
hezhiqiang (hezhiqiang) wrote : | #20 |
compute node log:
2019-01-31 12:06:19.763 54425 DEBUG oslo_concurrenc
2019-01-31 12:06:19.775 54425 DEBUG zun.image.
2019-01-31 12:06:21.268 54425 DEBUG zun.image.
2019-01-31 12:06:21.281 54425 DEBUG docker.utils.config [req-46a0b894-
2019-01-31 12:06:21.281 54425 DEBUG docker.utils.config [req-46a0b894-
2019-01-31 12:06:21.282 54425 DEBUG docker.utils.config [req-46a0b894-
2019-01-31 12:06:21.282 54425 DEBUG docker.utils.config [req-46a0b894-
2019-01-31 12:06:21.283 54425 DEBUG zun.container.
2019-01-31 12:06:21.286 54425 WARNING zun.compute.manager [req-46a0b894-
hezhiqiang (hezhiqiang) wrote : | #21 |
controller node error log:
2019-01-31 12:06:03.174 70855 DEBUG oslo_db.
2019-01-31 12:06:23.411 70846 DEBUG oslo_db.
2019-01-31 12:06:24.724 70850 DEBUG oslo_db.
2019-01-31 12:06:24.749 70847 DEBUG oslo_db.
2019-01-31 12:06:28.850 70862 DEBUG oslo_db.
2019-01-31 12:07:25.580 70861 DEBUG oslo_db.
2019-01-31 12:07:25.582 70858 DEBUG oslo_db.
2019-01-31 12:0...
hongbin (hongbin034) wrote : | #22 |
I couldn't find any information about volume attachment in comment #20. It looks like the log is incomplete. Would you double-check?
hongbin (hongbin034) wrote : | #23 |
For example, the log should contain message like "Attach volume to this server successfully"
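A quick way to check is to grep the compute log for the attach message (the log file path below is an assumption; adjust it to wherever zun-compute logs on your host):
$ grep -i "attach volume" /var/log/zun/zun-compute.log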
Akihiro Motoki (amotoki) wrote : | #24 |
It looks unrelated to horizon. Marking horizon as Invalid.
Changed in horizon:
status: New → Invalid
hezhiqiang (hezhiqiang) wrote : | #25 |
hezhiqiang (hezhiqiang) wrote : | #26 |
hongbin (hongbin034) wrote : | #27 |
According to the logs, everything seems to be fine. If you type this command before and after creating the container, what do you see?
$ mount | grep zun
hezhiqiang (hezhiqiang) wrote : | #28 |
Does integrating cinder require special configuration?
hongbin (hongbin034) wrote : | #29 |
Not sure what you mean by "special configuration", but we try to document the configuration steps in the installation guide. If you find anything that is missing, please let us know.
During the cinder volume attachment, what Zun does is call the cinder API to complete the attach workflow. Zun uses os-brick to connect the cinder volume, mounts the volume to a path on the compute host, then bind-mounts the host path into the container. This is pretty much the same as attaching a cinder volume to a VM.
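Roughly, the steps Zun drives are equivalent to the following manual sequence (device name, paths, and IDs below are illustrative only, not taken from this report):
# 1. zun-compute calls the cinder API to reserve and attach the volume to this host.
# 2. os-brick connects the volume, which shows up as a local block device, e.g. /dev/sdb.
# 3. Zun mounts that device under volume_dir:
$ mount /dev/sdb /var/lib/zun/mnt/<volume-uuid>
# 4. The host path is then bind-mounted into the container, conceptually like:
$ docker run -v /var/lib/zun/mnt/<volume-uuid>:/data <image>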
hezhiqiang (hezhiqiang) wrote : | #30 |
Let me find out why
# pip list | grep -i os-brick
os-brick (2.3.3)
# rpm -qa | grep -i os-brick
python2-
hezhiqiang (hezhiqiang) wrote : | #31 |
I had this problem with the queens version of OpenStack, but then I deployed a rocky OpenStack environment that didn't have this problem.
# mount | grep -i zun
/dev/sdb on /var/lib/
hezhiqiang (hezhiqiang) wrote : | #32 |
The root cause of the problem was found:
I added this configuration in the /etc/systemd/
vim /etc/systemd/
[Service]
PrivateTmp=true
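PrivateTmp=true makes systemd run the service in its own mount namespace, so a mount performed by that service never becomes visible to the rest of the host; that would explain why the container ended up writing to a plain local directory instead of the cinder volume. A sketch of the fix, assuming the setting is simply disabled (the exact unit file name is truncated above, so <service> is a placeholder):
$ sed -i 's/^PrivateTmp=true/PrivateTmp=false/' /etc/systemd/system/<service>.service
$ systemctl daemon-reload
$ systemctl restart <service>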
Akihiro Motoki (amotoki) wrote : | #33 |
Per comment #32, it turns out horizon is not related. I am removing horizon from the affected projects to avoid uninteresting bug mail from the horizon developers' perspective.
no longer affects: horizon
hezhiqiang (hezhiqiang) wrote : | #34 |
ok
Changed in zun:
status: New → Fix Released