ceph caching enable
Bug #1361391 reported by Andrey Grebennikov
This bug affects 3 people
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Committed | High | Stanislav Makar | 6.0
Bug Description
It would be nice to have caching turned on for Ceph in case we place the VMs into it.
We need the following options added to ceph.conf:

[client]
rbd cache = true
rbd cache writethrough until flush = true

and the disk_cachemodes option added to nova.conf on the compute nodes.
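For reference, a minimal sketch of the corresponding nova.conf fragment on the compute nodes; the exact value and section are assumptions, not taken from this report ("network=writeback" is the cache mode librbd caching is commonly paired with, while "rbd cache writethrough until flush" guards against guests that never issue a flush):

[libvirt]
# Assumed example value: enable writeback caching for network-backed
# (rbd) disks; local file-backed disks keep their default cache mode.
disk_cachemodes = "network=writeback"

On releases where the libvirt driver options still live under [DEFAULT], the same line goes there instead.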
Changed in fuel:
importance: Undecided → Medium
assignee: nobody → Fuel Library Team (fuel-library)
milestone: none → 6.0

Changed in fuel:
status: New → Triaged
tags: added: ceph

Changed in fuel:
assignee: Fuel Library Team (fuel-library) → Stanislav Makar (smakar)

Changed in fuel:
status: Triaged → In Progress
Tried this out on current master 5.1 build (ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)):
{"build_id": "2014-08-26_21-42-16", "ostf_sha": "4dcd99cc4bfa19f52d4b87ed321eb84ff03844da", "build_number": "9", "auth_required": true, "api": "1.0", "nailgun_sha": "04e3f9d9ad3140cd63a9b5a1a302c03ebe64fd0a", "production": "docker", "fuelmain_sha": "74a97d500bb2fe9528f99771ccc2ec657ae3f76e", "astute_sha": "bc60b7d027ab244039f48c505ac52ab8eb0a990c", "feature_groups": ["experimental"], "release": "5.1", "fuellib_sha": "1e43ca00fe4fb05a485de4bea55bd00d16bda532"}
After changing ceph.conf on all nodes and nova.conf on the computes, no instances can spawn; they are stuck in 'BUILD: Spawning' for what appears to be forever.
ceph -w shows that the image was cloned from the layered parent in a short time:
2014-08-27 01:10:47.164236 mon.0 [INF] pgmap v880: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 0 B/s rd, 2235 B/s wr, 0 op/s
2014-08-27 01:10:57.208402 mon.0 [INF] pgmap v881: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 0 B/s rd, 2142 B/s wr, 0 op/s
2014-08-27 01:11:06.915590 mon.0 [INF] pgmap v882: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 0 B/s rd, 829 B/s wr, 0 op/s
2014-08-27 01:11:07.934096 mon.0 [INF] pgmap v883: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 9738 B/s rd, 2291 B/s wr, 13 op/s
2014-08-27 01:11:11.921437 mon.0 [INF] pgmap v884: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 22113 B/s rd, 4299 B/s wr, 31 op/s
2014-08-27 01:11:12.936762 mon.0 [INF] pgmap v885: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 16381 B/s rd, 2661 B/s wr, 23 op/s
2014-08-27 01:11:17.212523 mon.0 [INF] pgmap v886: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 14318 B/s rd, 3095 B/s wr, 19 op/s
2014-08-27 01:11:37.214665 mon.0 [INF] pgmap v887: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 0 B/s rd, 1855 B/s wr, 0 op/s
2014-08-27 01:11:42.214276 mon.0 [INF] pgmap v888: 7360 pgs: 7360 active+clean; 14419 MB data, 60243 MB used, 7082 GB / 7141 GB avail; 0 B/s rd, 2293 B/s wr, 0 op/s
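One way to confirm whether the librbd client inside qemu actually picked up the new cache settings (rather than inferring it from behaviour) is to query a client admin socket, assuming one is configured; the socket path below is an assumption, not taken from this report:

# Requires an 'admin socket = ...' entry in the [client] section of
# ceph.conf beforehand; the .asok path here is only an example.
root@compute:~# ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache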
It looks as if the disk was created correctly:
root@node-1:~# rbd info compute/2f0910b2-585d-4023-b6b7-1c029a992ea8_disk
rbd image '2f0910b2-585d-4023-b6b7-1c029a992ea8_disk':
        size 20480 MB in 2560 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.16952e13c42c
        format: 2
        features: layering
        parent: images/82e58933-0127-4665-8bc7-8e54279ffbd0@snap
        overlap: 4096 MB
root@node-1:~# rbd children images/82e58933-0127-4665-8bc7-8e54279ffbd0@snap
compute/147cfd3c-712a-4f98-94c7-d78a2059a564_disk
compute/2f0910b2-585d-4023-b6b7-1c029a992ea8_disk
I've verified that the data is correct on the spawned image as well:
root@node-1:~# rbd map compute/2f0910b2-585d-4023-b6b7-1c029a992ea8_disk
root@node-1:~# mkdir /mnt/linux
root@node-1:~# mount /dev/rbd/compute/2f0910b2-585d-4023-b6b7-1c029a992ea8_disk-part1 /mnt/linux
root@n...