Comment 6 for bug 1507504

TatyanaGladysheva (tgladysheva) wrote:

Verified on MOS 7.0 + MU4 updates.

Steps to verify:
1. Deploy an environment with Ceph (in my case: 3 controllers + 2 computes with Ceph-OSD).
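
As an optional sanity check before proceeding, the Ceph cluster status can be confirmed on a controller:
root@node-3:~# ceph -s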

2. On a controller node, set I/O limits on the m1.tiny flavor:
nova flavor-key m1.tiny set quota:disk_read_bytes_sec=10240000
nova flavor-key m1.tiny set quota:disk_write_bytes_sec=10240000
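
Note: nova flavor-key accepts multiple key=value pairs, so both limits can equivalently be set in a single invocation, e.g.:
nova flavor-key m1.tiny set quota:disk_read_bytes_sec=10240000 quota:disk_write_bytes_sec=10240000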

3. Check that the limits were applied to the m1.tiny flavor:
root@node-3:~# nova flavor-show m1.tiny | grep extra_specs
| extra_specs | {"quota:disk_read_bytes_sec": "10240000", "quota:disk_write_bytes_sec": "10240000"} |

4. Create an instance with this flavor:
root@node-3:~# nova boot --image TestVM --flavor m1.tiny --nic net-id=<net-id> test
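
To find the compute node hosting the instance (needed in the next step), the host can be read from the instance details, e.g.:
root@node-3:~# nova show test | grep OS-EXT-SRV-ATTR:host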

5. On the compute node hosting the created instance, check the instance's XML file (located in /etc/libvirt/qemu).
This file contains an rbd disk section with the I/O limits:
      <iotune>
        <read_bytes_sec>10240000</read_bytes_sec>
        <write_bytes_sec>10240000</write_bytes_sec>
      </iotune>
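
The same section can also be checked in the live domain XML via virsh (the libvirt domain name, e.g. instance-00000001, is shown by virsh list):
root@node-4:~# virsh dumpxml <domain-name> | grep -A 3 '<iotune>'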

6. root@node-4:~# ps aux | grep qemu
In the process output, pay attention to the following -drive argument:
-drive file=rbd:compute/d6632b79-4010-4ea7-b473-b6f95e2371d6_disk:id=compute:key=AQDs3E5XeNpIHhAA8nwV/llxt9oTatGBLGEHMQ==:auth_supported=cephx\;none:mon_host=10.109.17.3\:6789\;10.109.17.4\:6789\;10.109.17.5\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback,bps_rd=10240000,bps_wr=10240000

So, the I/O limits are present in the process arguments: bps_rd=10240000,bps_wr=10240000.
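
As an additional check, libvirt can report the effective limits for the disk directly (assuming the disk target is vda, which matches drive-virtio-disk0 above):
root@node-4:~# virsh blkdeviotune <domain-name> vda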

All steps were also performed successfully for the other I/O limit keys (a sketch of setting the total-limit keys follows the list):
disk_read_iops_sec
disk_write_iops_sec
disk_total_bytes_sec
disk_total_iops_sec
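
For reference, a sketch of setting the total-limit keys (the per-direction keys are unset first, since libvirt does not allow total_* and the corresponding read_*/write_* limits on the same disk at the same time):
nova flavor-key m1.tiny unset quota:disk_read_bytes_sec
nova flavor-key m1.tiny unset quota:disk_write_bytes_sec
nova flavor-key m1.tiny set quota:disk_total_bytes_sec=10240000 quota:disk_total_iops_sec=100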