Rbd backend doesn't support disk IO qos

Bug #1507504 reported by Andriy Kurilin
Affects             Status        Importance  Assigned to
Mirantis OpenStack  Invalid       High        MOS Nova
6.1.x               Fix Released  High        MOS Maintenance
7.0.x               Fix Released  High        Denis Meltsaykin
8.0.x               Invalid      High        MOS Nova

Bug Description

Honor https://wiki.openstack.org/wiki/InstanceResourceQuota#IO_limits by propagating
the extra_specs in the RBD sub-class as well.

Upstream bug-report: https://bugs.launchpad.net/nova/+bug/1405367
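
For context, the IO-limit quota keys on a flavor's extra_specs are meant to surface in the libvirt domain XML as an <iotune> element. A minimal sketch of that mapping, with an illustrative helper name (`iotune_xml` is not Nova's API):

```python
# Sketch of how flavor extra_specs quota keys map onto the libvirt
# <iotune> element. Names are illustrative, not Nova's actual code.

# The quota keys documented at InstanceResourceQuota#IO_limits.
IO_QUOTA_KEYS = (
    "disk_read_bytes_sec",
    "disk_write_bytes_sec",
    "disk_read_iops_sec",
    "disk_write_iops_sec",
    "disk_total_bytes_sec",
    "disk_total_iops_sec",
)

def iotune_xml(extra_specs):
    """Render an <iotune> snippet from a flavor's extra_specs dict."""
    lines = []
    for key in IO_QUOTA_KEYS:
        value = extra_specs.get("quota:" + key)
        if value is not None:
            # The libvirt element names drop the "disk_" prefix,
            # e.g. quota:disk_read_bytes_sec -> <read_bytes_sec>.
            element = key[len("disk_"):]
            lines.append("  <%s>%s</%s>" % (element, value, element))
    if not lines:
        return ""
    return "<iotune>\n%s\n</iotune>" % "\n".join(lines)
```

For example, `iotune_xml({"quota:disk_read_bytes_sec": "10240000"})` yields an `<iotune>` block containing `<read_bytes_sec>10240000</read_bytes_sec>`, matching the element seen in the verification steps below.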

Revision history for this message
Andriy Kurilin (andreykurilin) wrote :
Changed in mos:
assignee: Andrey Kurilin (andreykurilin) → nobody
status: In Progress → New
description: updated
tags: added: customer-found
Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Fix merged to openstack/nova (openstack-ci/fuel-6.1/2014.2)

Reviewed: https://review.fuel-infra.org/12911
Submitter: Vitaly Sedelnik <email address hidden>
Branch: openstack-ci/fuel-6.1/2014.2

Commit: cbf71d63b8bb658c0bbf20cc34ced6e08d48ff68
Author: StephenSun <email address hidden>
Date: Tue Oct 20 10:33:12 2015

libvirt: fix disk I/O QOS support with RBD

The disk I/O QOS settings were set in the libvirt_info
method of the Image class. While this is fine for most
subclasses, the Rbd subclass overrides this method and
so was losing the QOS settings. Move the setting of
QOS parameters into a separate method so it can be
called in all places that need it. For added fun the
commit 86e6f34 which added QOS settings originally never
added any unit tests to cover its operation.

Conflicts:
        nova/tests/unit/virt/libvirt/test_imagebackend.py

Co-authored: Daniel P. Berrange <email address hidden>
Closes-Bug: #1507504
Change-Id: Ibb3a4dff8996c29ef921be7c56648a442bbb89a2
(cherry picked from commit fdb7b030abbff85d19581c3f2b0ba68683fd8d6f)
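
The shape of the fix described in the commit message can be sketched as follows: the base class applied the QOS tuning inside `libvirt_info`, the Rbd override replaced that method without reapplying it, and the cure is to factor the tuning into a shared helper that every override calls. This is a simplified illustration, not Nova's actual classes:

```python
# Simplified illustration of the bug and its fix; class and method
# names only loosely mirror nova's libvirt image backend.

class Image(object):
    def disk_qos(self, conf, extra_specs):
        # The factored-out helper: copy quota:disk_* tunables onto the
        # disk config so libvirt emits an <iotune> element for them.
        for spec, value in extra_specs.items():
            scope = spec.split(":")
            if len(scope) == 2 and scope[0] == "quota" and scope[1].startswith("disk_"):
                setattr(conf, scope[1], int(value))

    def libvirt_info(self, conf, extra_specs):
        conf.source_type = "file"
        self.disk_qos(conf, extra_specs)  # base path applies QOS
        return conf

class Rbd(Image):
    def libvirt_info(self, conf, extra_specs):
        # Before the fix, this override returned without applying QOS,
        # silently dropping the I/O limits for Ceph-backed disks.
        conf.source_type = "network"
        self.disk_qos(conf, extra_specs)  # the fix: call the helper here too
        return conf
```

With the helper in place, both the file-backed and RBD-backed paths end up with the same `disk_read_bytes_sec`-style attributes on the disk config.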

Denis Puchkin (dpuchkin)
tags: added: on-verification
Revision history for this message
Denis Puchkin (dpuchkin) wrote :

Verified on 6.1 on Ubuntu.
Packages: nova
Version: 2014.2.2-1~u14.04+mos38

tags: removed: on-verification
Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :
Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Fix merged to openstack/nova (openstack-ci/fuel-7.0/2015.1.0)

Reviewed: https://review.fuel-infra.org/19939
Submitter: Denis V. Meltsaykin <email address hidden>
Branch: openstack-ci/fuel-7.0/2015.1.0

Commit: 4c7f793d2915d4aefabec61d62ec72e675fa54cb
Author: StephenSun <email address hidden>
Date: Thu Apr 21 16:33:16 2016

libvirt: fix disk I/O QOS support with RBD

The disk I/O QOS settings were set in the libvirt_info
method of the Image class. While this is fine for most
subclasses, the Rbd subclass overrides this method and
so was losing the QOS settings. Move the setting of
QOS parameters into a separate method so it can be
called in all places that need it. For added fun the
commit 86e6f34 which added QOS settings originally never
added any unit tests to cover its operation.

Co-authored: Daniel P. Berrange <email address hidden>
Closes-bug: #1507504

(cherry-picked from fdb7b030abbff85d19581c3f2b0ba68683fd8d6f)

Conflicts:
 nova/tests/unit/virt/libvirt/test_imagebackend.py

Change-Id: Ibb3a4dff8996c29ef921be7c56648a442bbb89a2

tags: added: on-verification
Revision history for this message
TatyanaGladysheva (tgladysheva) wrote :

Verified on MOS 7.0 + MU4 updates.

Steps to verify:
1. Deploy env with Ceph (in my case: 3 controllers + 2 computes with Ceph-OSD)

2. On controller node set I/O limits to m1.tiny flavor:
nova flavor-key m1.tiny set quota:disk_read_bytes_sec=10240000
nova flavor-key m1.tiny set quota:disk_write_bytes_sec=10240000

3. Check that limits were applied to flavor m1.tiny:
root@node-3:~# nova flavor-show m1.tiny | grep extra_specs
| extra_specs | {"quota:disk_read_bytes_sec": "10240000", "quota:disk_write_bytes_sec": "10240000"} |

4. Create instance with this flavor:
root@node-3:~# nova boot --image TestVM --flavor m1.tiny --nic net-id=<net-id> test

5. On the compute node hosting the created instance, check the instance XML file (located in /etc/libvirt/qemu).
This file should contain the rbd disk section with I/O limits:
      <iotune>
        <read_bytes_sec>10240000</read_bytes_sec>
        <write_bytes_sec>10240000</write_bytes_sec>
      </iotune>

6. root@node-4:~# ps axu | grep qemu
In the process output, pay attention to the following:
-drive file=rbd:compute/d6632b79-4010-4ea7-b473-b6f95e2371d6_disk:id=compute:key=AQDs3E5XeNpIHhAA8nwV/llxt9oTatGBLGEHMQ==:auth_supported=cephx\;none:mon_host=10.109.17.3\:6789\;10.109.17.4\:6789\;10.109.17.5\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback,bps_rd=10240000,bps_wr=10240000

So, the I/O limits are present in the process arguments: bps_rd=10240000,bps_wr=10240000.

All steps were also performed successfully for the other I/O limits:
disk_read_iops_sec
disk_write_iops_sec
disk_total_bytes_sec
disk_total_iops_sec
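
Step 6 above can be automated by parsing the comma-separated option string of the qemu -drive argument and extracting the throttle settings. A hypothetical helper (not part of any Nova or QEMU tooling):

```python
# Hypothetical helper for step 6: given the option string of a qemu
# -drive argument, return the I/O throttle options (bps_*/iops_*).

def drive_throttles(drive_arg):
    """Return {option: value} for bps/iops throttle options in a -drive string."""
    throttles = {}
    for opt in drive_arg.split(","):
        if "=" not in opt:
            continue
        key, _, value = opt.partition("=")
        # QEMU throttle options: bps, bps_rd, bps_wr, iops, iops_rd, iops_wr.
        if key.startswith(("bps", "iops")):
            throttles[key] = int(value)
    return throttles
```

Run against the -drive string from the ps output above, this would return `{"bps_rd": 10240000, "bps_wr": 10240000}`, confirming the limits were passed through to qemu.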

tags: removed: on-verification
tags: added: on-automation
Revision history for this message
TatyanaGladysheva (tgladysheva) wrote :
tags: removed: on-automation
tags: added: covered-automated-test