2014-07-01 22:19:54 |
Mitsuhiro Tanino |
bug |
|
|
added bug |
2014-07-01 22:20:29 |
Mitsuhiro Tanino |
description |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue
direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/> .........### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
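The flag check above can be automated. Here is a minimal sketch in Python, assuming the x86 Linux value O_DIRECT = 0o40000 and the octal format of the `flags:` field in /proc/&lt;pid&gt;/fdinfo (as documented in proc(5)); the helper name is illustrative:

```python
O_DIRECT = 0o40000  # x86 Linux value; this constant is architecture-dependent

def has_o_direct(fdinfo_flags: str) -> bool:
    """Return True if the octal 'flags:' value from /proc/<pid>/fdinfo/<fd>
    has the O_DIRECT bit set."""
    return int(fdinfo_flags, 8) & O_DIRECT != 0

# Values observed in this report:
print(has_o_direct("02140002"))  # qemu fd on the compute node -> True
print(has_o_direct("0100002"))   # tgtd fd on the control node -> False
```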
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
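The empty "Backing store flags:" line above is the symptom: tgtd opened the backing store without any open flags, so no O_DIRECT. As a sketch of the direction a fix could take (this fragment is illustrative; the exact option name and placement should be checked against the tgt/targets.conf documentation), the target definition could request direct I/O on the backing store:

```
<target iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5>
    backing-store /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
    bsoflags direct
</target>
```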
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue
direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/> .........### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
|
2014-07-01 22:20:47 |
Mitsuhiro Tanino |
description |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue
direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/> .........### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue
direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/> .........### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
|
2014-07-01 22:21:59 |
Mitsuhiro Tanino |
description |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue
direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/> .........### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
######### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
|
2014-07-01 22:22:04 |
Mitsuhiro Tanino |
cinder: assignee |
|
Mitsuhiro Tanino (mitsuhiro-tanino) |
|
2014-07-01 22:23:00 |
Mitsuhiro Tanino |
description |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
######### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached on the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes data loss for guest instances.
I will propose a fix for this issue.
Here are the test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens device file using cache='none'. This means instance can issue direct I/O from guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
######### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Confirm, via the file descriptor, whether the device is opened with O_DIRECT on the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> Device file of iscsi cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> "fd18" is infomation of /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags is "02140002". O_DIRECT flag is "0x40000". This flag is raised at compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm iscsi target status and exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> "fd11" is infomation of /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags is "0100002". O_DIRECT flag is "0x40000". This flag is not raised at control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
|
2014-07-01 23:03:05 |
Mitsuhiro Tanino |
bug |
|
|
added subscriber Tomoki Sekiyama |
2014-07-03 23:10:38 |
OpenStack Infra |
cinder: status |
New |
In Progress |
|
2014-07-22 02:04:26 |
OpenStack Infra |
cinder: status |
In Progress |
Fix Committed |
|
2014-07-24 12:28:20 |
Russell Bryant |
cinder: status |
Fix Committed |
Fix Released |
|
2014-07-24 12:28:20 |
Russell Bryant |
cinder: milestone |
|
juno-2 |
|
2014-10-16 09:14:36 |
Thierry Carrez |
cinder: milestone |
juno-2 |
2014.2 |
|
2015-07-14 22:01:23 |
Billy Olsen |
bug task added |
|
cinder (Ubuntu) |
|
2015-07-14 22:01:57 |
Billy Olsen |
cinder (Ubuntu): assignee |
|
Billy Olsen (billy-olsen) |
|
2015-07-14 23:08:34 |
Billy Olsen |
description |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached at the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes a data loss problem for guest instances.
I will propose a fix for this issue.
Here are test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon
services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens the device file with cache='none'. This means the instance can issue direct I/O from the guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
######### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Check the file descriptor to confirm whether the device is opened with O_DIRECT at the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> The device file of the iSCSI cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> fd 18 corresponds to /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags value is "02140002". O_DIRECT is 040000 (octal), so the flag is set on the compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm the iSCSI target status and the exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> fd 11 corresponds to /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags value is "0100002". O_DIRECT (octal 040000) is not set on the control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino |
I found a problem where the LVMiSCSI driver can't issue direct I/O through tgtd to an iSCSI volume.
In the current implementation, qemu-kvm opens the device (storage volume) with cache='none' on the nova side; however, tgtd opens the device without "--bsoflags direct" on the cinder side.
Therefore, I/O from guest instances is cached at the control node even though the compute node issues O_DIRECT I/O to the iSCSI volume.
As a result, if the control node crashes, the cached data will be lost. This causes a data loss problem for guest instances.
I will propose a fix for this issue.
Here are test environment and confirmation results.
(1) Control node
- Control node has nova, cinder(c-sch, c-api, c-vol), glance, horizon
services.
- Use LVMiSCSI driver for cinder backend.
(2) Compute node has n-cpu and n-net services.
[Confirmation at compute node]
On a compute node, qemu opens the device file with cache='none'. This means the instance can issue direct I/O from the guest to the device.
[root@compute ~]# cat /etc/libvirt/qemu/instance-0000000b.xml
<domain type='kvm'>
<name>instance-0000000b</name>
<uuid>9e1eb5cc-4c40-4023-bca5-a7d1720c6f51</uuid>
....
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
######### Open the device without cache.
<source dev='/dev/disk/by-path/ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1'/>
<target dev='vdc' bus='virtio'/>
<serial>fdd23217-6e95-4aee-a586-6ef174567ba5</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
....
</domain>
Check the file descriptor to confirm whether the device is opened with O_DIRECT at the compute node.
=> qemu Process ID is "24836"
[root@compute ~]# ps uax | grep qemu
root 11421 0.0 0.0 112672 912 pts/6 S+ 17:13 0:00 grep --color=auto qemu
qemu 24836 13.8 16.0 4638484 1312668 ? Sl Jun29 462:12 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name instance-0000000b .....
=> The device file of the iSCSI cinder volume is "/dev/sde".
[root@compute ~]# ls -la /dev/disk/by-path/
.....
-rw-r--r-- 1 root root 349525333 Jun 27 10:01 ip-10.16.42.67
lrwxrwxrwx 1 root root 9 Jun 30 00:16 ip-10.16.42.67:3260-iscsi-iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5-lun-1 -> ../../sde
=> fd 18 corresponds to /dev/sde
[root@compute ~]# ls -la /proc/24836/fd
total 0
dr-x------ 2 qemu qemu 0 Jun 29 09:40 .
dr-xr-xr-x 9 qemu qemu 0 Jun 29 09:31 ..
.....
lrwx------ 1 qemu qemu 64 Jun 29 09:40 18 -> /dev/sde
=> The flags value is "02140002". O_DIRECT is 040000 (octal), so the flag is set on the compute node side.
[root@compute ~]# cat /proc/24836/fdinfo/18
pos: 10737418240
flags: 02140002
[Confirmation at control node]
Confirm the iSCSI target status and the exported disk.
Backing store path is /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
[mtanino@control ~]$ sudo tgt-admin -s
.....
Target 3: iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5
System information:
Driver: iscsi
...
LUN: 1
...
Backing store path: /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
Backing store flags:
Account information:
ACL information:
ALL
=> Confirm the device-mapper file of the backing store
[mtanino@control ~]$ ls -la /dev/disk/by-id/ | grep stack
lrwxrwxrwx 1 root root 10 Jun 30 00:16 dm-name-stack--volumes-volume--fdd23217--6e95--4aee--a586--6ef174567ba5 -> ../../dm-0
=> tgtd Process ID is "31010"
[mtanino@control ~]$ ps aux | grep tgtd
root 31010 2.0 0.0 476584 900 ? Ssl Jun30 52:27 /usr/sbin/tgtd -f
=> fd 11 corresponds to /dev/dm-0
[mtanino@control ~]$ sudo ls -la /proc/31010/fd
...
lrwx------ 1 root root 64 Jul 1 16:11 11 -> /dev/dm-0
lrwx------ 1 root root 64 Jul 1 16:11 12 -> /dev/sdb2
...
=> The flags value is "0100002". O_DIRECT (octal 040000) is not set on the control node side.
[mtanino@control ~]$ sudo cat /proc/31010/fdinfo/11
pos: 0
flags: 0100002
Regards,
Mitsuhiro Tanino
========================================================================
[Impact]
* Data loss may occur because there is no way to select write-through
caching (write-cache off) instead of write-back caching (write-cache on)
for iSCSI targets.
[Test Case]
* Configure Cinder to use LVMiSCSIDriver
* Create cinder volume (cinder create --display-name foo 1G)
* Attach volume to nova instance (nova volume-attach my-instance <vol-uuid>)
* Observe the write-cache policy specified per cinder volume, found in:
- /var/lib/cinder/volumes/volume-<uuid>
* Observe above information (detailed by Mitsuhiro)
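The per-volume files under /var/lib/cinder/volumes/ are tgt target definitions, so the write-cache check in the test case can be sketched programmatically. This is an illustrative sketch only: the `bsoflags direct` line follows the tgt targets.conf convention, the sample target is taken from the report above, and the helper name is hypothetical, not the exact fix that was merged.

```python
# Hypothetical sketch: scan a tgt per-volume target definition for a
# "bsoflags ... direct" setting, which asks tgtd to open the backing
# store with O_DIRECT.
import re

SAMPLE_CONFIG = """\
<target iqn.2010-10.org.openstack:volume-fdd23217-6e95-4aee-a586-6ef174567ba5>
    backing-store /dev/stack-volumes/volume-fdd23217-6e95-4aee-a586-6ef174567ba5
    bsoflags direct
</target>
"""

def uses_direct_io(config_text: str) -> bool:
    """Return True if the target config requests O_DIRECT on the backing store."""
    return re.search(r"^\s*bsoflags\s+.*\bdirect\b", config_text, re.M) is not None

print(uses_direct_io(SAMPLE_CONFIG))  # -> True
```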
[Regression Potential]
* Low risk of regression, as the feature is enabled through a
configuration option whose default value preserves the original behavior. |
|
2015-07-14 23:08:43 |
Billy Olsen |
tags |
|
sts |
|
2015-07-14 23:11:45 |
Billy Olsen |
attachment added |
|
Patch for trusty-icehouse version. https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1336568/+attachment/4429015/+files/lp1336568-trusty.debdiff |
|
2015-07-14 23:16:16 |
Billy Olsen |
bug |
|
|
added subscriber Ubuntu Sponsors Team |
2015-07-23 12:47:51 |
Launchpad Janitor |
cinder (Ubuntu): status |
New |
Confirmed |
|
2015-08-04 09:31:01 |
Michael Cunningham |
bug |
|
|
added subscriber Michael Cunningham |
2015-08-12 17:20:24 |
Billy Olsen |
branch linked |
|
lp:~billy-olsen/cinder/icehouse-1336568 |
|
2015-09-02 13:14:38 |
Chris J Arges |
nominated for series |
|
Ubuntu Trusty |
|
2015-09-02 13:14:38 |
Chris J Arges |
bug task added |
|
cinder (Ubuntu Trusty) |
|
2015-09-04 20:39:42 |
Brian Murray |
cinder (Ubuntu Trusty): status |
New |
Fix Committed |
|
2015-09-04 20:39:45 |
Brian Murray |
bug |
|
|
added subscriber Ubuntu Stable Release Updates Team |
2015-09-04 20:39:47 |
Brian Murray |
bug |
|
|
added subscriber SRU Verification |
2015-09-04 20:39:51 |
Brian Murray |
tags |
sts |
sts verification-needed |
|
2015-09-04 20:44:00 |
Brian Murray |
removed subscriber Ubuntu Sponsors Team |
|
|
|
2015-09-14 22:44:01 |
Billy Olsen |
tags |
sts verification-needed |
sts verification-done |
|
2015-09-16 16:10:39 |
Chris J Arges |
cinder (Ubuntu): status |
Confirmed |
Fix Released |
|
2015-09-16 16:11:47 |
Chris J Arges |
cinder (Ubuntu Trusty): assignee |
|
Billy Olsen (billy-olsen) |
|
2015-09-16 16:12:02 |
Chris J Arges |
removed subscriber Ubuntu Stable Release Updates Team |
|
|
|
2015-09-16 16:22:04 |
Launchpad Janitor |
cinder (Ubuntu Trusty): status |
Fix Committed |
Fix Released |
|