Rally tests with snapshots failed at scale

Bug #1408632 reported by Ivan Kolodyazhny
Affects                   Status         Importance  Assigned to    Milestone
Mirantis OpenStack        Fix Committed  High        Anton Arefiev
Mirantis OpenStack 6.0.x  Fix Released   High        Anton Arefiev
Mirantis OpenStack 6.1.x  Fix Released   High        Anton Arefiev

Bug Description

Env: CentOS
  release: "6.0.1"
  api: "1.0"

create-and-attach-volume [864 iterations, 5 threads] - failure
create-snapshot-and-attach-volume [864 iterations, 5 threads] - failure
create-and-delete-snapshot [1728 iterations, 5 threads] - failure
create_nested_snapshots_and_attach_volume [384 iterations, 5 threads] - failure
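
For reference, a scenario like create-and-delete-snapshot is normally driven by a Rally task file along these lines (a minimal sketch; the file name and the volume size in the "volumes" context are illustrative, not taken from this report):

# Hypothetical task file mirroring the failing run (1728 iterations, 5 threads)
cat > create_and_delete_snapshot.json <<'EOF'
{
    "CinderVolumes.create_and_delete_snapshot": [
        {
            "args": {
                "force": false
            },
            "runner": {
                "type": "constant",
                "times": 1728,
                "concurrency": 5
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "volumes": {
                    "size": 1
                }
            }
        }
    ]
}
EOF
rally task start create_and_delete_snapshot.json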

Some logs:
http://paste.openstack.org/show/155970/
http://paste.openstack.org/show/155979/

Tags: cinder
Ivan Kolodyazhny (e0ne)
Changed in mos:
status: New → Confirmed
Ivan Kolodyazhny (e0ne)
Changed in mos:
status: Confirmed → In Progress
Ivan Kolodyazhny (e0ne) wrote:

Could be related to the upstream bug https://bugs.launchpad.net/cinder/+bug/1373513

Leontii Istomin (listomin) wrote:

I've reproduced the issue.

[root@fuel dump]# fuel --fuel-version
api: '1.0'
astute_sha: f7cda2171b0b677dfaeb59693d980a2d3ee4c3e0
auth_required: true
build_id: 2015-01-23_08-15-08
build_number: '43'
feature_groups:
- mirantis
fuellib_sha: 9aa913096fb93ea4847ee14bfaf33597326886f3
fuelmain_sha: 1ee1766a51bdb5bed75d5c2efdcaaa318118e439
nailgun_sha: 5c4d298bd9a702aafea486a9ea7a013cb64190ff
ostf_sha: 3b57985d4d2155510894a1f6d03b478b201f7780
production: docker
release: 6.0.1
release_versions:
  2014.2-6.0.1:
    VERSION:
      api: '1.0'
      astute_sha: f7cda2171b0b677dfaeb59693d980a2d3ee4c3e0
      build_id: 2015-01-23_08-15-08
      build_number: '43'
      feature_groups:
      - mirantis
      fuellib_sha: 9aa913096fb93ea4847ee14bfaf33597326886f3
      fuelmain_sha: 1ee1766a51bdb5bed75d5c2efdcaaa318118e439
      nailgun_sha: 5c4d298bd9a702aafea486a9ea7a013cb64190ff
      ostf_sha: 3b57985d4d2155510894a1f6d03b478b201f7780
      production: docker
      release: 6.0.1

Baremetal+Centos+HA+Neutron-gre+LVM+Debug+6.0.1_43
controllers: 3
computes: 97

I have run the Cinder Rally tests on env 10: http://mos-scale.vm.mirantis.net:8080/view/ENV-10/job/10_env_run_rally_custom/5/console

In the cinder-all log on a compute node I found:
<159>Jan 23 09:54:06 node-10 cinder-volume Error reported running lvremove: CMD: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvremove --config activation { retry_deactivation = 1} -f cinder/_snapshot-658240e8-4ab7-4ce3-9724-e1901d4a9e40, RESPONSE: Unable to deactivate open cinder_snapshot-658240e84ab74ce39724-e1901d4a9e40-cow (253:19)
Failed to activate _snapshot-658240e8-4ab7-4ce3-9724-e1901d4a9e40.
libdevmapper exiting with 1 device(s) still suspended.
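
The "Unable to deactivate open ... -cow" response means the snapshot's copy-on-write device still had a non-zero open count when lvremove tried to tear it down, so LVM could not deactivate it. A rough way to inspect and retry the teardown by hand (a sketch; the LV name comes from the log above, and the volume group is assumed to be "cinder", as in the log):

# Show device-mapper devices with their open counts; Open > 0 blocks deactivation
dmsetup info -c | grep 658240e8

# Retry the deactivation the same way cinder-volume does, then remove the LV
lvchange -an --config 'activation { retry_deactivation = 1 }' \
    cinder/_snapshot-658240e8-4ab7-4ce3-9724-e1901d4a9e40
lvremove -f cinder/_snapshot-658240e8-4ab7-4ce3-9724-e1901d4a9e40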

In the messages log file:
<6>Jan 23 10:15:46 node-10 kernel: lvremove D 0000000000000004 0 20053 20052 0x00000080
<4>Jan 23 10:15:46 node-10 kernel: ffff8807fe5efa18 0000000000000086 ffff8807fe5ef978 ffffffff8126b3e4
<4>Jan 23 10:15:46 node-10 kernel: ffff88080c002980 ffff88086d025500 ffff8807fe5ef9e8 ffffffffa000461c
<4>Jan 23 10:15:46 node-10 kernel: ffff8807fe5ef9d8 ffff8807fe5ef9d8 ffff88086d84b058 ffff8807fe5effd8
<4>Jan 23 10:15:46 node-10 kernel: Call Trace:
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff8126b3e4>] ? blk_unplug+0x34/0x70
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffffa000461c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff810aaa21>] ? ktime_get_ts+0xb1/0xf0
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff81529643>] io_schedule+0x73/0xc0
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff811ce7dd>] __blockdev_direct_IO_newtrunc+0xb7d/0x1270
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff811ca140>] ? blkdev_get_block+0x0/0x20
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff8127a10d>] ? get_disk+0x7d/0xf0
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff811cef47>] __blockdev_direct_IO+0x77/0xe0
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff811ca140>] ? blkdev_get_block+0x0/0x20
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff811cb1c7>] blkdev_direct_IO+0x57/0x60
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff811ca140>] ? blkdev_get_block+0x0/0x20
<4>Jan 23 10:15:46 node-10 kernel: [<ffffffff8112616b>] generic_file_aio_read+0x6...
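
The trace shows lvremove in uninterruptible sleep (state D), blocked in direct I/O against the device-mapper device, which is why the removal hangs rather than failing outright. On a node in this state the blocked processes can be located like so (a sketch; PID 20053 is taken from the trace above):

# List processes stuck in uninterruptible sleep and the kernel function they wait in
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

# Kernel stack of the hung lvremove from the trace above
cat /proc/20053/stack

# Dump stacks of all blocked tasks to the kernel log (produces traces like the one quoted)
echo w > /proc/sysrq-trigger
dmesg | tail -n 100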


Timur Nurlygayanov (tnurlygayanov) wrote:

Looks like the issue was successfully fixed; status changed to Fix Released.
