VolumeNotCreated - Instance failed, cinder too slow with
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Cinder | Won't Fix | Undecided | Unassigned | |
| OpenStack Compute (nova) | Won't Fix | Undecided | Unassigned | |
Bug Description
Hi,
I've found that under certain circumstances cinder does not create volumes fast enough.
I launch an instance from a new 4GB volume created from an image. I use LVM to allocate space. After a while I found that the instance didn't work.
Looking at logs I can find:
2014-01-23 21:44:15.337 2398 TRACE nova.compute. …
2014-01-23 21:44:15.337 2398 TRACE nova.compute. … Volume …5-c83f440a988b did not finish being created even after we waited 66 seconds or 60 attempts.
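For context, this error comes from a bounded poll: nova repeatedly asks cinder for the volume's status and gives up after a fixed number of attempts. A minimal sketch of that pattern (names and timings are illustrative, not nova's actual code):

```python
import time

def wait_for_volume(get_status, max_attempts=60, interval=1.0, sleep=time.sleep):
    """Poll get_status() until it returns 'available', or give up.

    Illustrative sketch of the bounded retry loop behind the
    'did not finish being created even after we waited N seconds
    or N attempts' error above; not nova's actual implementation.
    """
    for attempt in range(1, max_attempts + 1):
        if get_status() == "available":
            return attempt  # number of polls it took
        sleep(interval)
    raise TimeoutError(
        f"volume did not finish being created after {max_attempts} attempts")

# Example: a volume that only becomes available on the 5th check.
statuses = iter(["creating"] * 4 + ["available"])
attempts = wait_for_volume(lambda: next(statuses), sleep=lambda s: None)
```

If cinder is still "downloading" when the last attempt expires, the boot fails even though the volume eventually finishes, which matches the behaviour described below.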
I was looking around and cinder was "downloading". I think it was taking the image from the image server and building the volume. I don't know why it took so long, since the installation is on gigabit Ethernet and, even more, the image is served from an instance running on the same hardware machine as cinder. So it doesn't even use any networking; everything resolves internally.
The image is saucy (about 300MB).
The problem is that after a while the volume creation finished, but the instance had already failed. So I recreated the instance from the volume and it worked with no problems.
How should I track where the processing slows down?
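One knob worth checking is how long nova waits for cinder before giving up; the "60 attempts" in the traceback matches the default retry count. The option names below come from later nova releases' documentation, so treat them as an assumption for this deployment:

```
# nova.conf — assumed option names (added in later nova releases); raises how
# long nova waits for cinder to finish building the volume before failing boot.
[DEFAULT]
block_device_allocate_retries = 120
block_device_allocate_retries_interval = 3
```

Raising the retry count only papers over the slowness, but it can confirm whether the instance boots fine once cinder is given enough time.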
I know that iSCSI attachment is slow. One possible point of failure is having an iSCSI target on a machine that's not reachable; that slows down the rest of the processing, but I'm not sure that's the case here.
Anyway, I'm sure the hardware is not the best, but it's pretty decent: a RAID1 array with WD Black drives, a good SATA controller and Intel gigabit network cards. So the disk should not be the problem. I'm thinking it's a networking/configuration-related problem.
But I'm lost on this.
Any help is appreciated.
tags: added: volumes
Changed in nova: status: New → Confirmed
Changed in nova: status: Confirmed → Invalid
Changed in cinder: status: New → Confirmed
tags: removed: ceph
These are the target stats:
iscsiadm -m session -r 4 --stats
Stats for session [sid: 4, target: iqn.2010-10.org.openstack:volume-137bc77b-c9e6-47ba-b2f5-c83f440a988b, portal: 172.16.0.119,3260]
iSCSI SNMP:
txdata_octets: 1137908
rxdata_octets: 26062596
noptx_pdus: 0
scsicmd_pdus: 2601
tmfcmd_pdus: 0
login_pdus: 0
text_pdus: 0
dataout_pdus: 56
logout_pdus: 0
snack_pdus: 0
noprx_pdus: 0
scsirsp_pdus: 2601
tmfrsp_pdus: 0
textrsp_pdus: 0
datain_pdus: 2381
logoutrsp_pdus: 0
r2t_pdus: 27
async_pdus: 0
rjt_pdus: 0
digest_err: 0
timeout_err: 0
iSCSI Extended:
tx_sendpage_failures: 0
rx_discontiguous_hdr: 0
eh_abort_cnt: 0
ping 172.16.0.119
PING 172.16.0.119 (172.16.0.119) 56(84) bytes of data.
64 bytes from 172.16.0.119: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.0.119: icmp_seq=2 ttl=64 time=0.033 ms