The criticality of this bug has decreased a little, since it is no longer ranked #1.
It is, however, still on the podium, sitting at #3.
It is perhaps a good idea to have a detailed look at the failures to understand whether this bug is still actually critical, and what we should do about it.
[1] is a score-by-branch breakdown of the failures over the past 24 hours.
About 20% of the failures occur on stable/havana. Considering that jobs targeting stable/havana probably account for less than 20% of overall job runs, this suggests we have improved resiliency to this bug on icehouse. However, the stable/havana failures appear to be all neutron-related, and the likely reason is that the neutron improvements have not been backported. Frankly, I'm not sure they could be considered backportable at all, as some of the patches are quite large.
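To make the over-representation argument concrete, here is a minimal sketch of the comparison; the job-volume figure is an illustrative assumption, not a measured number:

    # The failure share comes from the score-by-branch query in [1];
    # the job share is a placeholder assumption (< 20%).
    havana_failure_share = 0.20
    havana_job_share = 0.15

    # Share of failures divided by share of jobs: a ratio above 1 means
    # the branch fails more often than its job volume alone would predict.
    havana_ratio = havana_failure_share / havana_job_share              # ~1.33
    master_ratio = (1 - havana_failure_share) / (1 - havana_job_share)  # ~0.94

    assert havana_ratio > master_ratio  # stable/havana is over-represented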
[2] is a score-by-job breakdown for the master branch over the past 24 hours.
check-tempest-dsvm-heat-neutron-slow, currently non-voting, accounts for over 50% of the failures. Without this job, there would have been "only" 25 failures on the master branch in the past 24 hours. I have not yet submitted a tempest bug for this job, but the problem appears to be that it is trying to SSH into an instance over a private network, which I don't think can work with neutron.
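For context on why SSH over the private network fails: with neutron the test node presumably has no route to tenant networks, so the test would need to go through a floating IP instead. A minimal sketch with python-neutronclient (the credentials and IDs below are placeholders, not values from the job):

    from neutronclient.v2_0 import client

    # Placeholder credentials; a real job would read these from its config.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')

    ext_net_id = '...'  # external network ID (deployment-specific)
    port_id = '...'     # ID of the instance's port on the tenant network

    # Allocate a floating IP, bind it to the instance's port, and SSH to
    # the floating address instead of the private one.
    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': ext_net_id,
                        'port_id': port_id}})
    ssh_target = fip['floatingip']['floating_ip_address']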
There are 16 failures with nova-network enabled (a little less than 30% of master failures). However, 7 of them occur on grenade jobs. I have not yet looked into them; I hope someone from the nova team can help on this matter.
There are also 9 neutron failures. 6 of them occurred in the same job, and the root cause is that the metadata service did not start. Currently devstack does not return an error when some neutron agents fail to start; bug 128182 has been filed to address this.
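For illustration, the missing check could look something like the sketch below; devstack itself is shell, so this Python is only a rough equivalent of the behaviour the bug asks for:

    import subprocess
    import time

    def agent_is_running(name):
        # pgrep -f returns 0 when a process matching `name` exists.
        return subprocess.call(['pgrep', '-f', name]) == 0

    # After launching the agents, verify they are still alive instead of
    # silently continuing; exit non-zero so the job fails fast.
    time.sleep(5)  # give an agent a moment to crash on startup errors
    for agent in ('neutron-metadata-agent', 'neutron-dhcp-agent'):
        if not agent_is_running(agent):
            raise SystemExit('%s failed to start' % agent)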
For the remaining 3 neutron failures, 2 of them are caused by a regex parsing error. This problem is being tracked by bug 1280827, for which there is a patch under review.
The third failure is being investigated.
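I have not dug into the details of bug 1280827, but one common shape of this failure class is using a match object without checking it first (a hypothetical example, not the actual tempest code):

    import re

    output = 'unexpected console output'
    match = re.search(r'(\d+) packets received', output)
    # match is None here, so match.group(1) would raise AttributeError;
    # parsing code needs to handle the no-match case explicitly.
    if match is None:
        raise ValueError('could not parse output: %r' % output)
    received = int(match.group(1))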
[1] http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiU1NIVGltZW91dDogQ29ubmVjdGlvbiB0byB0aGVcIiBBTkQgbWVzc2FnZTpcInZpYSBTU0ggdGltZWQgb3V0LlwiIEFORCBmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIiwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTI2NTQ5Mjk4NzIsIm1vZGUiOiJzY29yZSIsImFuYWx5emVfZmllbGQiOiJidWlsZF9icmFuY2gifQ==
[2] http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiU1NIVGltZW91dDogQ29ubmVjdGlvbiB0byB0aGVcIiBBTkQgbWVzc2FnZTpcInZpYSBTU0ggdGltZWQgb3V0LlwiIEFORCBmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJvZmZzZXQiOjAsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkyNjU0OTI5ODcyLCJtb2RlIjoic2NvcmUiLCJhbmFseXplX2ZpZWxkIjoiYnVpbGRfbmFtZSJ9
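For readers without a browser handy: the state encoded in those URLs is just base64-encoded JSON, so it can be inspected with a couple of lines of Python:

    import base64
    import json
    import sys

    # Usage: python decode_query.py '<full logstash URL, e.g. [1] or [2]>'
    fragment = sys.argv[1].split('#', 1)[1]
    print(json.dumps(json.loads(base64.b64decode(fragment)), indent=2))

Both queries search for message:"SSHTimeout: Connection to the" AND message:"via SSH timed out." AND filename:"console.html" over the past 86400 seconds; [1] scores the results by build_branch, while [2] restricts to build_branch:"master" and scores by build_name.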