VM launch/deletion fails in ocata-master-212

Bug #1784988 reported by vimal
This bug affects 1 person
Affects:     Juniper Openstack (status tracked in Trunk)
Status:      New (Trunk)
Importance:  Medium
Assigned to: Abhay Joshi

Bug Description

VM launch/deletion fails on ocata-master-212. The issue is seen only on build master-212 (it passes on other builds, e.g. master-214).

Errors seen
-----------

[root@nodec7 ~]# grep -r "ERROR" /var/lib/docker/volumes/kolla_logs/_data/nova/nova-scheduler.log | more
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters [-] DB exception wrapped.
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 316, in connection
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters return self._revalidate_connection()
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 391, in _revalidate_connection
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters "Can't reconnect until invalid "
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters InvalidRequestError: Can't reconnect until invalid transaction is rolled back
2018-08-01 14:23:53.148 7 ERROR oslo_db.sqlalchemy.exc_filters
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 87, in _report_state
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db service.service_ref.save()
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db return fn(self, *args, **kwargs)
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 317, in save
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db db_service = db.service_update(self._context, self.id, updates)
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 183, in service_update
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db return IMPL.service_update(context, service_id, values)
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db ectxt.value = e.inner_exc
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db self.force_reraise()
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-08-01 14:23:53.150 7 ERROR nova.servicegroup.drivers.db six.reraise(self.type_, self.value, self.tb)
[root@nodec7 ~]#
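
The traceback above is generic SQLAlchemy 1.x behaviour: when a pooled connection is invalidated (for example after the MySQL session is dropped) while a transaction is still open, any further use of that connection raises InvalidRequestError until the stale transaction is rolled back. In this log it is hit by the servicegroup driver's _report_state call while updating the service record. A minimal sketch of the mechanism, assuming SQLAlchemy 1.x and an in-memory SQLite database as a stand-in for the Nova MySQL backend (illustration only, not Nova code):

# Sketch only: reproduces the "Can't reconnect until invalid transaction
# is rolled back" error, assuming SQLAlchemy 1.x and SQLite in memory.
from sqlalchemy import create_engine, text
from sqlalchemy.exc import InvalidRequestError

engine = create_engine("sqlite:///:memory:")
conn = engine.connect()
trans = conn.begin()                 # keep a transaction open, as oslo_db sessions do
conn.invalidate()                    # simulate the DB connection being dropped

try:
    conn.execute(text("SELECT 1"))   # reusing the connection without a rollback
except InvalidRequestError as exc:
    print(exc)                       # "Can't reconnect until invalid transaction is rolled back"

trans.rollback()                     # rolling back clears the invalid transaction
conn.execute(text("SELECT 1"))       # the connection can now reconnect transparently
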
[root@nodec7 ~]# grep -r "ERROR" /var/lib/docker/volumes/kolla_logs/_data/nova/nova-conductor.log | more
2018-08-01 14:13:06.519 22 ERROR oslo.messaging._drivers.impl_rabbit [req-d9c1cb5c-cf74-4126-8b0a-d8f65f51dc4a - - - - -] [8cfd3b15-273c-4f4f-956e-f088cf287682] AMQP server on 10.204.216.64:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds. Client port: None
2018-08-01 14:13:07.527 22 ERROR oslo.messaging._drivers.impl_rabbit [req-d9c1cb5c-cf74-4126-8b0a-d8f65f51dc4a - - - - -] [8cfd3b15-273c-4f4f-956e-f088cf287682] AMQP server on 10.204.216.64:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds. Client port: None
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters [req-c211b413-8fd1-45bf-850d-d8448ec4bc9c - - - - -] DB exception wrapped.
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 316, in connection
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters return self._revalidate_connection()
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 391, in _revalidate_connection
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters "Can't reconnect until invalid "
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters InvalidRequestError: Can't reconnect until invalid transaction is rolled back
2018-08-01 14:23:53.372 20 ERROR oslo_db.sqlalchemy.exc_filters
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters [-] DB exception wrapped.
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 316, in connection
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters return self._revalidate_connection()
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 391, in _revalidate_connection
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters "Can't reconnect until invalid "
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters InvalidRequestError: Can't reconnect until invalid transaction is rolled back
2018-08-01 14:23:55.836 21 ERROR oslo_db.sqlalchemy.exc_filters
2018-08-01 14:23:55.837 21 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status
2018-08-01 14:23:55.837 21 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
2018-08-01 14:23:55.837 21 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 87, in _report_state
2018-08-01 14:23:55.837 21 ERROR nova.servicegroup.drivers.db service.service_ref.save()
2018-08-01 14:23:55.837 21 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobje
[root@nodec7 ~]#
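
The conductor log also shows earlier oslo.messaging retries being refused by the AMQP broker at 10.204.216.64:5672 (ECONNREFUSED), which normally means nothing was listening on that port at 14:13. A quick TCP probe of that endpoint, purely illustrative and kept Python 2/3 compatible to match the environment above (host and port taken from the log line; adjust for your deployment):

# Illustrative reachability check for the rabbitmq endpoint reported as
# unreachable in the conductor log above.
import socket

HOST, PORT = "10.204.216.64", 5672

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)
try:
    sock.connect((HOST, PORT))
    print("AMQP port %s:%d is reachable" % (HOST, PORT))
except (socket.error, socket.timeout) as exc:   # ECONNREFUSED lands here
    print("AMQP port %s:%d is unreachable: %s" % (HOST, PORT, exc))
finally:
    sock.close()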

[root@nodec7 ~]# contrail-status
Pod               Service         Original Name                          State    Status
analytics         alarm-gen       contrail-analytics-alarm-gen           running  Up 19 hours
analytics         api             contrail-analytics-api                 running  Up 19 hours
analytics         collector       contrail-analytics-collector           running  Up 19 hours
analytics         nodemgr         contrail-nodemgr                       running  Up 19 hours
analytics         query-engine    contrail-analytics-query-engine        running  Up 19 hours
analytics         snmp-collector  contrail-analytics-snmp-collector      running  Up 19 hours
analytics         topology        contrail-analytics-topology            running  Up 19 hours
config            api             contrail-controller-config-api         running  Up 19 hours
config            device-manager  contrail-controller-config-devicemgr   running  Up 19 hours
config            nodemgr         contrail-nodemgr                       running  Up 19 hours
config            schema          contrail-controller-config-schema      running  Up 19 hours
config            svc-monitor     contrail-controller-config-svcmonitor  running  Up 19 hours
config-database   cassandra       contrail-external-cassandra            running  Up 19 hours
config-database   nodemgr         contrail-nodemgr                       running  Up 19 hours
config-database   rabbitmq        contrail-external-rabbitmq             running  Up 19 hours
config-database   zookeeper       contrail-external-zookeeper            running  Up 19 hours
control           control         contrail-controller-control-control    running  Up 14 hours
control           dns             contrail-controller-control-dns        running  Up 19 hours
control           named           contrail-controller-control-named      running  Up 19 hours
control           nodemgr         contrail-nodemgr                       running  Up 19 hours
database          cassandra       contrail-external-cassandra            running  Up 19 hours
database          kafka           contrail-external-kafka                running  Up 19 hours
database          nodemgr         contrail-nodemgr                       running  Up 19 hours
database          zookeeper       contrail-external-zookeeper            running  Up 19 hours
webui             job             contrail-controller-webui-job          running  Up 19 hours
webui             web             contrail-controller-webui-web          running  Up 19 hours

== Contrail control ==
control: active
nodemgr: active
named: active
dns: active

== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active

== Contrail database ==
kafka: active
nodemgr: active
zookeeper: active
cassandra: active

== Contrail analytics ==
snmp-collector: active
query-engine: active
api: active
alarm-gen: active
nodemgr: active
collector: active
topology: active

== Contrail webui ==
web: active
job: active

== Contrail config ==
svc-monitor: backup
nodemgr: active
device-manager: backup
api: active
schema: backup

Logs
----------------

/cs-shared/bugs/1784988
[vappachan@nodem3 1784988]$ ls
nova-conductor.log nova-novncproxy.log nova-placement-api.log nova-scheduler.log

Tags: sanity