OpenStack HA: provisioning of the HA cluster fails during ceilometer provisioning

Bug #1452392 reported by venu kolli
Affects: Juniper Openstack (status tracked in Trunk)

  Milestone  Status         Importance  Assigned to
  R2.20      Fix Committed  Medium      Megh Bhatt
  Trunk      Fix Committed  Medium      Megh Bhatt

Bug Description

OpenStack HA: provisioning of the HA cluster fails during ceilometer provisioning.

This issue is observed on R2.20 build 9 with both Juno and Icehouse. Provisioning aborts when the keystone user-create for the ceilometer user fails with "Authorization Failed: Service Unavailable (HTTP 503)".

The relevant provisioning logs follow:

2015-05-05 19:09:34:302753: [root@10.84.13.32] out: + service nova-scheduler restart
2015-05-05 19:09:34:302825: [root@10.84.13.32] out: nova-scheduler: ERROR (not running)
2015-05-05 19:09:34:403260: [root@10.84.13.32] out: nova-scheduler: started
2015-05-05 19:09:36:169413: [root@10.84.13.32] out: + for svc in nova-api nova-objectstore nova-scheduler nova-cert nova-console nova-consoleauth nova-novncproxy nova-conductor
2015-05-05 19:09:36:169592: [root@10.84.13.32] out: + service nova-cert restart
2015-05-05 19:09:36:169694: [root@10.84.13.32] out: nova-cert: unrecognized service
2015-05-05 19:09:36:169787: [root@10.84.13.32] out: + for svc in nova-api nova-objectstore nova-scheduler nova-cert nova-console nova-consoleauth nova-novncproxy nova-conductor
2015-05-05 19:09:36:169878: [root@10.84.13.32] out: + service nova-console restart
2015-05-05 19:09:36:169969: [root@10.84.13.32] out: nova-console: ERROR (not running)
2015-05-05 19:09:36:270709: [root@10.84.13.32] out: nova-console: started
2015-05-05 19:09:38:037008: [root@10.84.13.32] out: + for svc in nova-api nova-objectstore nova-scheduler nova-cert nova-console nova-consoleauth nova-novncproxy nova-conductor
2015-05-05 19:09:38:037400: [root@10.84.13.32] out: + service nova-consoleauth restart
2015-05-05 19:09:38:037645: [root@10.84.13.32] out: nova-consoleauth: ERROR (not running)
2015-05-05 19:09:38:138265: [root@10.84.13.32] out: nova-consoleauth: started
2015-05-05 19:09:39:340111: [root@10.84.13.32] out: + for svc in nova-api nova-objectstore nova-scheduler nova-cert nova-console nova-consoleauth nova-novncproxy nova-conductor
2015-05-05 19:09:39:340222: [root@10.84.13.32] out: + service nova-novncproxy restart
2015-05-05 19:09:39:340268: [root@10.84.13.32] out: nova-novncproxy: ERROR (not running)
2015-05-05 19:09:39:403899: [root@10.84.13.32] out: nova-novncproxy: started
2015-05-05 19:09:40:405339: [root@10.84.13.32] out: + for svc in nova-api nova-objectstore nova-scheduler nova-cert nova-console nova-consoleauth nova-novncproxy nova-conductor
2015-05-05 19:09:40:405527: [root@10.84.13.32] out: + service nova-conductor restart
2015-05-05 19:09:40:405625: [root@10.84.13.32] out: nova-conductor: ERROR (not running)
2015-05-05 19:09:40:506169: [root@10.84.13.32] out: nova-conductor: started
2015-05-05 19:09:42:440770: [root@10.84.13.32] out: [localhost] local: dpkg -l | grep contrail-heat
2015-05-05 19:09:42:448712: [root@10.84.13.32] out: ii contrail-heat 2.20-9 all OpenStack virtual network service - Opencontrail
2015-05-05 19:09:42:512494: [root@10.84.13.32] out: [localhost] local: sudo heat-server-setup.sh
2015-05-05 19:09:42:512649: [root@10.84.13.32] out: + '[' -f /etc/redhat-release ']'
2015-05-05 19:09:42:528274: [root@10.84.13.32] out: + '[' -f /etc/lsb-release ']'
2015-05-05 19:09:42:528420: [root@10.84.13.32] out: + egrep -q 'DISTRIB_ID.*Ubuntu' /etc/lsb-release
2015-05-05 19:09:42:528498: [root@10.84.13.32] out: + is_ubuntu=1
2015-05-05 19:09:42:529921: [root@10.84.13.32] out: + is_redhat=0
2015-05-05 19:09:42:530056: [root@10.84.13.32] out: + web_svc=apache2
2015-05-05 19:09:42:530134: [root@10.84.13.32] out: + mysql_svc=mysql
2015-05-05 19:09:42:530211: [root@10.84.13.32] out: + chkconfig mysql
2015-05-05 19:09:42:530283: [root@10.84.13.32] out: + ret=0
2015-05-05 19:09:42:534061: [root@10.84.13.32] out: + '[' 0 -ne 0 ']'
2015-05-05 19:09:42:534189: [root@10.84.13.32] out: ++ service mysql status
2015-05-05 19:09:42:535549: [root@10.84.13.32] out: + mysql_status=' * MySQL is running (PID: 1243)'
2015-05-05 19:09:42:667294: [root@10.84.13.32] out: + [[ * MySQL is running (PID: 1243) != *running* ]]
2015-05-05 19:09:42:667435: [root@10.84.13.32] out: + '[' '!' -f /etc/contrail/mysql.token ']'
2015-05-05 19:09:42:667514: [root@10.84.13.32] out: ++ cat /etc/contrail/mysql.token
2015-05-05 19:09:42:667615: [root@10.84.13.32] out: + MYSQL_TOKEN=f2488d62c291dcbb4991
2015-05-05 19:09:42:667688: [root@10.84.13.32] out: + source /etc/contrail/ctrl-details
2015-05-05 19:09:42:667760: [root@10.84.13.32] out: ++ SERVICE_TOKEN=e60399dfd4f4968de0fc
2015-05-05 19:09:42:667833: [root@10.84.13.32] out: ++ AUTH_PROTOCOL=http
2015-05-05 19:09:42:667905: [root@10.84.13.32] out: ++ QUANTUM_PROTOCOL=http
2015-05-05 19:09:42:667977: [root@10.84.13.32] out: ++ ADMIN_TOKEN=contrail123
2015-05-05 19:09:42:668045: [root@10.84.13.32] out: ++ CONTROLLER=192.168.10.1
2015-05-05 19:09:42:668112: [root@10.84.13.32] out: ++ API_SERVER=192.168.10.1
2015-05-05 19:09:42:668180: [root@10.84.13.32] out: ++ OSAPI_COMPUTE_WORKERS=40
2015-05-05 19:09:42:668247: [root@10.84.13.32] out: ++ CONDUCTOR_WORKERS=40
2015-05-05 19:09:42:668313: [root@10.84.13.32] out: ++ SELF_MGMT_IP=10.84.13.32
2015-05-05 19:09:42:668378: [root@10.84.13.32] out: ++ MEMCACHED_SERVERS=10.84.13.32:11211,10.84.13.33:11211,10.84.13.38:11211
2015-05-05 19:09:42:668444: [root@10.84.13.32] out: ++ AMQP_SERVER=192.168.10.210
2015-05-05 19:09:42:668511: [root@10.84.13.32] out: ++ QUANTUM=192.168.10.1
2015-05-05 19:09:42:668613: [root@10.84.13.32] out: ++ QUANTUM_PORT=9696
2015-05-05 19:09:42:668681: [root@10.84.13.32] out: ++ OPENSTACK_INDEX=1
2015-05-05 19:09:42:668743: [root@10.84.13.32] out: ++ INTERNAL_VIP=192.168.10.210
2015-05-05 19:09:42:668837: [root@10.84.13.32] out: ++ CONTRAIL_INTERNAL_VIP=192.168.10.210
2015-05-05 19:09:42:669023: [root@10.84.13.32] out: + ADMIN_USER=admin
2015-05-05 19:09:42:669093: [root@10.84.13.32] out: + ADMIN_TOKEN=contrail123
2015-05-05 19:09:42:669158: [root@10.84.13.32] out: + ADMIN_TENANT=admin
2015-05-05 19:09:42:669225: [root@10.84.13.32] out: + SERVICE_TOKEN=e60399dfd4f4968de0fc
2015-05-05 19:09:42:669292: [root@10.84.13.32] out: + OPENSTACK_INDEX=1
2015-05-05 19:09:42:669361: [root@10.84.13.32] out: + INTERNAL_VIP=192.168.10.210
2015-05-05 19:09:42:669430: [root@10.84.13.32] out: + CONTRAIL_INTERNAL_VIP=192.168.10.210
2015-05-05 19:09:42:669500: [root@10.84.13.32] out: + AMQP_PORT=5672
2015-05-05 19:09:42:669570: [root@10.84.13.32] out: + '[' 192.168.10.210 == 192.168.10.210 ']'
2015-05-05 19:09:42:669640: [root@10.84.13.32] out: + AMQP_PORT=5673
2015-05-05 19:09:42:669708: [root@10.84.13.32] out: + controller_ip=192.168.10.1
2015-05-05 19:09:42:669772: [root@10.84.13.32] out: + '[' 192.168.10.210 '!=' none ']'
2015-05-05 19:09:42:670070: [root@10.84.13.32] out: + controller_ip=192.168.10.210
2015-05-05 19:09:42:670156: [root@10.84.13.32] out: + cat
2015-05-05 19:09:42:670224: [root@10.84.13.32] out: + for APP in heat
2015-05-05 19:09:42:670291: [root@10.84.13.32] out: + '[' 1 -eq 1 ']'
2015-05-05 19:09:42:670358: [root@10.84.13.32] out: + openstack-db -y --init --service heat --rootpw f2488d62c291dcbb4991
2015-05-05 19:09:42:670426: [root@10.84.13.32] out: + systemctl --version
2015-05-05 19:09:42:670497: [root@10.84.13.32] out: + '[' -f /etc/redhat-release ']'
2015-05-05 19:09:42:670568: [root@10.84.13.32] out: + '[' -f /etc/lsb-release ']'
2015-05-05 19:09:42:670639: [root@10.84.13.32] out: + egrep -q 'DISTRIB_ID.*Ubuntu' /etc/lsb-release
2015-05-05 19:09:42:670704: [root@10.84.13.32] out: + is_ubuntu=1
2015-05-05 19:09:42:670773: [root@10.84.13.32] out: + is_redhat=0
2015-05-05 19:09:42:670851: [root@10.84.13.32] out: + web_svc=apache2
2015-05-05 19:09:42:670921: [root@10.84.13.32] out: + mysql_svc=mysql
2015-05-05 19:09:42:670990: [root@10.84.13.32] out: + mysql_pkg_chk='expr 1'
2015-05-05 19:09:42:671059: [root@10.84.13.32] out: + dbmanage_cmd='"$APP-manage $db_sync"'
2015-05-05 19:09:42:671130: [root@10.84.13.32] out: + '[' 6 -gt 0 ']'
2015-05-05 19:09:42:671199: [root@10.84.13.32] out: + case "$1" in
2015-05-05 19:09:42:671269: [root@10.84.13.32] out: + ASSUME_YES=yes
2015-05-05 19:09:42:671340: [root@10.84.13.32] out: + shift
2015-05-05 19:09:42:671409: [root@10.84.13.32] out: + '[' 5 -gt 0 ']'
2015-05-05 19:09:42:671475: [root@10.84.13.32] out: + case "$1" in
2015-05-05 19:09:42:671542: [root@10.84.13.32] out: + MODE=init
2015-05-05 19:09:42:671624: [root@10.84.13.32] out: + shift
2015-05-05 19:09:42:671693: [root@10.84.13.32] out: + '[' 4 -gt 0 ']'
2015-05-05 19:09:42:671759: [root@10.84.13.32] out: + case "$1" in
2015-05-05 19:09:42:671824: [root@10.84.13.32] out: + shift
2015-05-05 19:09:42:671891: [root@10.84.13.32] out: + APP=heat
2015-05-05 19:09:42:671957: [root@10.84.13.32] out: + shift
2015-05-05 19:09:42:672026: [root@10.84.13.32] out: + '[' 2 -gt 0 ']'
2015-05-05 19:09:42:672089: [root@10.84.13.32] out: + case "$1" in
2015-05-05 19:09:42:672154: [root@10.84.13.32] out: + shift
2015-05-05 19:09:42:672222: [root@10.84.13.32] out: + MYSQL_ROOT_PW=f2488d62c291dcbb4991
2015-05-05 19:09:42:672289: [root@10.84.13.32] out: + shift
2015-05-05 19:09:42:672357: [root@10.84.13.32] out: + '[' 0 -gt 0 ']'
2015-05-05 19:09:42:672427: [root@10.84.13.32] out: + '[' '!' init ']'
2015-05-05 19:09:42:672499: [root@10.84.13.32] out: + '[' '!' heat ']'
2015-05-05 19:09:42:672583: [root@10.84.13.32] out: + case "$APP" in
2015-05-05 19:09:42:672655: [root@10.84.13.32] out: + MYSQL_APP_PW_DEFAULT=heat
2015-05-05 19:09:42:672728: [root@10.84.13.32] out: + : heat
2015-05-05 19:09:42:672799: [root@10.84.13.32] out: + '[' heat = glance ']'
2015-05-05 19:09:42:672870: [root@10.84.13.32] out: + APP_CONFIG=/etc/heat/heat.conf
2015-05-05 19:09:42:672941: [root@10.84.13.32] out: + NEW_MYSQL_INSTALL=0
2015-05-05 19:09:42:673003: [root@10.84.13.32] out: + expr 1
2015-05-05 19:09:42:673066: [root@10.84.13.32] out: 1
2015-05-05 19:09:42:677666: [root@10.84.13.32] out: + service_running mysql
2015-05-05 19:09:42:680818: [root@10.84.13.32] out: + test ''
2015-05-05 19:09:42:680975: [root@10.84.13.32] out: + service mysql status
2015-05-05 19:09:42:681137: [root@10.84.13.32] out: + '[' 0 -eq 1 ']'
2015-05-05 19:09:42:781957: [root@10.84.13.32] out: + '[' '!' defined ']'
2015-05-05 19:09:42:782101: [root@10.84.13.32] out: + MYSQL_ROOT_PW_ARG=
2015-05-05 19:09:42:782176: [root@10.84.13.32] out: + '[' defined ']'
2015-05-05 19:09:42:782246: [root@10.84.13.32] out: + MYSQL_ROOT_PW_ARG=--password=f2488d62c291dcbb4991
2015-05-05 19:09:42:782317: [root@10.84.13.32] out: + echo 'SELECT 1;'
2015-05-05 19:09:42:782388: [root@10.84.13.32] out: + mysql -u root --password=f2488d62c291dcbb4991
2015-05-05 19:09:42:782459: [root@10.84.13.32] out: + echo 'Verified connectivity to MySQL.'
2015-05-05 19:09:42:782528: [root@10.84.13.32] out: Verified connectivity to MySQL.
2015-05-05 19:09:42:782598: [root@10.84.13.32] out: + '[' init = init ']'
2015-05-05 19:09:42:782667: [root@10.84.13.32] out: ++ echo 'SELECT COUNT(*) FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME='\''heat'\'';'
2015-05-05 19:09:42:782736: [root@10.84.13.32] out: ++ mysql -u root --password=f2488d62c291dcbb4991
2015-05-05 19:09:42:782807: [root@10.84.13.32] out: ++ tail -n+2
2015-05-05 19:09:42:782877: [root@10.84.13.32] out: + dbs=1
2015-05-05 19:09:42:799051: [root@10.84.13.32] out: + '[' 1 '!=' 0 ']'
2015-05-05 19:09:42:799191: [root@10.84.13.32] out: + echo 'Database '\''heat'\'' already exists. Please consider first running:'
2015-05-05 19:09:42:799268: [root@10.84.13.32] out: Database 'heat' already exists. Please consider first running:
2015-05-05 19:09:42:799345: [root@10.84.13.32] out: + echo '/usr/bin/openstack-db --drop --service heat'
2015-05-05 19:09:42:799418: [root@10.84.13.32] out: /usr/bin/openstack-db --drop --service heat
2015-05-05 19:09:42:799491: [root@10.84.13.32] out: + exit 1
2015-05-05 19:09:42:799561: [root@10.84.13.32] out: + heat-manage db_sync
2015-05-05 19:09:42:799632: [root@10.84.13.32] out: No handlers could be found for logger "heat.common.config"
2015-05-05 19:09:43:164257: [root@10.84.13.32] out: + export ADMIN_USER
2015-05-05 19:09:43:364783: [root@10.84.13.32] out: + export ADMIN_TOKEN
2015-05-05 19:09:43:364925: [root@10.84.13.32] out: + export ADMIN_TENANT
2015-05-05 19:09:43:365000: [root@10.84.13.32] out: + export SERVICE_TOKEN
2015-05-05 19:09:43:365072: [root@10.84.13.32] out: + for svc in heat
2015-05-05 19:09:43:365144: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf DEFAULT sql_connection mysql://heat:heat@127.0.0.1/heat
2015-05-05 19:09:43:365233: [root@10.84.13.32] out: + '[' 192.168.10.210 '!=' none ']'
2015-05-05 19:09:43:397090: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf DEFAULT sql_connection mysql://heat:heat@192.168.10.210:33306/heat
2015-05-05 19:09:43:397225: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf heat_api bind_port 8005
2015-05-05 19:09:43:461007: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf DEFAULT rpc_backend heat.openstack.common.rpc.impl_kombu
2015-05-05 19:09:43:524789: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf DEFAULT rabbit_host 192.168.10.210
2015-05-05 19:09:43:556503: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf DEFAULT rabbit_port 5673
2015-05-05 19:09:43:620362: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf DEFAULT plugin_dirs /usr/lib/heat/resources
2015-05-05 19:09:43:684144: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://192.168.10.210:5000/v2.0
2015-05-05 19:09:43:715827: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_host 192.168.10.210
2015-05-05 19:09:43:779633: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_port 35357
2015-05-05 19:09:43:843416: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_protocol http
2015-05-05 19:09:43:875163: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_tenant_name service
2015-05-05 19:09:43:938936: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_user heat
2015-05-05 19:09:43:970664: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_password contrail123
2015-05-05 19:09:44:034429: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf clients_contrail user admin
2015-05-05 19:09:44:066500: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf clients_contrail password contrail123
2015-05-05 19:09:44:130340: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf clients_contrail tenant admin
2015-05-05 19:09:44:194125: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf clients_contrail api_server 192.168.10.1
2015-05-05 19:09:44:225822: [root@10.84.13.32] out: + '[' 192.168.10.210 '!=' none ']'
2015-05-05 19:09:44:289824: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf clients_contrail api_server 192.168.10.210
2015-05-05 19:09:44:289970: [root@10.84.13.32] out: + openstack-config --set /etc/heat/heat.conf clients_contrail api_base_url /
2015-05-05 19:09:44:353809: [root@10.84.13.32] out: + echo '======= Enabling the services ======'
2015-05-05 19:09:44:385513: [root@10.84.13.32] out: ======= Enabling the services ======
2015-05-05 19:09:44:385649: [root@10.84.13.32] out: + for svc in supervisor-openstack
2015-05-05 19:09:44:385726: [root@10.84.13.32] out: + chkconfig supervisor-openstack on
2015-05-05 19:09:44:385798: [root@10.84.13.32] out: ++ service supervisor-openstack status
2015-05-05 19:09:44:385868: [root@10.84.13.32] out: ++ grep -s -i running
2015-05-05 19:09:44:385937: [root@10.84.13.32] out: ++ echo running
2015-05-05 19:09:44:418042: [root@10.84.13.32] out: + status=running
2015-05-05 19:09:44:418190: [root@10.84.13.32] out: + '[' running == stopped ']'
2015-05-05 19:09:44:418263: [root@10.84.13.32] out: + echo '======= Starting the services ======'
2015-05-05 19:09:44:418334: [root@10.84.13.32] out: ======= Starting the services ======
2015-05-05 19:09:44:418404: [root@10.84.13.32] out: + for svc in heat-api heat-engine
2015-05-05 19:09:44:418475: [root@10.84.13.32] out: + service heat-api restart
2015-05-05 19:09:44:418544: [root@10.84.13.32] out: heat-api: ERROR (not running)
2015-05-05 19:09:44:550714: [root@10.84.13.32] out: heat-api: started
2015-05-05 19:09:45:567559: [root@10.84.13.32] out: + for svc in heat-api heat-engine
2015-05-05 19:09:45:567694: [root@10.84.13.32] out: + service heat-engine restart
2015-05-05 19:09:45:567767: [root@10.84.13.32] out: heat-engine: ERROR (not running)
2015-05-05 19:09:45:699721: [root@10.84.13.32] out: heat-engine: started
2015-05-05 19:09:46:965054: [root@10.84.13.32] out: [localhost] local: service mysql restart
2015-05-05 19:09:46:972633: [root@10.84.13.32] out: * Stopping MySQL database server mysqld
2015-05-05 19:09:47:136656: [root@10.84.13.32] out: * Killing MySQL database server by signal mysqld
2015-05-05 19:09:47:152309: [root@10.84.13.32] out: ...done.
2015-05-05 19:09:51:173443: [root@10.84.13.32] out: * Starting MySQL database server mysqld
2015-05-05 19:09:51:205267: [root@10.84.13.32] out: ...done.
2015-05-05 19:09:57:245260: [root@10.84.13.32] out: [localhost] local: service supervisor-openstack restart
2015-05-05 19:09:57:245452: [root@10.84.13.32] out: supervisor-openstack stop/waiting
2015-05-05 19:10:22:480401: [root@10.84.13.32] out: supervisor-openstack start/running, process 18442
2015-05-05 19:10:22:480599: [root@10.84.13.32] out:
2015-05-05 19:10:22:481940:
2015-05-05 19:10:22:495215: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:22:495379: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:22:560821: [root@10.84.13.32] out:
2015-05-05 19:10:22:561094:
2015-05-05 19:10:22:561278: [root@10.84.13.32] sudo: dpkg-query -l "contrail-openstack-dashboard" | grep -q ^.i
2015-05-05 19:10:22:561478: [root@10.84.13.32] sudo: sed -i '/^HORIZON_CONFIG.*customization_module.*/d' /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:22:857115: [root@10.84.13.32] sudo: printf "HORIZON_CONFIG['customization_module'] = 'contrail_openstack_dashboard.overrides'
2015-05-05 19:10:22:923585: " >> /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:22:923696: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:22:978917: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:23:044893: [root@10.84.13.32] out:
2015-05-05 19:10:23:045186:
2015-05-05 19:10:23:045390: [root@10.84.13.32] sudo: sed -i '/LOGOUT_URL.*/d' /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:045566: [root@10.84.13.32] sudo: printf "LOGOUT_URL='/horizon/auth/logout/'
2015-05-05 19:10:23:133244: " >> /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:133338: [root@10.84.13.32] sudo: grep 'def hash_key' /etc/openstack-dashboard/local_settings.py

2015-05-05 19:10:23:277044: Warning: sudo() received nonzero return code 1 while executing 'grep 'def hash_key' /etc/openstack-dashboard/local_settings.py'!
2015-05-05 19:10:23:277044:
2015-05-05 19:10:23:277044: 2015-05-05 19:10:23:188843: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\ return new_key" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:277340: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\ new_key = m.hexdigest()" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:343671: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\ m.update(new_key)" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:410151: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\ m = hashlib.md5()" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:476058: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\ if len(new_key) > 250:" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:542829: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\ new_key = ':'.join([key_prefix, str(version), key])" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:614869: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\def hash_key(key, key_prefix, version):" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:681355: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\import hashlib" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:749097: [root@10.84.13.32] sudo: sed -i "/^SECRET_KEY.*/a\# To ensure key size of 250" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:848378: [root@10.84.13.32] sudo: sed -i "s/'LOCATION' : '127.0.0.1:11211',/'LOCATION' : '10.84.13.32:11211',/" /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:23:898954: [root@10.84.13.32] sudo: grep ''KEY_FUNCTION': hash_key,' /etc/openstack-dashboard/local_settings.py

2015-05-05 19:10:24:036852: Warning: sudo() received nonzero return code 1 while executing 'grep ''KEY_FUNCTION': hash_key,' /etc/openstack-dashboard/local_settings.py'!
2015-05-05 19:10:24:036852:
2015-05-05 19:10:24:036852: 2015-05-05 19:10:23:970181: [root@10.84.13.32] sudo: sed -i "/'LOCATION'.*/a\ 'KEY_FUNCTION': hash_key," /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:24:037006: [root@10.84.13.32] sudo: sed -i -e 's/OPENSTACK_HOST = "127.0.0.1"/OPENSTACK_HOST = "192.168.10.210"/' /etc/openstack-dashboard/local_settings.py
2015-05-05 19:10:24:105030: [root@10.84.13.32] sudo: service apache2 restart
2015-05-05 19:10:24:161173: [root@10.84.13.32] out: * Restarting web server apache2
2015-05-05 19:10:24:261050: [root@10.84.13.32] out: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
2015-05-05 19:10:25:495589: [root@10.84.13.32] out: ...done.
2015-05-05 19:10:25:499112: [root@10.84.13.32] out:
2015-05-05 19:10:25:499368:
2015-05-05 19:10:25:506907: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:25:507112: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:25:607297: [root@10.84.13.32] out:
2015-05-05 19:10:25:607492:
2015-05-05 19:10:25:607630: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:25:607828: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:25:712158: [root@10.84.13.32] out:
2015-05-05 19:10:25:712340:
2015-05-05 19:10:25:712495: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:25:712732: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:25:813578: [root@10.84.13.32] out:
2015-05-05 19:10:25:813783:
2015-05-05 19:10:25:817893: Dist is ubuntu
2015-05-05 19:10:25:818071: [root@10.84.13.32] sudo: dpkg -s nova-common | grep Version: | cut -d' ' -f2 | cut -d'-' -f1
2015-05-05 19:10:25:818178: [root@10.84.13.32] out: 1:2014.1.3
2015-05-05 19:10:25:954125: [root@10.84.13.32] out:
2015-05-05 19:10:25:954332:
2015-05-05 19:10:25:954710: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:25:954887: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:26:052284: [root@10.84.13.32] out:
2015-05-05 19:10:26:052496:
2015-05-05 19:10:26:052741: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:26:053177: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:26:151647: [root@10.84.13.32] out:
2015-05-05 19:10:26:151840:
2015-05-05 19:10:26:152079: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:26:152305: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:26:252251: [root@10.84.13.32] out:
2015-05-05 19:10:26:252459:
2015-05-05 19:10:26:252706: Dist is ubuntu
2015-05-05 19:10:26:252869: [root@10.84.13.32] sudo: dpkg -s nova-common | grep Version: | cut -d' ' -f2 | cut -d'-' -f1
2015-05-05 19:10:26:252968: [root@10.84.13.32] out: 1:2014.1.3
2015-05-05 19:10:26:387942: [root@10.84.13.32] out:
2015-05-05 19:10:26:388139:
2015-05-05 19:10:26:391321: [root@10.84.13.32] sudo: service mongodb stop
2015-05-05 19:10:26:391535: [root@10.84.13.32] out: mongodb stop/waiting
2015-05-05 19:10:26:476462: [root@10.84.13.32] out:
2015-05-05 19:10:26:476683:
2015-05-05 19:10:26:476835: [root@10.84.13.32] sudo: sed -i -e '/^[ ]*bind/s/^/#/' /etc/mongodb.conf
2015-05-05 19:10:26:476988: [root@10.84.13.32] sudo: service mongodb start
2015-05-05 19:10:26:543936: [root@10.84.13.32] out: mongodb start/running, process 19192
2015-05-05 19:10:26:643433: [root@10.84.13.32] out:
2015-05-05 19:10:26:643653:
2015-05-05 19:10:26:643888: [root@10.84.13.32] sudo: service mongodb status | grep not

2015-05-05 19:10:26:762535: Warning: sudo() received nonzero return code 1 while executing 'service mongodb status | grep not'!
2015-05-05 19:10:26:762535:
2015-05-05 19:10:26:762535: 2015-05-05 19:10:26:644047: [root@10.84.13.32] sudo: mongo --host 192.168.10.1 --quiet --eval "db.system.users.find({'user':'ceilometer'}).count()" ceilometer
2015-05-05 19:10:26:762694: [root@10.84.13.32] out: 0
2015-05-05 19:10:26:881667: [root@10.84.13.32] out:
2015-05-05 19:10:26:881868:
2015-05-05 19:10:26:884700: [root@10.84.13.32] sudo: mongo --host 192.168.10.1 --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [ "readWrite", "dbAdmin" ]})'
2015-05-05 19:10:26:884873: [root@10.84.13.32] out: MongoDB shell version: 2.4.9
2015-05-05 19:10:26:983754: [root@10.84.13.32] out: connecting to: 192.168.10.1:27017/test
2015-05-05 19:10:27:047487: [root@10.84.13.32] out: {
2015-05-05 19:10:27:047631: [root@10.84.13.32] out: "user" : "ceilometer",
2015-05-05 19:10:27:047707: [root@10.84.13.32] out: "pwd" : "447c1db3b92df9035684b39ad9fa2185",
2015-05-05 19:10:27:047782: [root@10.84.13.32] out: "roles" : [
2015-05-05 19:10:27:047856: [root@10.84.13.32] out: "readWrite",
2015-05-05 19:10:27:047931: [root@10.84.13.32] out: "dbAdmin"
2015-05-05 19:10:27:048004: [root@10.84.13.32] out: ],
2015-05-05 19:10:27:048076: [root@10.84.13.32] out: "_id" : ObjectId("55497813888d57cacae0f2bf")
2015-05-05 19:10:27:048147: [root@10.84.13.32] out: }
2015-05-05 19:10:27:048218: [root@10.84.13.32] out:
2015-05-05 19:10:27:048912:
2015-05-05 19:10:27:051731: [root@10.84.13.32] sudo: openstack-config --set /etc/ceilometer/ceilometer.conf database connection mongodb://ceilometer:CEILOMETER_DBPASS@192.168.10.1:27017/ceilometer
2015-05-05 19:10:27:051919: [root@10.84.13.32] sudo: openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rabbit_host 192.168.10.210
2015-05-05 19:10:27:187340: [root@10.84.13.32] sudo: openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT log_dir /var/log/ceilometer
2015-05-05 19:10:27:326280: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:27:438597: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:27:521464: [root@10.84.13.32] out:
2015-05-05 19:10:27:521672:
2015-05-05 19:10:27:521935: [root@10.84.13.32] sudo: python -c 'from platform import linux_distribution; print linux_distribution()'
2015-05-05 19:10:27:522144: [root@10.84.13.32] out: ('Ubuntu', '14.04', 'trusty')
2015-05-05 19:10:27:623236: [root@10.84.13.32] out:
2015-05-05 19:10:27:623436:
2015-05-05 19:10:27:623575: Dist is ubuntu
2015-05-05 19:10:27:623733: [root@10.84.13.32] sudo: dpkg -s nova-common | grep Version: | cut -d' ' -f2 | cut -d'-' -f1
2015-05-05 19:10:27:623833: [root@10.84.13.32] out: 1:2014.1.3
2015-05-05 19:10:27:723139: [root@10.84.13.32] out:
2015-05-05 19:10:27:723346:
2015-05-05 19:10:27:723597: [root@10.84.13.32] sudo: openstack-config --set /etc/ceilometer/ceilometer.conf publisher metering_secret a74ca26452848001921c
2015-05-05 19:10:27:723752: [root@10.84.13.32] sudo: openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT auth_strategy keystone
2015-05-05 19:10:27:858449: [root@10.84.13.32] sudo: source /etc/contrail/openstackrc;keystone user-get ceilometer
2015-05-05 19:10:27:993327: [root@10.84.13.32] out: Authorization Failed: Service Unavailable (HTTP 503)
2015-05-05 19:10:28:359427: [root@10.84.13.32] out:
2015-05-05 19:10:28:375000:

2015-05-05 19:10:28:392394: Warning: sudo() received nonzero return code 1 while executing 'source /etc/contrail/openstackrc;keystone user-get ceilometer'!
2015-05-05 19:10:28:392394:
2015-05-05 19:10:28:392394: 2015-05-05 19:10:28:392280: [root@10.84.13.32] sudo: source /etc/contrail/openstackrc;keystone user-create --name=ceilometer --pass=CEILOMETER_PASS --tenant=service --<email address hidden>
2015-05-05 19:10:28:392533: [root@10.84.13.32] out: Authorization Failed: Service Unavailable (HTTP 503)
2015-05-05 19:10:28:727758: [root@10.84.13.32] out:
2015-05-05 19:10:28:735420:

2015-05-05 19:10:28:743309: Fatal error: sudo() received nonzero return code 1 while executing!
2015-05-05 19:10:28:743309:
2015-05-05 19:10:28:743309: Requested: source /etc/contrail/openstackrc;keystone user-create --name=ceilometer --pass=CEILOMETER_PASS --tenant=service --<email address hidden>
2015-05-05 19:10:28:743309: Executed: sudo -S -p 'sudo password:' /bin/bash -l -c "source /etc/contrail/openstackrc;keystone user-create --name=ceilometer --pass=CEILOMETER_PASS --tenant=service --<email address hidden>"
2015-05-05 19:10:28:743309:
2015-05-05 19:10:28:743343: Aborting.
2015-05-05 19:10:28:743343: 2015-05-05 19:10:28:743190: Disconnecting from 10.84.13.32... done.
2015-05-05 19:10:28:834692: Disconnecting from 10.84.13.2... done.
2015-05-05 19:10:28:923047: Disconnecting from 10.84.13.44... done.
2015-05-05 19:10:28:990556: Disconnecting from 10.84.13.33... done.
2015-05-05 19:10:29:018443: Disconnecting from 10.84.13.38... done.
2015-05-05 19:10:29:094877: Disconnecting from 10.84.13.22... done.
2015-05-05 19:10:29:191330: Connection to 10.84.13.32 closed.
+ debug_and_die 'fab setup_interface or setup_all task failed'
+ local 'message=fab setup_interface or setup_all task failed'
+ '[' 1 = 1 ']'
+ echo 'Testbed is set to be locked on failure'
Testbed is set to be locked on failure
+ [[ fab setup_interface or setup_all task failed =~ Test failures exceed ]]
+ export RELEASE_TESTBED=0
+ RELEASE_TESTBED=0
+ set -x
+ flock 5
+ echo 'Locking testbed /cs-shared/testbed_locks/testbed_sanity_a6s32.py for debugging'
Locking testbed /cs-shared/testbed_locks/testbed_sanity_a6s32.py for debugging
+ echo 'Testbed locked..Unlock when debug complete'
+ cat /cs-shared/testbed_locks/testbed_sanity_a6s32.py
Occupied! BRANCH: R2.20, BUILDID: 9, BUILD_TAG: jenkins-ubuntu-14-04_icehouse_ha_Sanity-21
Testbed locked..Unlock when debug complete
+ '[' -z 'fab setup_interface or setup_all task failed' ']'
+ echo '/home/stack/jenkins/workspace/ubuntu-14-04_icehouse_ha_Sanity/contrail-tools/testers/utils: line 330: bringup_setup: fab setup_interface or setup_all task failed.'
/home/stack/jenkins/workspace/ubuntu-14-04_icehouse_ha_Sanity/contrail-tools/testers/utils: line 330: bringup_setup: fab setup_interface or setup_all task failed.
+ cat /cs-shared/testbed_locks/testbed_sanity_a6s32.py
Occupied! BRANCH: R2.20, BUILDID: 9, BUILD_TAG: jenkins-ubuntu-14-04_icehouse_ha_Sanity-21
Testbed locked..Unlock when debug complete
+ python /home/stack/jenkins/workspace/ubuntu-14-04_icehouse_ha_Sanity/contrail-tools/testers/upload.py --pkg_name /github-build/R2.20/9/ubuntu-14-04/icehouse/contrail-install-packages_2.20-9~icehouse_all.deb --jenkins_id jenkins-ubuntu-14-04_icehouse_ha_Sanity-21
+ exit 1
+ cleanup
+ unlock_testbed testbed_sanity_a6s32.py
+ tb_lock_file=/cs-shared/testbed_locks/testbed_sanity_a6s32.py
+ export tb_lock_file
+ '[' 0 -ne 1 ']'
+ echo 'Skipping unlocking the testbed /cs-shared/testbed_locks/testbed_sanity_a6s32.py'
Skipping unlocking the testbed /cs-shared/testbed_locks/testbed_sanity_a6s32.py
+ return 0
Build step 'Execute shell' marked build as failure
Build step 'Groovy Postbuild' marked build as failure
[PostBuildScript] - Execution post build scripts.
[ubuntu-14-04_icehouse_ha_Sanity] $ /bin/sh -xe /tmp/hudson3784329035001078539.sh
+ bash -ex contrail-tools/testers/jenkins_unlock_testbed.sh
+ LOCK_FILE_DIR=/cs-shared/testbed_locks
+ echo 'Current active jobs: '
Current active jobs:
+ flock 5
++ grep -l 'Occupied! .*jenkins-ubuntu-14-04_icehouse_ha_Sanity-21' /cs-shared/testbed_locks/alok_testbed_multinode.py /cs-shared/testbed_locks/alok_testbed_single_node.py /cs-shared/testbed_locks/nodea11_a29.py /cs-shared/testbed_locks/nodeg25_k9.py /cs-shared/testbed_locks/nodei1_i5.py /cs-shared/testbed_locks/skiranh_testbed.py /cs-shared/testbed_locks/testbed_a21_webui.py /cs-shared/testbed_locks/testbed_a5_webui.py /cs-shared/testbed_locks/testbed_akumari.py /cs-shared/testbed_locks/testbed_anju.py /cs-shared/testbed_locks/testbed_c1.py /cs-shared/testbed_locks/testbed_centos.py /cs-shared/testbed_locks/testbed_cent.py /cs-shared/testbed_locks/testbed_ceph2.py /cs-shared/testbed_locks/testbed_ceph3_HA.py /cs-shared/testbed_locks/testbed_ceph3.py /cs-shared/testbed_locks/testbed_ceph_perf.py /cs-shared/testbed_locks/testbed_g13_webui.py /cs-shared/testbed_locks/testbed_g1_webui.py /cs-shared/testbed_locks/testbed_g20_webui.py /cs-shared/testbed_locks/testbed_g24_webui.py /cs-shared/testbed_locks/testbed_i11_i15.py /cs-shared/testbed_locks/testbed_i21_i25.py /cs-shared/testbed_locks/testbed_i34_i38-multi-interface.py /cs-shared/testbed_locks/testbed_i34_i38.py /cs-shared/testbed_locks/testbed_i34.py /cs-shared/testbed_locks/testbed_icehouse_centos_1_node_webui.py /cs-shared/testbed_locks/testbed_nodea26_3_node.py /cs-shared/testbed_locks/testbed_nodea8.py /cs-shared/testbed_locks/testbed_nodeb12_single_node.py /cs-shared/testbed_locks/testbed_nodec22.py /cs-shared/testbed_locks/testbed_nodec27.py /cs-shared/testbed_locks/testbed_nodec4.py /cs-shared/testbed_locks/testbed_nodeg24_ubuntu_webui.py /cs-shared/testbed_locks/testbed.py /cs-shared/testbed_locks/testbed_redhat.py /cs-shared/testbed_locks/testbed_regression_6_node_multi_intf.py /cs-shared/testbed_locks/testbed_regression_6_node.py /cs-shared/testbed_locks/testbed_regression_6_node_vgw_multi_intf.py /cs-shared/testbed_locks/testbed_regression_6_node_vgw.py 
/cs-shared/testbed_locks/testbed_sanity_a6s32_DataCtrlMgmt.py /cs-shared/testbed_locks/testbed_sanity_a6s32.py /cs-shared/testbed_locks/testbed_sanity_nodeb2.py /cs-shared/testbed_locks/testbed_sanity_nodeb5.py /cs-shared/testbed_locks/testbed_sanity_nodeb8.py /cs-shared/testbed_locks/testbed_sanity_nodeb9.py /cs-shared/testbed_locks/testbed_sanity_nodec21_DataCtrlMgmt.py /cs-shared/testbed_locks/testbed_sanity_nodei31.py /cs-shared/testbed_locks/testbed_skiranh.py /cs-shared/testbed_locks/testbed_smgr_ci_multi_node.py /cs-shared/testbed_locks/testbed_smgr_multi_interface.py /cs-shared/testbed_locks/testbed_smgr_multi_node.py /cs-shared/testbed_locks/testbed_smgr_nodeg32.py /cs-shared/testbed_locks/testbed_smgr_single14_node.py /cs-shared/testbed_locks/testbed_smgr_single_node.py /cs-shared/testbed_locks/testbeds_smgr_multi_node.py /cs-shared/testbed_locks/testbed_ubuntu.py /cs-shared/testbed_locks/test_smgr_multi_node.py
+ tb_lock_file=/cs-shared/testbed_locks/testbed_sanity_a6s32.py
+ [[ ! -s /cs-shared/testbed_locks/testbed_sanity_a6s32.py ]]
+ grep -xq 'Testbed locked..Unlock when debug complete' /cs-shared/testbed_locks/testbed_sanity_a6s32.py
+ echo 'Testbed /cs-shared/testbed_locks/testbed_sanity_a6s32.py is locked for debug...'
Testbed /cs-shared/testbed_locks/testbed_sanity_a6s32.py is locked for debug...
+ cat /cs-shared/testbed_locks/testbed_sanity_a6s32.py
Occupied! BRANCH: R2.20, BUILDID: 9, BUILD_TAG: jenkins-ubuntu-14-04_icehouse_ha_Sanity-21
Testbed locked..Unlock when debug complete
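The trace above shows the harness's lock-on-failure flow: the job takes an exclusive `flock` on a shared per-testbed lock file, writes who holds it, and on failure leaves a "locked for debug" marker line that the post-build unlock script (`jenkins_unlock_testbed.sh`) greps for before deciding whether to release the testbed. A minimal sketch of that pattern, assuming illustrative paths and messages (not the real `/cs-shared/testbed_locks` contents):

```python
import fcntl
import os
import tempfile

# Marker line the unlock script checks for, as seen in the log:
# grep -xq 'Testbed locked..Unlock when debug complete' <lock file>
DEBUG_MARKER = "Testbed locked..Unlock when debug complete"

def lock_testbed(path, build_tag):
    """Write an 'Occupied!' record plus the debug marker under an
    exclusive flock, mirroring the 'flock 5' idiom in the shell trace."""
    with open(path, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)   # exclusive lock on the lock file
        fh.write("Occupied! BUILD_TAG: %s\n" % build_tag)
        fh.write(DEBUG_MARKER + "\n")
        fcntl.flock(fh, fcntl.LOCK_UN)

def is_locked_for_debug(path):
    """True if the lock file contains the exact debug-marker line,
    i.e. the unlock script should skip releasing this testbed."""
    with open(path) as fh:
        return any(line.rstrip("\n") == DEBUG_MARKER for line in fh)

# Demo with a hypothetical lock file in a temp directory.
lock_file = os.path.join(tempfile.mkdtemp(), "testbed_sanity_demo.py")
lock_testbed(lock_file, "jenkins-demo-21")
locked = is_locked_for_debug(lock_file)
```

This matches why the run above prints "Skipping unlocking the testbed": `RELEASE_TESTBED=0` plus the marker line keeps the testbed held for manual debugging.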
Email was triggered for: Failure
Sending email for trigger: Failure
Sending email to: <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden> <email address hidden>
Finished: FAILURE

Tags: blocker ha
venu kolli (vkolli)
Changed in juniperopenstack:
assignee: nobody → Ignatious Johnson Christopher (ijohnson-x)
Changed in juniperopenstack:
assignee: Ignatious Johnson Christopher (ijohnson-x) → Ranjeet R (rranjeet-n)
Sunil Bakhru (sbakhru)
Changed in juniperopenstack:
assignee: Ranjeet R (rranjeet-n) → Raj Reddy (rajreddy)
Megh Bhatt (meghb)
Changed in juniperopenstack:
assignee: Raj Reddy (rajreddy) → Megh Bhatt (meghb)
Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : R2.20

Review in progress for https://review.opencontrail.org/10047
Submitter: Megh Bhatt (<email address hidden>)

Megh Bhatt (meghb)
Changed in juniperopenstack:
status: New → In Progress
information type: Proprietary → Public
Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/10047
Committed: http://github.org/Juniper/contrail-fabric-utils/commit/d6fde3775d76fab8ff64338450be4ec2726dad29
Submitter: Zuul
Branch: R2.20

commit d6fde3775d76fab8ff64338450be4ec2726dad29
Author: Megh Bhatt <email address hidden>
Date: Wed May 6 22:06:08 2015 -0700

In HA setup, during setup_ceilometer_node, sometimes
keystone is still not up and we need to retry the
keystone command. Add handling of error message for
the same in HA since the error message is different
than non-HA setup due to HA proxy.
Closes-Bug: #1452392

Change-Id: I27fae55b749d2e194e42b33a0e8bd2e34ee2cdcc
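The commit message describes the shape of the fix: during `setup_ceilometer_node`, retry the keystone command instead of failing, and recognize the HA error string, which differs from the non-HA one because requests go through HAProxy. A minimal retry sketch under those assumptions; `run_with_retry`, `fake_keystone`, and the exact error substrings are illustrative stand-ins, not the actual contrail-fabric-utils code:

```python
import time

# Error substrings worth retrying on. The HA message (from HAProxy,
# which returns 503 while no keystone backend is up) differs from the
# non-HA connection error -- the distinction the fix adds handling for.
RETRIABLE_ERRORS = (
    "Unable to establish connection",   # non-HA: keystone not listening
    "503 Service Unavailable",          # HA: HAProxy has no live backend
)

def run_with_retry(cmd, tries=5, delay=0):
    """Run cmd(); retry while it fails with a retriable keystone error."""
    for attempt in range(1, tries + 1):
        try:
            return cmd()
        except RuntimeError as err:
            retriable = any(s in str(err) for s in RETRIABLE_ERRORS)
            if attempt == tries or not retriable:
                raise
            time.sleep(delay)  # real code would back off, e.g. 5s

# Mock keystone call: fails twice with the HA error, then succeeds,
# imitating keystone coming up mid-provisioning.
calls = {"n": 0}
def fake_keystone():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("503 Service Unavailable")
    return "service-list ok"

result = run_with_retry(fake_keystone)
```

With this shape, a transient "not up yet" window during HA provisioning is absorbed by the retries, while a genuinely wrong command still raises immediately.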

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : master

Review in progress for https://review.opencontrail.org/10264
Submitter: Megh Bhatt (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/10264
Committed: http://github.org/Juniper/contrail-fabric-utils/commit/8ab2b6c08be5b3ef2975188a8c69c906b55d621b
Submitter: Zuul
Branch: master

commit 8ab2b6c08be5b3ef2975188a8c69c906b55d621b
Author: Megh Bhatt <email address hidden>
Date: Wed May 6 22:06:08 2015 -0700

In HA setup, during setup_ceilometer_node, sometimes
keystone is still not up and we need to retry the
keystone command. Add handling of error message for
the same in HA since the error message is different
than non-HA setup due to HA proxy.
Closes-Bug: #1452392

Change-Id: I27fae55b749d2e194e42b33a0e8bd2e34ee2cdcc
(cherry picked from commit d6fde3775d76fab8ff64338450be4ec2726dad29)
