destroy tool is incomplete

Bug #1672240 reported by MarginHu
This bug affects 2 people
Affects: kolla
Status: Invalid
Importance: High
Assigned to: zhubingbing

Bug Description

Hi Guys,

I think the destroy tool is incomplete. The scenario is:

1. Build an OpenStack environment; it works well.
2. A MariaDB issue occurs and the database is crashed.
3. Run destroy, then rebuild the environment, but the mariadb service is always abnormal because of the old data.

So I think we should clean all data, including mariadb's (and maybe other components'), when destroying the environment.
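One way to make sure the old database really is gone after a destroy is to remove the MariaDB named volume by hand. A minimal sketch, assuming the volume is named "mariadb" (the kolla default; the name may differ in other deployments):

```shell
# Hypothetical manual cleanup after a destroy; the volume name "mariadb"
# is the kolla default and is an assumption here.
docker volume ls --filter name=mariadb   # confirm the stale volume exists
docker volume rm mariadb                 # delete the old database files
```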

Revision history for this message
zhubingbing (zhubingbing) wrote :

Good.

Changed in kolla:
status: New → Confirmed
importance: Undecided → High
Revision history for this message
Duong Ha-Quang (duonghq) wrote :

You should change the title to reflect the description better: this bug only covers the mariadb component.

Revision history for this message
Steven Dake (sdake) wrote :

This may cover all named volumes in general that are not cleaned up by the destroy feature.
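A hedged sketch of checking for any named volumes a destroy run left behind. Note that docker volume prune deletes every volume not attached to a container, logs included, so it is a blunt instrument:

```shell
docker volume ls -q      # list all named volumes still present on the host
docker volume prune -f   # remove every volume not used by any container
```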

Changed in kolla:
assignee: nobody → zhubingbing (zhubingbing)
milestone: none → pike-1
Revision history for this message
Duong Ha-Quang (duonghq) wrote :

@Steve: so we should update the description?

Revision history for this message
MarginHu (margin2017) wrote :

TASK [mariadb : include] *******************************************************
included: /opt/bgi-kolla/kolla-ansible/ansible/roles/mariadb/tasks/start.yml for kode0, kode1, kode2

TASK [mariadb : Starting mariadb container] ************************************
skipping: [kode0]
changed: [kode1]
changed: [kode2]

TASK [mariadb : Waiting for MariaDB service to be ready] ***********************
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (10 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (10 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (10 retries left).
ok: [kode0]
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (9 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (8 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (9 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (7 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (6 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (8 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (5 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (7 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (4 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (6 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (3 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (5 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (2 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (4 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (1 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (3 retries left).
fatal: [kode1]: FAILED! => {"attempts": 10, "changed": false, "failed": true, "module_stderr": "Shared connection to kode1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_f30bRe/ansible_module_wait_for.py\", line 540, in <module>\r\n main()\r\n File \"/tmp/ansible_f30bRe/ansible_module_wait_for.py\", line 481, in main\r\n response = s.recv(1024)\r\nsocket.error: [Errno 104] Connection reset by peer\r\n", "msg": "MODULE FAILURE"}
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (2 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (1 retries left).
fatal: [kode2]: FAILED! => {"attempts": 10, "changed": false, "failed": true, "module_stderr": "Shared connection to kode2 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_PzGW_h/ansible_module_wait_for.py\", line 540, in <module>\r\n main()\r\n File \"/tmp/ansible_PzGW_h/ansible_module_wait_for.py\",...


Revision history for this message
zhubingbing (zhubingbing) wrote :

Can you run kolla-ansible destroy --yes, then run:
kolla-ansible prechecks
kolla-ansible deploy

Revision history for this message
MarginHu (margin2017) wrote :

The issue hasn't reproduced since; I remember I cleaned up all the iptables rules left over from the previous deployment.

Could that be related to this issue?

Revision history for this message
zhubingbing (zhubingbing) wrote :

I can't reproduce this bug.

Changed in kolla:
status: Confirmed → Invalid
Revision history for this message
MarginHu (margin2017) wrote :

This time I reproduced the issue even after following comment #6.

Revision history for this message
MarginHu (margin2017) wrote :

[root@kode1 docker]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9df398bdc781 192.168.103.16:5000/bgi/centos-binary-mariadb:4.0.0.1 "kolla_start" 2 minutes ago Restarting (0) 4 seconds ago mariadb
bf5d97a8776a 192.168.103.16:5000/bgi/centos-binary-memcached:4.0.0.1 "kolla_start" 2 minutes ago Up 2 minutes memcached
0ea71f1067da 192.168.103.16:5000/bgi/centos-binary-kibana:4.0.0.1 "kolla_start" 2 minutes ago Up 2 minutes kibana
b7ed891f11eb 192.168.103.16:5000/bgi/centos-binary-elasticsearch:4.0.0.1 "kolla_start" 4 minutes ago Up 4 minutes elasticsearch
578467f2157e 192.168.103.16:5000/bgi/centos-binary-cron:4.0.0.1 "kolla_start" 4 minutes ago Up 4 minutes cron
d48b5b4ab9d5 192.168.103.16:5000/bgi/centos-binary-kolla-toolbox:4.0.0.1 "kolla_start" 4 minutes ago Up 4 minutes kolla_toolbox
1c6ecbcb451c 192.168.103.16:5000/bgi/centos-binary-fluentd:4.0.0.1 "kolla_start" 4 minutes ago Up 4 minutes fluentd
[root@kode1 docker]# cd /var/lib/docker/volumes/kolla_logs/_data/mariadb/
[root@kode1 mariadb]# ls
mariadb.log

[root@kode1 mariadb]# tail -n 100 mariadb.log
        0745c975,0
        1c699739,0
        319270e6,0
        46b83db9,0
        5bdeb2c7,0
        5dc7f26e,0
        7338683d,0
        885fabba,0
        9d856ffe,0
        9eed86eb,0
        b2ae1580,0
        c7cff5ac,0
        dcf41ff9,0
        f21f343e,0
})
170317 21:48:49 [Note] WSREP: (710c31d9, 'tcp://192.168.102.21:4567') turning message relay requesting off
170317 21:49:16 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
         at gcomm/src/pc.cpp:connect():158
170317 21:49:16 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
170317 21:49:16 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel 'openstack' at 'gcomm://192.168.102.20:4567,192.168.102.21:4567,192.168.102.22:4567': -110 (Connection timed out)
170317 21:49:16 [ERROR] WSREP: gcs connect failed: Connection timed out
170317 21:49:16 [ERROR] WSREP: wsrep::connect(gcomm://192.168.102.20:4567,192.168.102.21:4567,192.168.102.22:4567) failed: 7
170317 21:49:16 [ERROR] Aborting

170317 21:49:16 [Note] WSREP: Service disconnected.
170317 21:49:17 [Note] WSREP: Some threads may fail to exit.
170317 21:49:17 [Note] /usr/sbin/mysqld: Shutdown complete

170317 21:49:17 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended
170317 21:49:18 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/
170317 21:49:18 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/my...

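The WSREP errors in the log above mean the restarted node timed out trying to join an existing Galera cluster at the gcomm:// addresses, which is consistent with stale cluster state surviving a redeploy. Kolla-ansible ships a recovery command for re-bootstrapping a completely stopped MariaDB cluster; a hedged sketch (the inventory path is an example, not taken from this report):

```shell
# Re-bootstrap a fully stopped Galera cluster managed by kolla-ansible;
# "./multinode" below is a hypothetical inventory path.
kolla-ansible mariadb_recovery -i ./multinode
```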

Revision history for this message
MarginHu (margin2017) wrote :

I tried destroy with clean images and then deployed the environment, but still reproduced the issue.

This time:
TASK [mariadb : Waiting for MariaDB service to be ready] ***********************
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (10 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (10 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (10 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (9 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (9 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (9 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (8 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (8 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (7 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (7 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (8 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (6 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (6 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (5 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (5 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (7 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (4 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (4 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (3 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (3 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (6 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (2 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (2 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (1 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (1 retries left).
FAILED - RETRYING: TASK: mariadb : Waiting for MariaDB service to be ready (5 retries left).
fatal: [kode1]: FAILED! => {"attempts": 10, "changed": false, "failed": true, "module_stderr": "Shared connection to kode1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_Q9j2N1/ansible_module_wait_for.py\", line 540, in <module>\r\n main()\r\n File \"/tmp/ansible_Q9j2N1/ansible_module_wait_for.py\", line 481, in main\r\n response = s.recv(1024)\r\nsocket.error: [Errno 104] Connection reset by peer\r\n", "msg": "MODULE FAILURE"}
fatal: [kode2]: FAILED! => {"attempts": 10, "changed": false, "failed": true, "module_stderr": "Shared con...


Revision history for this message
Serge Radinovich (srad015) wrote :

I wiped /etc/ansible and re-installed kolla-ansible to fix this issue.
