Error response from daemon: No such container: mariadb

Bug #1855268 reported by Viorel-Cosmin Miron
This bug affects 1 person
Affects: kolla-ansible
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

**Bug Report**

What happened: We tried to deploy our saved configuration and received the following error:
RUNNING HANDLER [mariadb : remove restart policy from master mariadb] ******************************************************************************************************************************************
task path: /root/venv/share/kolla-ansible/ansible/roles/mariadb/handlers/main.yml:261
Using module file /root/venv/local/lib/python2.7/site-packages/ansible/modules/commands/command.py
Pipelining is enabled.
<ss> ESTABLISH SSH CONNECTION FOR USER: None
<ss> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/bcd6204063 ss '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-olcpabprnuscfzhimmpoicefozqsnwar ; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<ss> (1, '\n{"changed": true, "end": "2019-12-05 13:03:53.344144", "stdout": "", "cmd": ["docker", "update", "--restart", "no", "mariadb"], "failed": true, "delta": "0:00:00.059409", "stderr": "Error response from daemon: No such container: mariadb", "rc": 1, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "strip_empty_ends": true, "_raw_params": "docker update --restart no mariadb", "removes": null, "argv": null, "warn": true, "chdir": null, "stdin_add_newline": true, "stdin": null}}, "start": "2019-12-05 13:03:53.284735", "msg": "non-zero return code"}\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Deprecated option "useroaming"\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 14507\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n')
<ss> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Deprecated option "useroaming"
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 14507
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
fatal: [ss]: FAILED! => {
    "changed": true,
    "cmd": [
        "docker",
        "update",
        "--restart",
        "no",
        "mariadb"
    ],
    "delta": "0:00:00.059409",
    "end": "2019-12-05 13:03:53.344144",
    "invocation": {
        "module_args": {
            "_raw_params": "docker update --restart no mariadb",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2019-12-05 13:03:53.284735",
    "stderr": "Error response from daemon: No such container: mariadb",
    "stderr_lines": [
        "Error response from daemon: No such container: mariadb"
    ],
    "stdout": "",
    "stdout_lines": []
}
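
For reference, the failing handler executes "docker update --restart no mariadb" on the target host. A minimal manual check (a sketch, assuming direct Docker CLI access on host ss) reproduces the error and confirms that no container named mariadb exists:

docker update --restart no mariadb
# Error response from daemon: No such container: mariadb

docker ps -a --filter name=mariadb --format '{{.Names}}'
# prints nothing: the mariadb container has not been created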

What you expected to happen: the OpenStack Horizon dashboard working after deployment.

How to reproduce it (minimal and precise): using the globals.yml below, run bootstrap-servers, prechecks, and deploy. The deploy fails at the MariaDB Docker restart-policy handler, as sketched below.
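
A minimal sketch of the commands used (assuming the kolla-ansible CLI from the virtualenv and an inventory file named multinode; both are placeholders for the actual paths):

kolla-ansible -i ./multinode bootstrap-servers
kolla-ansible -i ./multinode prechecks
kolla-ansible -i ./multinode deploy
# deploy stops at the "remove restart policy from master mariadb" handler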

**Environment**:
* OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

* Kernel (e.g. `uname -a`):
Linux ss 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

* Docker version if applicable (e.g. `docker version`):
Client: Docker Engine - Community
 Version: 19.03.5
 API version: 1.40
 Go version: go1.12.12
 Git commit: 633a0ea838
 Built: Wed Nov 13 07:29:52 2019
 OS/Arch: linux/amd64
 Experimental: false

Server: Docker Engine - Community
 Engine:
  Version: 19.03.5
  API version: 1.40 (minimum version 1.12)
  Go version: go1.12.12
  Git commit: 633a0ea838
  Built: Wed Nov 13 07:28:22 2019
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.2.10
  GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version: 1.0.0-rc8+dev
  GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683

* Kolla-Ansible version (e.g. `git head or tag or stable branch` or pip package version if using release): 8.0.0/8.0.1
* Docker image Install type (source/binary): binary
* Docker image distribution: community
* Are you using official images from Docker Hub or self built? Official
* Share your inventory file, globals.yml and other configuration files if relevant

Revision history for this message
Viorel-Cosmin Miron (uhl-hosting) wrote :
Revision history for this message
Viorel-Cosmin Miron (uhl-hosting) wrote :

attached inventory

Changed in kolla-ansible:
assignee: nobody → Dincer Celik (osmanlicilegi)
Revision history for this message
Viorel-Cosmin Miron (uhl-hosting) wrote :

RUNNING HANDLER [mariadb : remove restart policy from master mariadb] ********************************************************************************
task path: /root/venv3/share/kolla-ansible/ansible/roles/mariadb/handlers/main.yml:261
Using module file /root/venv3/lib/python3.6/site-packages/ansible/modules/commands/command.py
Pipelining is enabled.
<ss> ESTABLISH SSH CONNECTION FOR USER: None
<ss> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/bcd6204063 ss '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-oxmtqmdggjuocrfilshhqyumahftabfr ; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<ss> (1, b'\n{"changed": true, "end": "2019-12-15 12:48:20.238725", "stdout": "", "cmd": ["docker", "update", "--restart", "no", "mariadb"], "failed": true, "delta": "0:00:00.065252", "stderr": "Error response from daemon: No such container: mariadb", "rc": 1, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "strip_empty_ends": true, "_raw_params": "docker update --restart no mariadb", "removes": null, "argv": null, "warn": true, "chdir": null, "stdin_add_newline": true, "stdin": null}}, "start": "2019-12-15 12:48:20.173473", "msg": "non-zero return code"}\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Deprecated option "useroaming"\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 23424\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n')
<ss> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Deprecated option "useroaming"
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 23424
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
fata...

Revision history for this message
Viorel-Cosmin Miron (uhl-hosting) wrote :

Just as a reminder, this causes the whole OpenStack deployment to fail with a 503 on Horizon. The deployment fails from this point on, making the entire Kolla deployment unusable.

Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

Could you share the full log, or at least that of all the "mariadb :" tasks? It looks like the bootstrap logic got severely violated somehow.
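
For example, one way to capture and filter it (a rough sketch; the inventory path is a placeholder, and ANSIBLE_VERBOSITY=3 enables the -vvv level of detail shown above):

ANSIBLE_VERBOSITY=3 kolla-ansible -i ./multinode deploy 2>&1 | tee /tmp/deploy.log
grep 'mariadb :' /tmp/deploy.log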

Changed in kolla-ansible:
status: New → Incomplete
Revision history for this message
Viorel-Cosmin Miron (uhl-hosting) wrote :

PLAY [Apply role mariadb] ****************************************************************************************************************************
META: ran handlers

TASK [common : include_tasks] ************************************************************************************************************************
task path: /root/venv3/share/kolla-ansible/ansible/roles/common/tasks/main.yml:2
skipping: [ss] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [common : Registering common role has run] ******************************************************************************************************
task path: /root/venv3/share/kolla-ansible/ansible/roles/common/tasks/main.yml:6
skipping: [ss] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [mariadb : include_tasks] ***********************************************************************************************************************
task path: /root/venv3/share/kolla-ansible/ansible/roles/mariadb/tasks/main.yml:2
included: /root/venv3/share/kolla-ansible/ansible/roles/mariadb/tasks/deploy.yml for ss

TASK [mariadb : include_tasks] ***********************************************************************************************************************
task path: /root/venv3/share/kolla-ansible/ansible/roles/mariadb/tasks/deploy.yml:2
included: /root/venv3/share/kolla-ansible/ansible/roles/mariadb/tasks/config.yml for ss

TASK [mariadb : Ensuring config directories exist] ***************************************************************************************************
task path: /root/venv3/share/kolla-ansible/ansible/roles/mariadb/tasks/config.yml:2
Using module file /root/venv3/lib/python3.6/site-packages/ansible/modules/files/file.py
Pipelining is enabled.
<ss> ESTABLISH SSH CONNECTION FOR USER: None
<ss> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/bcd6204063 ss '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-fssswbsjzudfhydthbcuvxshmlwrqiew ; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<ss> (0, b'\n{"group": "root", "uid": 0, "changed": true, "owner": "root", "state": "directory", "gid": 0, "mode": "0770", "path": "/etc/kolla//mariadb", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/etc/kolla//mariadb", "owner": "root", "follow": true, "group": "root", "unsafe_writes": null, "state": "directory", "content": null, "serole": null, "selevel": null, "setype": null, "access_time": null, "access_time_format": "%Y%m%d%H%M.%S", "modification_time": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": "0770", "modification_time_format": "%Y%m%d%H%M.%S", "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/kolla//mariadb", "state": "directory", "mode": "07...

Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

OK, so Ansible sees that the mariadb volume is there and assumes no bootstrap is needed. This happens because the cleanup was not thorough enough. In this case, the procedure to amend it is to run mariadb_recovery if you want to preserve the existing data; otherwise, it is best to remove all lingering Docker volumes. Lesson learnt: ask reporters about existing volumes. :-)
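
A rough sketch of both options (assuming a single-node setup, the default data volume name mariadb, and a placeholder inventory path):

# option 1: keep the existing data and let kolla-ansible recover the cluster
kolla-ansible -i ./multinode mariadb_recovery

# option 2: discard the stale data, then re-deploy so bootstrap runs again
docker volume ls --filter name=mariadb
docker volume rm mariadb
kolla-ansible -i ./multinode deploy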

Revision history for this message
Viorel-Cosmin Miron (uhl-hosting) wrote :

ss : ok=461 changed=297 unreachable=0 failed=0 skipped=224 rescued=0 ignored=0

Thank you! This really helped. It is weird that the cleanup hadn't removed the volume.

Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

I guess it was a safety precaution to at least not lose all the data.

Changed in kolla-ansible:
status: Incomplete → Opinion
status: Opinion → Invalid
assignee: Dincer Celik (osmanlicilegi) → nobody