2019-03-27 19:34:45 |
Jim Golden |
bug |
|
|
added bug |
2019-03-27 20:06:32 |
Jim Golden |
description |
I am running a new deploy of Rocky 18.1.5 on Ubuntu 18.04 hosts, following the ceph example configuration options.
openstack-ansible setup-hosts.yml completes successfully
openstack-ansible setup-infrastructure.yml gets to "Retrieve keyrings for openstack clients from ceph cluster" when it fails with the following:
failed: [infra1_glance_container-a0526924 -> 172.29.236.11] (item=glance) => {
"attempts": 3,
"changed": false,
"cmd": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"delta": "0:00:00.004645",
"end": "2019-03-27 18:42:48.097148",
"invocation": {
"module_args": {
"_raw_params": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"item": "glance",
"msg": "non-zero return code",
"rc": 127,
"start": "2019-03-27 18:42:48.092503",
"stderr": "/bin/sh: 1: ceph: not found",
"stderr_lines": [
"/bin/sh: 1: ceph: not found"
],
"stdout": "",
"stdout_lines": []
}
user_variables.yml contains:
ceph_pkg_source: ceph
ceph_stable_release: mimic
## Common Ceph Overrides
ceph_mons:
- 172.29.236.11
- 172.29.236.12
- 172.29.236.13
## Ceph cluster fsid (must be generated before first run)
## Generate a uuid using: python -c 'import uuid; print(str(uuid.uuid4()))'
generate_fsid: false
fsid: 31371b52-9e73-40cd-94a7-c2aa2bac7c7e # Replace with your generated UUID
## ceph-ansible settings
## See https://github.com/ceph/ceph-ansible/tree/master/group_vars for
## additional configuration options available.
monitor_address_block: "172.29.236.0/22"
public_network: "172.29.236.0/22"
cluster_network: "172.29.244.0/22"
osd_scenario: collocated
osd_auto_discovery: true
journal_size: 10240 # size in MB
# ceph-ansible automatically creates pools & keys for OpenStack services
openstack_config: true
cinder_ceph_client: cinder
#cinder_backup_ceph_client: cinder-backup
glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images
glance_rbd_store_user: '{{ glance_ceph_client }}'
nova_libvirt_images_rbd_pool: vms
Ceph cluster is in a good state. |
I am running a new deploy of Rocky 18.1.5 on Ubuntu 18.04 hosts, following the ceph example configuration options.
openstack-ansible setup-hosts.yml completes successfully
openstack-ansible setup-infrastructure.yml completes successfully
openstack-ansible setup-openstack.yml (specifically os-glance-install.yml) gets to "Retrieve keyrings for openstack clients from ceph cluster" when it fails with the following:
failed: [infra1_glance_container-a0526924 -> 172.29.236.11] (item=glance) => {
"attempts": 3,
"changed": false,
"cmd": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"delta": "0:00:00.004645",
"end": "2019-03-27 18:42:48.097148",
"invocation": {
"module_args": {
"_raw_params": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"item": "glance",
"msg": "non-zero return code",
"rc": 127,
"start": "2019-03-27 18:42:48.092503",
"stderr": "/bin/sh: 1: ceph: not found",
"stderr_lines": [
"/bin/sh: 1: ceph: not found"
],
"stdout": "",
"stdout_lines": []
}
user_variables.yml contains:
ceph_pkg_source: ceph
ceph_stable_release: mimic
## Common Ceph Overrides
ceph_mons:
- 172.29.236.11
- 172.29.236.12
- 172.29.236.13
## Ceph cluster fsid (must be generated before first run)
## Generate a uuid using: python -c 'import uuid; print(str(uuid.uuid4()))'
generate_fsid: false
fsid: 31371b52-9e73-40cd-94a7-c2aa2bac7c7e # Replace with your generated UUID
## ceph-ansible settings
## See https://github.com/ceph/ceph-ansible/tree/master/group_vars for
## additional configuration options available.
monitor_address_block: "172.29.236.0/22"
public_network: "172.29.236.0/22"
cluster_network: "172.29.244.0/22"
osd_scenario: collocated
osd_auto_discovery: true
journal_size: 10240 # size in MB
# ceph-ansible automatically creates pools & keys for OpenStack services
openstack_config: true
cinder_ceph_client: cinder
#cinder_backup_ceph_client: cinder-backup
glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images
glance_rbd_store_user: '{{ glance_ceph_client }}'
nova_libvirt_images_rbd_pool: vms
Ceph cluster is in a good state. |
|
2019-03-27 20:13:54 |
Jim Golden |
description |
|
I am running a new deploy of Rocky 18.1.5 on Ubuntu 18.04 hosts, following the ceph example configuration options.
openstack-ansible setup-hosts.yml completes successfully
openstack-ansible setup-infrastructure.yml completes successfully
openstack-ansible setup-openstack.yml (specifically os-glance-install.yml) gets to "Retrieve keyrings for openstack clients from ceph cluster" when it fails with the following:
failed: [infra1_glance_container-a0526924 -> 172.29.236.11] (item=glance) => {
"attempts": 3,
"changed": false,
"cmd": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"delta": "0:00:00.004645",
"end": "2019-03-27 18:42:48.097148",
"invocation": {
"module_args": {
"_raw_params": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"item": "glance",
"msg": "non-zero return code",
"rc": 127,
"start": "2019-03-27 18:42:48.092503",
"stderr": "/bin/sh: 1: ceph: not found",
"stderr_lines": [
"/bin/sh: 1: ceph: not found"
],
"stdout": "",
"stdout_lines": []
}
user_variables.yml contains:
ceph_pkg_source: ceph
ceph_stable_release: mimic
## Common Ceph Overrides
ceph_mons:
- 172.29.236.11
- 172.29.236.12
- 172.29.236.13
## Ceph cluster fsid (must be generated before first run)
## Generate a uuid using: python -c 'import uuid; print(str(uuid.uuid4()))'
generate_fsid: false
fsid: 31371b52-9e73-40cd-94a7-c2aa2bac7c7e # Replace with your generated UUID
## ceph-ansible settings
## See https://github.com/ceph/ceph-ansible/tree/master/group_vars for
## additional configuration options available.
monitor_address_block: "172.29.236.0/22"
public_network: "172.29.236.0/22"
cluster_network: "172.29.244.0/22"
osd_scenario: collocated
osd_auto_discovery: true
journal_size: 10240 # size in MB
# ceph-ansible automatically creates pools & keys for OpenStack services
openstack_config: true
cinder_ceph_client: cinder
#cinder_backup_ceph_client: cinder-backup
glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images
glance_rbd_store_user: '{{ glance_ceph_client }}'
nova_libvirt_images_rbd_pool: vms
This will also fail if I do not include the ceph.conf in the user_variables.yml:
"[ceph_client : Get ceph.conf and store contents when ceph_conf_file is not defined]"
fatal: [infra1_glance_container-a0526924 -> 172.29.236.11]: FAILED! => {"changed": false, "msg": "file not found: /etc/ceph/ceph.conf"} |
|
2019-03-27 20:15:03 |
Jim Golden |
description |
|
I am running a new deploy of Rocky 18.1.5 on Ubuntu 18.04 hosts, following the ceph example configuration options.
openstack-ansible setup-hosts.yml completes successfully
openstack-ansible setup-infrastructure.yml completes successfully
openstack-ansible setup-openstack.yml (specifically os-glance-install.yml) gets to
"Ceph_Client: Retrieve keyrings for openstack clients from ceph cluster"
when it fails with the following:
failed: [infra1_glance_container-a0526924 -> 172.29.236.11] (item=glance) => {
"attempts": 3,
"changed": false,
"cmd": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"delta": "0:00:00.004645",
"end": "2019-03-27 18:42:48.097148",
"invocation": {
"module_args": {
"_raw_params": "ceph auth get client.glance >/dev/null && ceph auth get-or-create client.glance",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"item": "glance",
"msg": "non-zero return code",
"rc": 127,
"start": "2019-03-27 18:42:48.092503",
"stderr": "/bin/sh: 1: ceph: not found",
"stderr_lines": [
"/bin/sh: 1: ceph: not found"
],
"stdout": "",
"stdout_lines": []
}
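The rc 127 together with "/bin/sh: 1: ceph: not found" means the delegated host (172.29.236.11) has no ceph CLI on the shell's PATH, and the later failure shows /etc/ceph/ceph.conf is also absent there. A minimal sketch of the two pre-checks the failing tasks effectively depend on (hypothetical helper, not part of the playbook):

```python
import os
import shutil

def ceph_cli_ready(conf_path="/etc/ceph/ceph.conf"):
    """Check the two things the failing tasks rely on: the ceph CLI
    being on PATH (otherwise the shell task exits 127 with
    'ceph: not found') and the cluster config file existing
    (otherwise 'file not found: /etc/ceph/ceph.conf')."""
    cli = shutil.which("ceph")        # None when the binary is not on PATH
    conf = os.path.exists(conf_path)  # False triggers the ceph.conf failure
    return {"ceph_cli": cli, "ceph_conf": conf}
```

On the failing mon host this would report `ceph_cli: None`, which matches the rc 127 in the log above.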
user_variables.yml contains:
ceph_pkg_source: ceph
ceph_stable_release: mimic
## Common Ceph Overrides
ceph_mons:
- 172.29.236.11
- 172.29.236.12
- 172.29.236.13
## Ceph cluster fsid (must be generated before first run)
## Generate a uuid using: python -c 'import uuid; print(str(uuid.uuid4()))'
generate_fsid: false
fsid: 31371b52-9e73-40cd-94a7-c2aa2bac7c7e # Replace with your generated UUID
## ceph-ansible settings
## See https://github.com/ceph/ceph-ansible/tree/master/group_vars for
## additional configuration options available.
monitor_address_block: "172.29.236.0/22"
public_network: "172.29.236.0/22"
cluster_network: "172.29.244.0/22"
osd_scenario: collocated
osd_auto_discovery: true
journal_size: 10240 # size in MB
# ceph-ansible automatically creates pools & keys for OpenStack services
openstack_config: true
cinder_ceph_client: cinder
#cinder_backup_ceph_client: cinder-backup
glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images
glance_rbd_store_user: '{{ glance_ceph_client }}'
nova_libvirt_images_rbd_pool: vms
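For reference, the fsid override above is generated with the one-liner from the comment; the same thing as a small script (the value printed is random each run, the UUID in the config is just an example):

```python
import uuid

# Same as: python -c 'import uuid; print(str(uuid.uuid4()))'
fsid = str(uuid.uuid4())
print(fsid)  # a 36-character UUID, e.g. xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```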
This will also fail if I do not include the ceph.conf in the user_variables.yml:
"[ceph_client : Get ceph.conf and store contents when ceph_conf_file is not defined]"
fatal: [infra1_glance_container-a0526924 -> 172.29.236.11]: FAILED! => {"changed": false, "msg": "file not found: /etc/ceph/ceph.conf"} |
|
2021-02-11 09:40:17 |
Dmitriy Rabotyagov |
openstack-ansible: status |
New |
Invalid |
|