2018-04-12 12:35:14
Vladislav Belogrudov
Description:
TASK [cinder : Creating ceph pool] *********************************************
fatal: [10.196.244.201]: FAILED! => {"_ansible_parsed": true, "stderr_lines": ["Error ERANGE: pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)"], "changed": false, "end": "2018-04-12 12:42:29.120658", "_ansible_no_log": false, "_ansible_delegated_vars": {"ansible_host": "10.196.244.201"}, "cmd": ["docker", "exec", "ceph_mon", "ceph", "osd", "pool", "create", "volumes", "128", "128", "replicated", "disks"], "stdout": "", "start": "2018-04-12 12:42:28.490659", "delta": "0:00:00.629999", "stderr": "Error ERANGE: pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)", "rc": 34, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "_raw_params": "docker exec ceph_mon ceph osd pool create volumes 128 128 replicated disks", "removes": null, "warn": true, "chdir": null}}, "stdout_lines": [], "failed": true}
I have 3 controllers where the monitors run, and 3 OSD nodes with 1 Ceph disk per node.
Refer to https://docs.openstack.org/kolla-ansible/latest/reference/ceph-guide.html
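The ERANGE error is the monitor's hard cap on placement groups: each pool contributes pg_num × replication size PG instances, and the total across all pools may not exceed mon_max_pg_per_osd × num_in_osds. The check can be reproduced with the numbers from the error message (the 384 existing PG instances are inferred from 768 − 128 × 3, not stated directly in the log):

```python
# Sketch of the mon_max_pg_per_osd check that produced the ERANGE above.
# All figures come from the error text; existing_pg_instances is an
# inferred value (768 total minus the 384 the new pool would add).
mon_max_pg_per_osd = 200
num_in_osds = 3
limit = mon_max_pg_per_osd * num_in_osds            # 600

pg_num = 128
size = 3                                            # replication factor
new_pg_instances = pg_num * size                    # 384 for this pool
existing_pg_instances = 384                         # pools created earlier
total = existing_pg_instances + new_pg_instances    # 768

# Ceph refuses the "ceph osd pool create volumes 128 128 ..." command
# because the projected total exceeds the cap.
print(total, limit, total > limit)                  # 768 600 True
```

Possible workarounds (assumptions, not settings verified against this kolla-ansible version) are to lower pg_num for the pools being created so the total stays under 600, or to raise mon_max_pg_per_osd in the monitors' ceph.conf; with only 3 OSDs, a much smaller pg_num such as 32 or 64 per pool would be in line with the sizing guidance in the Ceph guide linked above.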