cinder-storage-init container is failing
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| openstack-helm | Expired | Undecided | Unassigned | |
Bug Description
When deploying the Cinder chart, the deployment fails due to an error in pod cinder-
```
root@nation:
+ '[' xcinder.
++ mktemp --suffix .yaml
+ SECRET=
++ mktemp --suffix .keyring
+ KEYRING=
+ trap cleanup EXIT
+ set -ex
+ '[' xcinder.
+ ceph -s
  cluster:
    id:     9d7e8e24-
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum nation
    mgr: nation(active)
    mds: cephfs-1/1/1 up {0=mds-
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active

  data:
    pools:   19 pools, 101 pgs
    objects: 1.40 k objects, 534 MiB
    usage:   120 GiB used, 1.7 TiB / 1.8 TiB avail
    pgs:     101 active+clean

  io:
    client: 144 KiB/s wr, 0 op/s rd, 18 op/s wr

+ ensure_pool cinder.volumes 8 cinder-volume
+ ceph osd pool stats cinder.volumes
pool cinder.volumes id 19
  nothing is going on

++ egrep -c 'mimic|luminous'
++ ceph tell 'osd.*' version
++ xargs echo
+ local test_version=1
+ [[ 1 -gt 0 ]]
+ ceph osd pool application enable cinder.volumes cinder-volume
enabled application 'cinder-volume' on pool 'cinder.volumes'
++ ceph osd pool get cinder.volumes nosizechange
++ cut -f2 -d:
++ tr -d '[:space:]'
+ size_protection
+ ceph osd pool set cinder.volumes nosizechange 0
set pool 19 nosizechange to 0
+ ceph osd pool set cinder.volumes size 1
set pool 19 size to 1
+ ceph osd pool set cinder.volumes nosizechange false
set pool 19 nosizechange to false
+ ceph osd pool set cinder.volumes crush_rule same_host
set pool 19 crush_rule to same_host
++ ceph auth get client.cinder
exported keyring for client.cinder
+ USERINFO=
key = AQA7NhJdqx7PBhA
caps mon = "profile rbd"
caps osd = "profile rbd"'
+ echo 'Cephx user client.cinder already exist.'
Cephx user client.cinder already exist.
Update its cephx caps
+ echo 'Update its cephx caps'
+ ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd'
updated caps for client.cinder
+ ceph auth get client.cinder -o /tmp/tmp.
exported keyring for client.cinder
++ sed -n 's/^[[:
++ base64 -w0
+ ENCODED_
+ cat
++ echo QVFBN05oSmRxeDd
+ kubectl apply --namespace openstack -f /tmp/tmp.
error: SchemaError(
+ cleanup
+ rm -f /tmp/tmp.
```
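The version gate visible in the middle of the trace can be sketched as follows. Note that this job succeeds, so it is not the failing step; the `ceph tell` output is stubbed here with a sample string (an assumption), since the real command needs a live cluster:

```shell
# Sketch of the version check from the trace above: the script only runs
# `ceph osd pool application enable` when the OSDs report a Luminous or
# Mimic (or newer) release. The versions string below is a stand-in for
# the real `ceph tell 'osd.*' version` output.
osd_versions='osd.0: ceph version 13.2.6 mimic (stable)'
test_version=$(echo "$osd_versions" | egrep -c 'mimic|luminous' | xargs echo)
if [ "$test_version" -gt 0 ]; then
    echo "pool application support available"
fi
```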
Digging into the error, it seems to be something related to /tmp/tmp.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: "cinder-
type: kubernetes.io/rbd
data:
  key: QVFBN05oSmRxeDd
```
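The way the script assembles this manifest can be reconstructed roughly as below. The key value and secret name are placeholders, not the real (truncated) ones from the log; the point is that the manifest itself is ordinary, and the `SchemaError` comes from the old kubectl client rather than from the file:

```shell
# Hypothetical reconstruction of the manifest generation. CEPH_KEY and the
# secret name are placeholders -- both are truncated in the log above.
CEPH_KEY='PLACEHOLDER-CEPHX-KEY'
ENCODED_KEY=$(echo -n "$CEPH_KEY" | base64 -w0)
SECRET=$(mktemp --suffix .yaml)
cat > "$SECRET" <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: "example-rbd-keyring"
type: kubernetes.io/rbd
data:
  key: $ENCODED_KEY
EOF
# The storage-init job then runs:
#   kubectl apply --namespace openstack -f "$SECRET"
# which is where the SchemaError is raised when the bundled kubectl client
# is much older than the API server.
```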
In my environment I encountered the same issue while running a recent version of Kubernetes (1.15.0); in my case it was the glance-storage job.
The storage-init jobs use the ceph-config-helper image, which bundles the 1.10.3 Kubernetes client, and no new image has been pushed to Docker Hub in quite some time.
I believe the fix might be to build and push a ceph-config-helper image with KUBE_VERSION set to a newer Kubernetes release, and then select the image that corresponds to your deployment.
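Once such an image exists, selecting it should be a values override on the affected chart. The key name and tag below are assumptions for illustration; check the chart's values.yaml for the exact image key used by your openstack-helm version:

```yaml
# Hypothetical values override -- the tag/image names here are placeholders:
images:
  tags:
    ceph_config_helper: "docker.io/example/ceph-config-helper:built-for-k8s-1.15"
```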