cinder-storage-init container is failing

Bug #1834295 reported by Alessio
This bug affects 2 people
Affects: openstack-helm
Status: Expired
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

When deploying the Cinder chart, the deployment fails due to an error in the pod cinder-storage-init-XXX:

root@nation:/home/nation-user/openstack-helm# kubectl logs cinder-storage-init-4fmxq -n openstack
+ '[' xcinder.volume.drivers.rbd.RBDDriver == xcinder.volume.drivers.rbd.RBDDriver ']'
++ mktemp --suffix .yaml
+ SECRET=/tmp/tmp.mTcTHaxwx1.yaml
++ mktemp --suffix .keyring
+ KEYRING=/tmp/tmp.hJIbVloYtI.keyring
+ trap cleanup EXIT
+ set -ex
+ '[' xcinder.volume.drivers.rbd.RBDDriver == xcinder.volume.drivers.rbd.RBDDriver ']'
+ ceph -s
  cluster:
    id: 9d7e8e24-4086-489a-80e7-697dae4795dc
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum nation
    mgr: nation(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-bc8899bbc-rghzm=up:active}
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active

  data:
    pools: 19 pools, 101 pgs
    objects: 1.40 k objects, 534 MiB
    usage: 120 GiB used, 1.7 TiB / 1.8 TiB avail
    pgs: 101 active+clean

  io:
    client: 144 KiB/s wr, 0 op/s rd, 18 op/s wr

+ ensure_pool cinder.volumes 8 cinder-volume
+ ceph osd pool stats cinder.volumes
pool cinder.volumes id 19
  nothing is going on

++ egrep -c 'mimic|luminous'
++ ceph tell 'osd.*' version
++ xargs echo
+ local test_version=1
+ [[ 1 -gt 0 ]]
+ ceph osd pool application enable cinder.volumes cinder-volume
enabled application 'cinder-volume' on pool 'cinder.volumes'
++ ceph osd pool get cinder.volumes nosizechange
++ cut -f2 -d:
++ tr -d '[:space:]'
+ size_protection=false
+ ceph osd pool set cinder.volumes nosizechange 0
set pool 19 nosizechange to 0
+ ceph osd pool set cinder.volumes size 1
set pool 19 size to 1
+ ceph osd pool set cinder.volumes nosizechange false
set pool 19 nosizechange to false
+ ceph osd pool set cinder.volumes crush_rule same_host
set pool 19 crush_rule to same_host
++ ceph auth get client.cinder
exported keyring for client.cinder
+ USERINFO='[client.cinder]
        key = AQA7NhJdqx7PBhAAeZx03gKiUs4VvCKnOi7DTg==
        caps mon = "profile rbd"
        caps osd = "profile rbd"'
+ echo 'Cephx user client.cinder already exist.'
Cephx user client.cinder already exist.
Update its cephx caps
+ echo 'Update its cephx caps'
+ ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd'
updated caps for client.cinder
+ ceph auth get client.cinder -o /tmp/tmp.hJIbVloYtI.keyring
exported keyring for client.cinder
++ sed -n 's/^[[:blank:]]*key[[:blank:]]\+=[[:blank:]]\(.*\)/\1/p' /tmp/tmp.hJIbVloYtI.keyring
++ base64 -w0
+ ENCODED_KEYRING=QVFBN05oSmRxeDdQQmhBQWVaeDAzZ0tpVXM0VnZDS25PaTdEVGc9PQo=
+ cat
++ echo QVFBN05oSmRxeDdQQmhBQWVaeDAzZ0tpVXM0VnZDS25PaTdEVGc9PQo=
+ kubectl apply --namespace openstack -f /tmp/tmp.mTcTHaxwx1.yaml
error: SchemaError(io.k8s.api.apps.v1beta2.Deployment): invalid object doesn't have additional properties
+ cleanup
+ rm -f /tmp/tmp.mTcTHaxwx1.yaml /tmp/tmp.hJIbVloYtI.keyring

Digging into the error, it seems related to /tmp/tmp.mTcTHaxwx1.yaml, which contains the following:

apiVersion: v1
kind: Secret
metadata:
  name: "cinder-volume-rbd-keyring"
type: kubernetes.io/rbd
data:
  key: QVFBN05oSmRxeDdQQmhBQWVaeDAzZ0tpVXM0VnZDS25PaTdEVGc9PQo=
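The manifest above is generated and applied by the storage-init script. A minimal standalone sketch of that step is shown below; the key value is a made-up placeholder, and the `kubectl apply` call appears only as a comment since it needs a live cluster:

```shell
# Sketch of the secret-generation step performed by the storage-init job.
# KEY is a placeholder; the real script extracts the cephx key from the
# keyring exported by `ceph auth get client.cinder`.
KEY="AQA7-placeholder-not-a-real-cephx-key=="
ENCODED_KEYRING=$(echo "$KEY" | base64 -w0)

SECRET=$(mktemp --suffix .yaml)
cat > "$SECRET" <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: "cinder-volume-rbd-keyring"
type: kubernetes.io/rbd
data:
  key: $ENCODED_KEYRING
EOF

# The job then runs `kubectl apply --namespace openstack -f "$SECRET"`.
# With the bundled 1.10.3 client against a 1.15 API server, that step trips
# the client-side schema check (the SchemaError above); passing
# `kubectl apply --validate=false` is a common way to skip the stale
# client-side validation, though upgrading the client image is cleaner.
grep "key:" "$SECRET"
rm -f "$SECRET"
```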

Revision history for this message
Al Bailey (albailey1974) wrote :

In my environment I encountered the same issue because I was running a recent version of Kubernetes (1.15.0).
In my case it was the glance-storage-init job.

The storage-init jobs use the ceph-config-helper image, which bundles Kubernetes client 1.10.3.
A new image has not been pushed to Docker Hub in quite some time.

I believe the fix might be to build and push a ceph-config-helper with KUBE_VERSION set to the newer versions of kubernetes, and then select the image that corresponds to your deployment.
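Once a rebuilt image exists, the chart could be pointed at it through a values override. This is only a sketch: the image name and tag are hypothetical placeholders, and the `images.tags` key name should be verified against your chart version:

```yaml
# Hypothetical values override pointing the chart at a rebuilt image;
# replace the image reference with your own rebuilt ceph-config-helper.
images:
  tags:
    ceph_config_helper: docker.io/myorg/ceph-config-helper:v1.15.0
```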

Gage Hugo (gagehugo) wrote :

Are there any updates to this?

Changed in openstack-helm:
status: New → Incomplete
Launchpad Janitor (janitor) wrote :

[Expired for openstack-helm because there has been no activity for 60 days.]

Changed in openstack-helm:
status: Incomplete → Expired