In Juno, we see that cinder-volume with an RBD backend keeps opening connections to Ceph until it reaches the maximum number of open files.
With lsof we see that it keeps opening new Ceph admin sockets (e.g. /var/run/ceph/ceph-client.cinder.21428.128790752.asok).
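For reference, something like the following could be used to watch both numbers grow over time (a rough sketch only, assuming a Linux /proc filesystem and the cinder-volume PID seen in the admin socket path above):

# Rough monitoring sketch (assumption: Linux /proc is available and the
# cinder-volume PID is the 21428 seen in the admin socket path above).
import glob
import os
import time

CINDER_VOLUME_PID = 21428  # adjust to the local cinder-volume process

while True:
    # Open file descriptors of the cinder-volume process.
    open_fds = len(os.listdir('/proc/%d/fd' % CINDER_VOLUME_PID))
    # Ceph admin sockets created by this process's client connections.
    admin_socks = len(glob.glob(
        '/var/run/ceph/ceph-client.cinder.%d.*.asok' % CINDER_VOLUME_PID))
    print('open fds: %d, cinder admin sockets: %d' % (open_fds, admin_socks))
    time.sleep(60)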
In the logs we constantly see this:
2015-04-21 08:57:06.913 21428 DEBUG cinder.manager [-] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.7/site-packages/cinder/manager.py:128
2015-04-21 08:57:06.918 21428 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:193
2015-04-21 08:57:06.919 21428 INFO cinder.volume.manager [-] Updating volume status
2015-04-21 08:57:06.919 21428 DEBUG cinder.volume.drivers.rbd [-] opening connection to ceph cluster (timeout=-1). _connect_to_rados /usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py:291
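From the last line it looks like every run of the _report_driver_status periodic task goes through _connect_to_rados and opens a fresh connection. As a very rough illustration of the suspected pattern (a sketch against the python-rados bindings directly, not the actual driver code):

# Minimal sketch of the suspected pattern (hypothetical, not the actual
# cinder driver code), written against the python-rados bindings.
import rados

def report_driver_status_once():
    # A new client connection is opened for every periodic status report;
    # each connect() consumes a file descriptor and creates another
    # /var/run/ceph/ceph-client.cinder.<pid>.<id>.asok admin socket.
    client = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
    client.connect()
    try:
        return client.get_cluster_stats()  # e.g. for free/used capacity
    finally:
        # If this shutdown() is missing, or an error path skips it, the
        # connection is never closed and the descriptors and admin sockets
        # pile up until the process hits its open-files limit.
        client.shutdown()

If that cleanup step is missing or skipped on some path, each periodic task run would leak one descriptor and leave another admin socket behind, which would match what lsof shows here.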
I assume you trigger this simply by creating a large number of volumes? Or is there something more to your scenario? I'll try to reproduce this myself, but any additional info you can give would be helpful. Thanks for the bug report!