> Are the admin socket connections still open (e.g. shown in lsof) or
> just the files left around? The latter would probably be a bug in
> librados.
>
> Reaching max open files is common for ceph because of the low default
> limits, and the way its network layer works. Increasing the limits via
> ulimit causes no harm and is generally recommended. This is done for
> ceph daemons by the init scripts. Cinder's init script could change this
> too.
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/1446682
>
> Title:
> cinder-volume keeps opening Ceph clients until the maximum number of
> opened files reached
>
> Status in Cinder:
> Incomplete
>
> Bug description:
> In Juno, we see that cinder-volume with an RBD backend keeps opening
> connections to Ceph until it reaches the maximum number of opened files.
> With lsof we see that it keeps opening new Ceph admin sockets (e.g.
> /var/run/ceph/ceph-client.cinder.21428.128790752.asok).
>
> In the logs we constantly see this:
>
> 2015-04-21 08:57:06.913 21428 DEBUG cinder.manager [-] Notifying
> Schedulers of capabilities ... _publish_service_capabilities
> /usr/lib/python2.7/site-packages/cinder/manager.py:128
> 2015-04-21 08:57:06.918 21428 DEBUG
> cinder.openstack.common.periodic_task [-] Running periodic task
> VolumeManager._report_driver_status run_periodic_tasks
> /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:193
> 2015-04-21 08:57:06.919 21428 INFO cinder.volume.manager [-] Updating
> volume status
> 2015-04-21 08:57:06.919 21428 DEBUG cinder.volume.drivers.rbd [-]
> opening connection to ceph cluster (timeout=-1). _connect_to_rados
> /usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py:291
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/cinder/+bug/1446682/+subscriptions
>
They are most definitely open (i.e. they show up in lsof).
Also, we've raised the limit to 100k, and it reached that too.
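To be concrete, a check along these lines is what shows it (an illustrative command, not one from the original report; the PID has to be filled in for the cinder-volume process):

    # count the Ceph admin sockets the process is holding open
    lsof -p <cinder-volume-pid> | grep -c '/var/run/ceph/.*\.asok'

Each new /var/run/ceph/ceph-client.cinder.*.asok socket shows up there as another open file.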
On Tue, Apr 21, 2015, 23:10 Josh Durgin <email address hidden> wrote:
> Are the admin socket connections still open (e.g. shown in lsof) or
> just the files left around? The latter would probably be a bug in
> librados.
>
> Reaching max open files is common for ceph because of the low default
> limits, and the way its network layer works. Increasing the limits via
> ulimit causes no harm and is generally recommended. This is done for
> ceph daemons by the init scripts. Cinder's init script could change this
> too.
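
For reference, a rough sketch of the init-script suggestion above (the limit value here is only an example, not something from this thread):

    # in the cinder-volume init script, before the daemon is started:
    ulimit -n 65536
    # (on systemd hosts, LimitNOFILE= in the service unit does the same thing)

As noted above, though, raising the ceiling only buys time here: the limit was already raised to 100k and the client count still reached it.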