Activity log for bug #1535375

Date Who What changed Old value New value Message
2016-01-18 16:25:47 Roman Podoliaka bug added bug
2016-01-18 16:25:53 Roman Podoliaka oslo.service: assignee Roman Podoliaka (rpodolyaka)
2016-01-18 16:25:58 Roman Podoliaka bug task added oslo.db
2016-01-18 16:26:12 Roman Podoliaka oslo.db: assignee Roman Podoliaka (rpodolyaka)
2016-01-18 16:26:14 Roman Podoliaka oslo.db: status New Confirmed
2016-01-18 16:26:16 Roman Podoliaka oslo.service: status New Confirmed
2016-01-18 16:26:24 Roman Podoliaka oslo.db: importance Undecided Medium
2016-01-18 16:26:27 Roman Podoliaka oslo.service: importance Undecided Medium
2016-01-18 16:27:22 Roman Podoliaka description In oslo.service we default to 1000 greenlets in the pool for a WSGI server: cfg.IntOpt('wsgi_default_pool_size', default=1000, help="Size of the pool of greenthreads used by wsgi") This means up to $wsgi_default_pool_size HTTP requests may be processed concurrently by one fork of an OpenStack API service. It turns out that this does not play well with the oslo.db defaults for the DB connection pool, which fall back to SQLAlchemy's settings: at most 5 connections in the pool plus 10 overflow connections (overflow connections are closed once returned to the pool). With such defaults it is easy to start seeing timeout errors in service logs even under moderate concurrency, since SQLAlchemy gives up waiting for a connection from the pool/overflow after 30 s by default. For DB-oriented services (most OpenStack APIs) we should decrease the number of greenlets in the pool and increase the max_overflow value for DB connections.
More context: http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html
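The mismatch described above (up to 1000 concurrent greenthreads competing for at most 5 + 10 = 15 DB connections, with a 30 s wait timeout) can be illustrated with a toy model. The sketch below is not SQLAlchemy's actual QueuePool; MiniPool and its parameter names mirror the defaults mentioned in the report purely for illustration, with the timeout shortened so the example runs quickly:

```python
import threading


class MiniPool:
    """Toy stand-in for a DB connection pool with overflow and a wait timeout.

    Hypothetical class; parameters mirror the SQLAlchemy defaults cited in
    the bug report: pool_size=5, max_overflow=10, 30 s timeout.
    """

    def __init__(self, pool_size=5, max_overflow=10, timeout=30.0):
        # At most pool_size + max_overflow connections may be out at once.
        self._sem = threading.BoundedSemaphore(pool_size + max_overflow)
        self.timeout = timeout

    def acquire(self):
        # Block up to `timeout` seconds waiting for a free connection,
        # then give up, like SQLAlchemy's pool timeout behavior.
        if not self._sem.acquire(timeout=self.timeout):
            raise TimeoutError("pool limit reached, connection wait timed out")

    def release(self):
        self._sem.release()


# Shortened timeout so the demonstration finishes immediately.
pool = MiniPool(pool_size=5, max_overflow=10, timeout=0.1)

# The first 15 acquisitions succeed (5 pooled + 10 overflow) ...
for _ in range(15):
    pool.acquire()

# ... and the 16th concurrent request times out, mirroring the errors
# seen in service logs when 1000 greenthreads share one pool.
try:
    pool.acquire()
    timed_out = False
except TimeoutError:
    timed_out = True

print(timed_out)  # True
```

This is why the report recommends shrinking the WSGI greenthread pool and raising max_overflow for DB-oriented services: either change narrows the gap between admitted requests and available connections.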
2016-03-02 02:09:00 Davanum Srinivas (DIMS) oslo.service: status Confirmed Fix Released
2016-03-02 02:09:04 Davanum Srinivas (DIMS) oslo.db: status Confirmed Fix Released