Authorization Failed: Bad Gateway (HTTP 502) when executing 'fuel node'

Bug #1572498 reported by Marcin Iwinski
Affects              Status        Importance   Assigned to             Milestone
Fuel for OpenStack   In Progress   Medium       Bartłomiej Piotrowski
7.0.x                Won't Fix     Medium       Fuel Sustaining
8.0.x                Won't Fix     Medium       Fuel Sustaining
Mitaka               Won't Fix     Medium       Fuel Sustaining

Bug Description

Detailed bug description:
  When operating the Fuel CLI, commands intermittently fail to authenticate. For example, "fuel node" returns the following traceback:
 Traceback (most recent call last):
  File "/usr/bin/fuel", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/fuelclient/cli/error.py", line 115, in wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/fuelclient/cli/parser.py", line 266, in main
    parser.parse()
  File "/usr/lib/python2.7/site-packages/fuelclient/cli/parser.py", line 143, in parse
    actions[parsed_params.action].action_func(parsed_params)
  File "/usr/lib/python2.7/site-packages/fuelclient/cli/actions/base.py", line 62, in action_func
    method(params)
  File "/usr/lib/python2.7/site-packages/fuelclient/cli/actions/node.py", line 299, in list
    node_collection = NodeCollection.get_all()
  File "/usr/lib/python2.7/site-packages/fuelclient/objects/node.py", line 172, in get_all
    return cls(Node.get_all())
  File "/usr/lib/python2.7/site-packages/fuelclient/objects/base.py", line 68, in get_all
    return map(cls.init_with_data, cls.get_all_data())
  File "/usr/lib/python2.7/site-packages/fuelclient/objects/base.py", line 64, in get_all_data
    return cls.connection.get_request(cls.class_api_path)
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 191, in get_request
    resp = self.get_request_raw(api, ostf, params)
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 184, in get_request_raw
    return self.session.get(url, params=params)
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 95, in session
    self._session = self._make_session()
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 77, in _make_session
    session.headers.update(self._make_common_headers())
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 58, in _make_common_headers
    'X-Auth-Token': self.auth_token}
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 102, in auth_token
    if not self.keystone_client.auth_token:
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 120, in keystone_client
    self.initialize_keystone_client()
  File "/usr/lib/python2.7/site-packages/fuelclient/client.py", line 134, in initialize_keystone_client
    tenant_name=self.tenant)
  File "/usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py", line 166, in __init__
    self.authenticate()
  File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py", line 589, in authenticate
    resp = self.get_raw_token_from_identity_service(**kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py", line 210, in get_raw_token_from_identity_service
    _("Authorization Failed: %s") % e)
keystoneclient.exceptions.AuthorizationFailure: Authorization Failed: Bad Gateway (HTTP 502)
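
For reference, the failing step can be reproduced outside the Fuel CLI with a minimal keystoneclient snippet. This is a sketch only: the endpoint and credentials below are illustrative, not taken from this environment (fuelclient normally reads them from its own config):

# Sketch of the auth call that intermittently fails with HTTP 502.
# All values below are hypothetical; substitute the real master node address.
from keystoneclient.v2_0 import client as ks_client

# Client() authenticates in __init__ (see the traceback above), so the
# AuthorizationFailure is raised here on a 502 from the Keystone endpoint.
keystone = ks_client.Client(
    username='admin',
    password='admin',                         # hypothetical credentials
    tenant_name='admin',
    auth_url='http://10.20.0.2:5000/v2.0')    # hypothetical master address
print(keystone.auth_token)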

At the time of the above error, the Keystone log (/var/log/docker-logs/keystone/keystone-all.log) contains the following traceback:
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi [req-d94b8fd5-fe15-4f5d-b33e-fadd8b278890 - - - - -] (psycopg2.OperationalError) FATAL: remaining connection slots are reserved for non-replication superuser connections
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi Traceback (most recent call last):
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 248, in __call__
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi result = method(context, **params)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/token/controllers.py", line 102, in authenticate
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi context, auth)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/token/controllers.py", line 295, in _authenticate_local
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi username, CONF.identity.default_domain_id)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 433, in wrapper
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return f(self, *args, **kwargs)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 444, in wrapper
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return f(self, *args, **kwargs)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1040, in decorate
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi should_cache_fn)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 651, in get_or_create
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi async_creator) as value:
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 158, in __enter__
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return self._enter()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 98, in _enter
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi generated = self._enter_create(createdtime)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 149, in _enter_create
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi created = self.creator()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 619, in gen_value
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi created_value = creator()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1036, in creator
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return fn(*arg, **kw)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 868, in get_user_by_name
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi ref = driver.get_user_by_name(user_name, domain_id)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 138, in get_user_by_name
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi session = sql.get_session()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/sql/core.py", line 192, in get_session
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return _get_engine_facade().get_session(expire_on_commit=expire_on_commit)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/sql/core.py", line 176, in _get_engine_facade
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi _engine_facade = db_session.EngineFacade.from_config(CONF)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1015, in from_config
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi expire_on_commit=expire_on_commit, _conf=conf)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 943, in __init__
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi slave_connection=slave_connection)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 338, in _start
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi engine_args, maker_args)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 362, in _setup_for_connection
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi sql_connection=sql_connection, **engine_kwargs)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 152, in create_engine
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi test_conn = _test_connection(engine, max_retries, retry_interval)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 326, in _test_connection
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return engine.connect()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2018, in connect
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return self._connection_cls(self, **kwargs)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 72, in __init__
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi if connection is not None else engine.raw_connection()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2104, in raw_connection
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi self.pool.unique_connection, _connection)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2078, in _wrap_pool_connect
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi e, dialect, self)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1401, in _handle_dbapi_exception_noconnection
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi util.raise_from_cause(newraise, exc_info)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi reraise(type(exception), exception, tb=exc_tb)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2074, in _wrap_pool_connect
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return fn()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in unique_connection
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return _ConnectionFairy._checkout(self)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 713, in _checkout
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi fairy = _ConnectionRecord.checkout(pool)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 480, in checkout
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi rec = pool._do_get()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 1060, in _do_get
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi self._dec_overflow()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi compat.reraise(exc_type, exc_value, exc_tb)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 1057, in _do_get
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return self._create_connection()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 323, in _create_connection
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return _ConnectionRecord(self)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 449, in __init__
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi self.connection = self.__connect()
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 607, in __connect
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi connection = self.__pool._invoke_creator(self)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 97, in connect
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return dialect.connect(*cargs, **cparams)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 385, in connect
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi return self.dbapi.connect(*cargs, **cparams)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi conn = _connect(dsn, connection_factory=connection_factory, async=async)
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi OperationalError: (psycopg2.OperationalError) FATAL: remaining connection slots are reserved for non-replication superuser connections
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi
2016-04-20 09:06:00.785 11406 ERROR keystone.common.wsgi

The FATAL error itself is also reported in the Postgres log (/var/lib/pgsql/data/pg_log/postgresql-Wed.log):
....
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections

OSTF is also not working, showing "OSTF server is not available.", even though the Docker container with the OSTF services is running:
[root@fuel ~]# dockerctl shell ostf ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 53984 3632 ? Ss Apr17 0:00 /usr/sbin/init
dbus 34 0.0 0.0 26428 1444 ? Ss Apr17 0:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 62 0.1 0.0 4336 668 ? Ss Apr17 5:55 /usr/sbin/acpid
root 487 0.0 0.0 418432 90656 ? Ssl Apr17 0:02 /usr/bin/python /usr/bin/ostf-server
root 564 0.0 0.0 19764 1232 ? Rs+ 10:10 0:00 ps aux

Every time the OSTF page is refreshed in the Fuel web UI, new FATAL messages appear in the Postgres log.

Postgres is reaching its configured connection limit:
[root@fuel ~]# dockerctl shell postgres
[root@fuel /]# grep max_connections /var/lib/pgsql/data/postgresql.conf
max_connections = 100 # (change requires restart)
# Note: Increasing max_connections costs ~400 bytes of shared memory per
# max_locks_per_transaction * (max_connections + max_prepared_transactions)

[root@fuel ~]# dockerctl shell postgres sudo -u postgres psql -c "SELECT count(*) FROM pg_stat_activity;"
 count
-------
    98
(1 row)

The majority of the existing connections are from Keystone:
[root@fuel ~]# dockerctl shell postgres sudo -u postgres psql -c "SELECT datname,count(datname) FROM pg_stat_activity group by datname;"
 datname | count
----------+-------
 keystone | 83
 postgres | 1
 nailgun | 13
 ostf | 1
(4 rows)
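
The same per-database tally can be scripted. A minimal psycopg2 sketch follows (assumptions, not from the bug report: psycopg2 is importable where this runs, and the Postgres instance is reachable with these hypothetical connection parameters; on this master the query was only ever run via "dockerctl shell postgres" as shown above):

# Tally pg_stat_activity by database, mirroring the psql query above.
import psycopg2

conn = psycopg2.connect(host='127.0.0.1',    # hypothetical container address
                        user='postgres', dbname='postgres')
cur = conn.cursor()
cur.execute("SELECT datname, count(*) FROM pg_stat_activity "
            "GROUP BY datname ORDER BY count(*) DESC;")
for datname, n in cur.fetchall():
    print('%-10s %s' % (datname, n))
conn.close()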

Steps to reproduce:
  On the Fuel master, execute "fuel node" multiple times, or alternatively open the OSTF page in the Fuel UI.

Expected results:
  The 'fuel node' command should always succeed, and the OSTF tests should be displayed.

Actual result:
  Some executions of the "fuel node" command fail with the traceback pasted above, and the OSTF page shows an error.

Workaround:
  Restart PostgreSQL:
  dockerctl shell postgres service postgresql restart

  Once Postgres is restarted, the active connections look much better:
[root@fuel ~]# dockerctl shell postgres sudo -u postgres psql -c "SELECT datname,count(datname) FROM pg_stat_activity group by datname;"
 datname | count
----------+-------
 keystone | 5
 postgres | 1
 nailgun | 4
(3 rows)

The OSTF UI then opens just fine; however, the number of sessions from Keystone increases with every execution of "fuel node".
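
To make the leak visible, a small watcher sketch can run "fuel node" repeatedly and print the Keystone connection count after each run (it assumes it is run on the Fuel master with dockerctl and the fuel CLI on PATH, as in the sessions above):

# Watch Keystone's connection count grow with each "fuel node" run.
import subprocess
import time

COUNT_SQL = "SELECT count(*) FROM pg_stat_activity WHERE datname = 'keystone';"

def keystone_connections():
    # psql -A -t prints just the bare number, with no headers or alignment.
    out = subprocess.check_output(
        ['dockerctl', 'shell', 'postgres', 'sudo', '-u', 'postgres',
         'psql', '-At', '-c', COUNT_SQL])
    return int(out.strip())

devnull = open('/dev/null', 'w')
for i in range(10):
    subprocess.call(['fuel', 'node'], stdout=devnull)
    print('after run %d: %d keystone connections'
          % (i + 1, keystone_connections()))
    time.sleep(1)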

Impact:
  High: health checks are not possible, so the QA tests cannot run at all.

Description of the environment:
Fuel is running on a physical host.
One environment (18 nodes in total: 9 physical, 9 VMs) is already deployed. The VMs were created using the reduced footprint feature [1].
An advanced network template is also used for the deployed environment.

Version of components:
VERSION:
  feature_groups:
    - mirantis
  production: "docker"
  release: "8.0"
  api: "1.0"
  build_number: "570"
  build_id: "570"
  fuel-nailgun_sha: "558ca91a854cf29e395940c232911ffb851899c1"
  python-fuelclient_sha: "4f234669cfe88a9406f4e438b1e1f74f1ef484a5"
  fuel-agent_sha: "658be72c4b42d3e1436b86ac4567ab914bfb451b"
  fuel-nailgun-agent_sha: "b2bb466fd5bd92da614cdbd819d6999c510ebfb1"
  astute_sha: "b81577a5b7857c4be8748492bae1dec2fa89b446"
  fuel-library_sha: "c2a335b5b725f1b994f78d4c78723d29fa44685a"
  fuel-ostf_sha: "3bc76a63a9e7d195ff34eadc29552f4235fa6c52"
  fuel-mirror_sha: "fb45b80d7bee5899d931f926e5c9512e2b442749"
  fuelmenu_sha: "78ffc73065a9674b707c081d128cb7eea611474f"
  shotgun_sha: "63645dea384a37dde5c01d4f8905566978e5d906"
  network-checker_sha: "a43cf96cd9532f10794dce736350bf5bed350e9d"
  fuel-upgrade_sha: "616a7490ec7199f69759e97e42f9b97dfc87e85b"
  fuelmain_sha: "d605bcbabf315382d56d0ce8143458be67c53434"
Network model:
Neutron with VXLANs

Additional Information:
A diagnostic snapshot can be provided privately to the engineer working on this bug, since it might contain customer-specific data.

[1] reduced footprint feature: https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#reduced-footprint-ops

Dmitry Pyzhov (dpyzhov)
Changed in fuel:
assignee: nobody → MOS Keystone (mos-keystone)
Revision history for this message
Boris Bobrov (bbobrov) wrote :

Well yes, it seems that Keystone needs more connections to PostgreSQL. Please increase the number of allowed connections.

Changed in fuel:
assignee: MOS Keystone (mos-keystone) → MOS Linux (mos-linux)
Revision history for this message
Marcin Iwinski (iwi) wrote :

I already did, but as I said in the bug description, after restarting Postgres every new execution of the "fuel node" command keeps opening additional connections, so increasing max_connections on the Postgres side only postpones the recurrence of this error.
Also, if we know that Keystone needs more connections, will the max_connections parameter be updated in future releases of Fuel?

Revision history for this message
Ivan Suzdal (isuzdal) wrote :

I presume it should be changed via Puppet manifests, so I'm transferring this to the Puppet team.

Changed in fuel:
assignee: MOS Linux (mos-linux) → MOS Puppet Team (mos-puppet)
Revision history for this message
Ivan Berezovskiy (iberezovskiy) wrote :

@Boris, what value are you expecting for the number of allowed connections?

Changed in fuel:
assignee: MOS Puppet Team (mos-puppet) → MOS Keystone (mos-keystone)
milestone: none → 8.0-updates
importance: Undecided → Medium
status: New → Confirmed
Revision history for this message
Boris Bobrov (bbobrov) wrote :

Please talk to Vladimir Kuklin or Sergii Golovatiuk. I know that they had a similar issue with MySQL and had to tune it.

I also don't quite get why postgresql is used. Is it the default in MOS 8.0?

Changed in fuel:
assignee: MOS Keystone (mos-keystone) → Boris Bobrov (bbobrov)
assignee: Boris Bobrov (bbobrov) → MOS Puppet Team (mos-puppet)
Revision history for this message
Boris Bobrov (bbobrov) wrote :

No, I was wrong above. I'll investigate it now.

Changed in fuel:
assignee: MOS Puppet Team (mos-puppet) → Boris Bobrov (bbobrov)
Revision history for this message
Boris Bobrov (bbobrov) wrote :

SQLAlchemy has pool_size = 5 by default. The Fuel node had 48 cores, so there were 96 instances of Keystone (48 public and 48 admin workers). The number of connections could therefore reach 48 * 2 * 5 = 480.

Since the limit in PostgreSQL is 100 connections, the number of Keystone instances should be lowered. 5 public and 5 admin workers would be enough, because they will take at most (5 + 5) * 5 = 50 connections, and we need the remaining 50 for other services.

Fuel people, could you please set public_workers=5 and admin_workers=5 for keystone on the admin node?
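
The arithmetic behind this proposal, as a quick sketch (pool_size = 5 is SQLAlchemy's default; per-pool max_overflow is ignored here, so the real ceiling can be even higher):

# Connection budget for Keystone on the admin node, per the comment above.
cores = 48
workers = cores * 2                   # one public + one admin worker per core
pool_size = 5                         # SQLAlchemy default per worker process
print(workers * pool_size)            # 480, far above max_connections = 100

# With the proposed cap the budget fits comfortably:
public_workers = admin_workers = 5
print((public_workers + admin_workers) * pool_size)   # 50, leaving ~50 spare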

Changed in fuel:
assignee: Boris Bobrov (bbobrov) → Fuel Library Team (fuel-library)
Marcin Iwinski (iwi)
tags: added: customer-found
Revision history for this message
Bartłomiej Piotrowski (bpiotrowski) wrote :

Assigned to all applicable branches + Sustaining team.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to fuel-library (master)

Fix proposed to branch: master
Review: https://review.openstack.org/314107

Changed in fuel:
assignee: Fuel Sustaining (fuel-sustaining-team) → Bartłomiej Piotrowski (bpiotrowski)
status: Confirmed → In Progress
Revision history for this message
Maksim Malchuk (mmalchuk) wrote :

Won't Fix for this medium bug in the backports because SCF (soft code freeze) has passed.

Changed in fuel:
assignee: Bartłomiej Piotrowski (bpiotrowski) → Maksim Malchuk (mmalchuk)
Changed in fuel:
assignee: Maksim Malchuk (mmalchuk) → Bartłomiej Piotrowski (bpiotrowski)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on fuel-library (master)

Change abandoned by Fuel DevOps Robot (<email address hidden>) on branch: master
Review: https://review.openstack.org/314107
Reason: This review is > 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.

no longer affects: fuel/newton
Dmitry Pyzhov (dpyzhov)
Changed in fuel:
milestone: 10.0 → 10.1