Performance issue related to S3 API and/or Keystone

Bug #1570797 reported by Alexander Petrov
Affects: Mirantis OpenStack (status tracked in 10.0.x)

Series   Status    Importance   Assigned to
10.0.x   Invalid   High         Alexander Petrov
8.0.x    Invalid   High         MOS Keystone
9.x      Invalid   High         Alexander Petrov

Bug Description

Detailed bug description:

Rally load tests related to S3 show low performance that appears to be closely connected to Keystone. It is necessary to find out the cause of this behavior. A more detailed report is available here:
https://docs.google.com/document/d/1I0djM15qHSX3gmIglJkWGL6vCcN8zjo86tE8q-PnjGM/edit
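
For context: when RadosGW serves the S3 API with Keystone authentication enabled, it validates the signature of each S3 request against Keystone, so S3 throughput is bounded by how fast Keystone answers these validation calls. Below is a minimal sketch of that validation call, assuming the standard Keystone v3 s3tokens extension; the endpoint, keys, and request line are illustrative placeholders, not values from this environment.

    import base64
    import hashlib
    import hmac

    import requests

    # Placeholder endpoint and EC2-style credentials (not from this environment).
    KEYSTONE_URL = "http://keystone.example:35357"
    EC2_ACCESS = "my-access-key"
    EC2_SECRET = b"my-secret-key"

    # The canonical "string to sign" an S3 client builds for its request
    # (AWS signature v2 layout: verb, MD5, content type, date, resource);
    # RadosGW forwards it so Keystone can recompute the signature.
    string_to_sign = b"GET\n\n\nThu, 01 Jan 1970 00:00:00 GMT\n/bucket/object"
    signature = base64.b64encode(
        hmac.new(EC2_SECRET, string_to_sign, hashlib.sha1).digest()
    ).decode()

    # RadosGW makes roughly one such call per authenticated S3 request,
    # which is why S3 load translates directly into Keystone load.
    resp = requests.post(
        KEYSTONE_URL + "/v3/s3tokens",
        json={
            "credentials": {
                "access": EC2_ACCESS,
                "token": base64.b64encode(string_to_sign).decode(),
                "signature": signature,
            }
        },
    )
    print(resp.status_code, resp.elapsed.total_seconds())

Measuring resp.elapsed under increasing concurrency would show whether this endpoint is the bottleneck.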

Description of the environment:

 MOS 8.0 with 6 nodes (CPU: 1 (1), HDD: 150.0 GB, RAM: 3.0 GB)
 3 nodes - Controller
 3 nodes - Compute + Storage (Ceph OSD)

tags: added: area-keystone keystone
Changed in mos:
milestone: none → 8.0-updates
Changed in mos:
assignee: nobody → MOS Keystone (mos-keystone)
importance: Undecided → High
description: updated
Revision history for this message
Bug Checker Bot (bug-checker) wrote : Autochecker

(This check performed automatically)
Please make sure that the bug description contains the following sections, filled in with the appropriate data related to the bug you are describing:

actual result

expected result

steps to reproduce

For more detailed information on the contents of each of the listed sections see https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Here_is_how_you_file_a_bug

tags: added: need-info
Revision history for this message
Alexander Petrov (apetrov-n) wrote :

I have performed a simple test: keystoneclient generates concurrent queries from one single user (admin). Each query is only a connection/authentication, with no other operations. In Rally terms, this scenario looks like this:
 "KeystoneAPI.test_01": [
    {
      "runner": {
        "type": "constant",
        "concurrency": 600,
        "times": 4000
      }
    }
 ]
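
For reference, a standalone equivalent of this scenario could look roughly like the sketch below, assuming keystoneauth1 and placeholder endpoint/credentials: each worker only builds a session and fetches a token, with no further API calls.

    import concurrent.futures

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder endpoint and credentials -- adjust to the environment under test.
    AUTH_URL = "http://keystone.example:5000/v3"
    CONCURRENCY = 600   # matches the Rally "concurrency" above
    TIMES = 4000        # matches the Rally "times" above

    def authenticate(_):
        # Build a fresh auth plugin and fetch a token: a pure
        # connect-and-authenticate operation, mirroring the scenario above.
        auth = v3.Password(
            auth_url=AUTH_URL,
            username="admin",
            password="secret",
            project_name="admin",
            user_domain_name="Default",
            project_domain_name="Default",
        )
        return session.Session(auth=auth).get_token()

    # Spread the 4000 authentications over 600 worker threads,
    # approximating Rally's "constant" runner.
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        tokens = list(pool.map(authenticate, range(TIMES)))
    print("issued %d tokens" % len(tokens))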

Env:
3 controllers, 3 compute+storage
OpenStack Release: Mitaka on Ubuntu 14.04
Compute: QEMU
Network: Neutron with VLAN segmentation
Storage Backends:
   Ceph RBD for volumes (Cinder)
   Ceph RadosGW for objects (Swift API)
   Ceph RBD for ephemeral volumes (Nova)
   Ceph RBD for images (Glance)
Capacity: CPU (cores): 3 (3), RAM: 8.9 GB, HDD: 450.0 GB

The results of the test are shown here:
http://172.18.196.27/s3token/9.08.html#/KeystoneAPI.test_01/overview

In short, I see that Keystone (in this configuration) is not able to service more than ~500 concurrent connections.
I don't know whether this is normal behavior for Keystone under such a workload.
If a bottleneck exists, we should figure out where the problem is.

tags: added: ct1
Revision history for this message
Leontii Istomin (listomin) wrote :

It seems this is the same issue as described here: https://bugs.launchpad.net/mos/+bug/1583095
The root cause is the rsyslog configuration: https://bugs.launchpad.net/fuel/+bug/1580200
Please make sure that you have this fix applied: https://review.openstack.org/#/c/319458/

Revision history for this message
Dina Belova (dbelova) wrote :

Alexander, per comment https://bugs.launchpad.net/mos/+bug/1570797/comments/3, it looks like you need to try applying the needed fix on 8.0 (the fix was developed for 9.0). Please contact the Maintenance team about whether bug https://bugs.launchpad.net/fuel/+bug/1580200 can be fixed in 8.0 as well.

Revision history for this message
Dina Belova (dbelova) wrote :

One more thing: is it possible for you to reproduce the same issue on a 9.0 ISO with the https://review.openstack.org/#/c/319458/ fix included?

Revision history for this message
Dina Belova (dbelova) wrote :

Marking this Invalid for 9.0 (at least for now) because the Keystone team is highly confident that https://review.openstack.org/#/c/319458/ should make this issue go away in MOS 9.0. Alex will additionally recheck it against 9.0 after June 14th. He will also check with the maintenance team whether the fix can be backported to 8.0 as well.

Revision history for this message
Dina Belova (dbelova) wrote :

Per my previous comment, I'm marking this bug as Confirmed for 8.0 (unless we prove otherwise) and Invalid for 10.0, since the same fix is already included there.

Revision history for this message
Rodion Tikunov (rtikunov) wrote :

The 8.0 code does not contain the lines which should be deleted by patch [0].
As per [1], it is dangerous to backport the other patches.

[0] https://review.openstack.org/#/c/319458/
[1] https://bugs.launchpad.net/fuel/+bug/1580200/comments/18
