This issue is now also being seen in the perf lab (all bare metal: 32 cores, 256 GB RAM, mixed SSD+SAN storage). It looks like Keystone is disrupting the environment by hogging the CPU; it might be the same defect as this one: https://bugs.launchpad.net/keystone/+bug/1182481
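(A quick way to watch the Keystone CPU usage over time is something like pidstat -u -p $(pgrep -d, -f keystone) 5 on the controller, assuming the sysstat tools are installed; top filtered to the keystone processes shows the same picture.)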
Note: my DB is PostgreSQL, with the buffer (256 MB) and max_connections (800) set optimally. There is no page-out.
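(That is, shared_buffers = 256MB and max_connections = 800 in postgresql.conf, assuming the 256 MB buffer refers to shared_buffers.)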
Keystone version:
Version: 1:2013.1-0ubuntu1.1~cloud0
Depends: python, upstart-job, python-keystone (= 1:2013.1-0ubuntu1.1~cloud0), adduser, ssl-cert (>= 1.0.12)
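(For reference, package details like these come from dpkg -s keystone on the node.)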
Typical response times (in seconds) from the nova-api log:
>> Switching from Admin tab to Project tab
2013-05-28 18:12:08.932 37206 INFO nova.osapi_compute.wsgi.server [-] (37206) accepted ('10.1.56.12', 52908)
2013-05-28 18:12:14.679 INFO nova.osapi_compute.wsgi.server [req-62e65521-d641-4c8f-a89d-fef317d1be8a 2efad4b253f64b4dae65a28f45438d93 4f342d62fff843fab63dc03316d34251] 10.1.56.12 "GET /v2/4f342d62fff843fab63dc03316d34251/servers/detail?project_id=4f342d62fff843fab63dc03316d34251 HTTP/1.1" status: 200 len: 22682 time: 5.7459810

>> Switching from Overview to Instances (with only 14 instances running)
2013-05-28 18:12:21.005 37206 INFO nova.osapi_compute.wsgi.server [-] (37206) accepted ('10.1.56.12', 52968)
2013-05-28 18:12:28.297 INFO nova.osapi_compute.wsgi.server [req-8087be8b-b44c-45ab-a4bc-9572c0984e65 2efad4b253f64b4dae65a28f45438d93 4f342d62fff843fab63dc03316d34251] 10.1.56.12 "GET /v2/4f342d62fff843fab63dc03316d34251/servers/detail?project_id=4f342d62fff843fab63dc03316d34251 HTTP/1.1" status: 200 len: 22682 time: 7.2911589

>> Switching from Instances to Volumes (with only 1 volume)
2013-05-28 18:16:30.220 37206 INFO nova.osapi_compute.wsgi.server [-] (37206) accepted ('10.1.56.12', 53243)
2013-05-28 18:16:38.533 INFO nova.osapi_compute.wsgi.server [req-729f3817-0362-4de1-a362-ce3ee856faca 2efad4b253f64b4dae65a28f45438d93 4f342d62fff843fab63dc03316d34251] 10.1.56.12 "GET /v2/4f342d62fff843fab63dc03316d34251/servers/detail?project_id=4f342d62fff843fab63dc03316d34251 HTTP/1.1" status: 200 len: 22682 time: 8.3117819
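To get more than a handful of samples, a minimal sketch like the following pulls every response time out of the nova-api log and summarizes it (the default path /var/log/nova/nova-api.log is an assumption; adjust for your setup):

import re
import sys

# Match the "time: X" field that nova-api appends to each request log line.
TIME_RE = re.compile(r'time: (\d+\.\d+)')

def main(path='/var/log/nova/nova-api.log'):  # default path is an assumption
    times = []
    with open(path) as f:
        for line in f:
            m = TIME_RE.search(line)
            if m:
                times.append(float(m.group(1)))
    if not times:
        print('no "time:" entries found')
        return
    times.sort()
    # Print count, min, mean, 95th percentile, and max of the response times.
    print('count=%d min=%.3fs avg=%.3fs p95=%.3fs max=%.3fs' % (
        len(times), times[0], sum(times) / len(times),
        times[int(len(times) * 0.95)], times[-1]))

if __name__ == '__main__':
    main(*sys.argv[1:])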
I will keep investigating and post updates here. In the meantime, if anyone has any clues, please let me know.