resource-type-list up to 10x slower in Mitaka than Liberty

Bug #1596645 reported by Matt Fischer
Affects: OpenStack Heat
Status: Fix Released
Importance: Medium
Assigned to: Unassigned
Milestone: next

Bug Description

What happened to resource-type-list in Mitaka? From Liberty to Mitaka we see that it's up to 10x slower. The number of resource types is about the same in Mitaka (the response body is actually slightly smaller, by about 150 bytes).

Liberty:
/v1/1ed9235b01b7498c8987f272d3f655f9/resource_types => generated 2795 bytes in 429 msecs

Mitaka:
/v1/1ed9235b01b7498c8987f272d3f655f9/resource_types => generated 2618 bytes in 4021 msecs

Here's a sample of the API call times; it's not just a single call:

Liberty (in msecs):
411
599
435
456
581
483
386
454
366
687
429

Mitaka (in msecs):
4021
3923
4078
4207
3547
4144
3597
3954
3694
3999
4359
3943
3655
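
For completeness, a minimal sketch of how timings like these can be collected, assuming a valid keystone token in the OS_TOKEN environment variable; the tenant path is the one from the logs above, but the host and port are illustrative, not from the original report:

import os
import time

import requests

# Illustrative endpoint; substitute your own heat-api host and tenant ID.
ENDPOINT = ("http://heat-api.example.com:8004"
            "/v1/1ed9235b01b7498c8987f272d3f655f9/resource_types")
TOKEN = os.environ["OS_TOKEN"]  # assumes a pre-fetched keystone token

samples = []
for _ in range(10):
    start = time.monotonic()
    resp = requests.get(ENDPOINT, headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()
    samples.append((time.monotonic() - start) * 1000.0)

print("samples (msecs):", [int(s) for s in samples])
print("mean: %d msecs" % (sum(samples) / len(samples)))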

Revision history for this message
Thomas Herve (therve) wrote :

I can't reproduce on master; it seems to be fast for me. Does your environment have specific resources registered? It's possible there is an issue with custom resources.
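
(For reference, Heat loads custom resource plugins from the directories named by the plugin_dirs option in heat.conf, so checking those directories is a quick way to confirm whether anything custom is registered. A hedged example of the relevant setting; the paths shown are the usual defaults, not taken from this deployment:

[DEFAULT]
# Directories searched for custom resource plugins. If these are empty,
# only Heat's built-in resource types are registered.
plugin_dirs = /usr/lib64/heat,/usr/lib/heat
)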

Changed in heat:
importance: Undecided → Medium
milestone: none → next
status: New → Incomplete
Revision history for this message
Franciraldo Cavalcante (c-franciraldo-cavalcante) wrote :

Hi Thomas,

    I work with Matt Fischer. We're currently not using custom resources. Please find attached the results of the call to 'resource-type-list'.

    Please let us know if you have any ideas on how to debug the issue further.
# time heat resource-type-list
+------------------------------------------+
| resource_type |
+------------------------------------------+
| AWS::AutoScaling::AutoScalingGroup |
| AWS::AutoScaling::LaunchConfiguration |
| AWS::AutoScaling::ScalingPolicy |
| AWS::CloudFormation::Stack |
| AWS::CloudFormation::WaitCondition |
| AWS::CloudFormation::WaitConditionHandle |
| AWS::CloudWatch::Alarm |
| AWS::EC2::EIP |
| AWS::EC2::EIPAssociation |
| AWS::EC2::Instance |
| AWS::EC2::InternetGateway |
| AWS::EC2::NetworkInterface |
| AWS::EC2::RouteTable |
| AWS::EC2::SecurityGroup |
| AWS::EC2::Subnet |
| AWS::EC2::SubnetRouteTableAssociation |
| AWS::EC2::VPC |
| AWS::EC2::VPCGatewayAttachment |
| AWS::EC2::Volume |
| AWS::EC2::VolumeAttachment |
| AWS::ElasticLoadBalancing::LoadBalancer |
| AWS::IAM::AccessKey |
| AWS::IAM::User |
| AWS::RDS::DBInstance |
| AWS::S3::Bucket |
| OS::Cinder::EncryptedVolumeType |
| OS::Cinder::Volume |
| OS::Cinder::VolumeAttachment |
| OS::Cinder::VolumeType |
| OS::Designate::Domain |
| OS::Designate::Record |
| OS::Glance::Image |
| OS::Heat::AccessPolicy |
| OS::Heat::AutoScalingGroup |
| OS::Heat::CloudConfig |
| OS::Heat::HARestarter |
| OS::Heat::InstanceGroup |
| OS::Heat::MultipartMime |
| OS::Heat::None |
| OS::Heat::RandomString |
| OS::Heat::ResourceChain |
| OS::Heat::ResourceGroup |
| OS::Heat::ScalingPolicy |
| OS::Heat::SoftwareComponent |
| OS::Heat::SoftwareConfig |
| OS::Heat::SoftwareDeployment |
| OS::Heat::SoftwareDeploymentGroup |
| OS::Heat::SoftwareDeployments |
| OS::Heat::Stack |
| OS::Heat::StructuredConfig |
| OS::Heat::StructuredDeployment |
| OS::Heat::StructuredDeploymentGroup |
| OS::Heat::StructuredDeployments |
| OS::Heat::SwiftSignal |
| OS::Heat::SwiftSignalHandle |
| OS::Heat::TestResource |
| OS::Heat::UpdateWaitConditionHandle |
| OS::Heat::WaitCondition |
| OS::Heat::WaitConditionHandle |
| OS::Keystone::Endpoint |
| OS::Keystone::Group |
| OS::Keystone::GroupRoleAssignment |
| OS::K...


Changed in heat:
status: Incomplete → New
Revision history for this message
Jason Dunsmore (jasondunsmore) wrote :

I'm seeing it respond in 1.1 to 1.4 seconds with the latest master.

Revision history for this message
Steve Baker (steve-stevebaker) wrote :

It could be that the keystone catalog is being unnecessarily fetched for every call to Resource.is_service_available; some investigation should confirm that.

Here would be a good place to start:
http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resource.py#n671
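
A sketch of the kind of change that investigation might point to: memoize the catalog lookup so that listing resource types triggers at most one keystone round trip instead of one per Resource.is_service_available call. The context class and fetch method below are illustrative stand-ins, not Heat's actual code:

import time

class FakeContext:
    """Stand-in for Heat's request context; purely illustrative."""
    auth_token = "token-123"
    fetches = 0

    def fetch_service_catalog(self):
        # Placeholder for the real keystone round trip.
        FakeContext.fetches += 1
        time.sleep(0.05)  # simulate network latency
        return [{"type": "orchestration"}, {"type": "compute"}]

_catalog_cache = {}  # auth token -> set of available service types

def _service_types(context):
    token = context.auth_token
    if token not in _catalog_cache:
        _catalog_cache[token] = {
            entry["type"] for entry in context.fetch_service_catalog()}
    return _catalog_cache[token]

def is_service_available(context, service_type):
    # Cheap set lookup; the catalog is fetched once per token.
    return service_type in _service_types(context)

ctx = FakeContext()
for _ in range(60):  # roughly one check per registered resource type
    is_service_available(ctx, "compute")
print("keystone fetches:", FakeContext.fetches)  # 1, not 60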

Revision history for this message
Steve Baker (steve-stevebaker) wrote :

Also, it's possible this is already fixed in git master by https://review.openstack.org/#/c/320504

Revision history for this message
Dmitriy (duvarenkov) wrote :

Tried with Mitaka and Liberty; got the same time in both, about 1100-1700 ms.

Revision history for this message
Zane Bitter (zaneb) wrote :

Sounds like this was fixed in Newton.

Changed in heat:
status: New → Fix Released