Retrieve IP addresses of servers with Heat autoscaling

Bug #1439213 reported by binou
This bug affects 1 person
Affects: OpenStack Heat
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hi,
I am testing autoscaling with Heat and Ceilometer.
I want to use the fixed IP address and the floating IP address of each server to execute a script in the user_data.
I tried this in the outputs section:

outputs_list:
    value: {'get_attr': [dbserver_group, outputs_list, networks, binetou_internal_net, 0]}

but that does not work. How can I do this, please?
Here is my template.

heat_template_version: 2013-05-23

description: Basic (1 server)

resources:
  dbserver_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: http://10.236.242.183/test.yaml
        properties:
          key_name: binetou
          image: cassandra2
          flavor: m1.medium
          pool_id: {get_resource: pool}
          metadata: {"metering.stack": {get_param: "OS::stack_id"}}
          user_data:
            str_replace:
              template: |
                #!/bin/bash
                sh /home/osadmin/scriptconf.sh $address $adfloat  # the addresses of each server
              params:
                $address: {get_attr: [lb_vip_port, fixed_ips, 0, ip_address]}
                $adfloat: {get_attr: [lb_vip_floating_ip, floating_ip_address]}

  dbserver_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: dbserver_group}
      cooldown: 60
      scaling_adjustment: 1

  dbserver_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: dbserver_group}
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale up if the average CPU > 25% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 25
      alarm_actions:
        - {get_attr: [dbserver_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: gt

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale down if the average CPU < 15% for 10 minutes
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      threshold: 15
      alarm_actions:
        - {get_attr: [dbserver_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: lt

  lb_vip_port:
    type: OS::Neutron::Port
    properties:
      network_id: "c5bcedd1-4a7e-4c4b-91ea-7600d21d03b2"
      fixed_ips:
        - subnet_id: "bfb0153d-d569-4f84-a29b-3783709e107a"
      security_groups: [binetou_securityGroup]

  lb_vip_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: bd3ab98f-f2fd-464a-ba95-39af51054769
      port_id: { get_resource: lb_vip_port }

  lb_pool_vip:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: lb_vip_floating_ip }
      port_id: { 'Fn::Select': ['port_id', {get_attr: [pool, vip]}] }

  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 3
      max_retries: 5
      timeout: 5

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{get_resource: monitor}]
      subnet_id: "bfb0153d-d569-4f84-a29b-3783709e107a"
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80

  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_resource: pool}
      address: {get_attr: [lb_vip_port, fixed_ips, 0, ip_address]}
      protocol_port: 80

  loadbalancer:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: {get_resource: pool}

outputs:
  scale_up_url:
    description: >
      This URL is the webhook to scale up the autoscaling group. You
      can invoke the scale-up operation by doing an HTTP POST to this
      URL; no body nor extra headers are needed.
    value: {get_attr: [dbserver_scaleup_policy, alarm_url]}

  scale_dn_url:
    description: >
      This URL is the webhook to scale down the autoscaling group.
      You can invoke the scale-down operation by doing an HTTP POST to
      this URL; no body nor extra headers are needed.
    value: {get_attr: [ dbserver_scaledown_policy, alarm_url]}

  pool_ip_address:
    value: {get_attr: [pool, vip, address]}
    description: The IP address of the load balancing pool

  WebsiteURL:
    description: URL for dbserver
    value:
      str_replace:
        template: http://host/dbserver
        params:
          host: { get_attr: [lb_vip_floating_ip, floating_ip_address] }

Revision history for this message
Miguel Grinberg (miguelgrinberg) wrote :

Where in your template are you inserting the outputs_list expression?

Also note that if you were hoping outputs_list would update its values when the autoscaling group grows or shrinks, that is currently broken. I have filed a bug on that (https://bugs.launchpad.net/heat/+bug/1437524) and am currently working on a solution.

Revision history for this message
binou (bintou-16) wrote :

I have added this line in the outputs:

outputs_list:
        value: { get_attr: [dbserver_group, outputs_list, first_address] }

So when I use these in the user_data params:

               $address: {get_attr: [lb_vip_port, fixed_ips, 0, ip_address]}
               $adfloat: { get_attr: [lb_vip_floating_ip, floating_ip]}

$address is not the IP address of the VM but another IP address (often two addresses away).

For example, when I run ifconfig I get 10.100.0.77, but $address is 10.100.0.79.
$adfloat is correct for the moment.

How can I get the right private address?

Revision history for this message
Pavlo Shchelokovskyy (pshchelo) wrote :

I'm kind of puzzled here - it seems you want to pass the IP addresses of the VMs into the user data "before" the VMs are created and their fixed IPs are actually known. That surely won't work.

Revision history for this message
binou (bintou-16) wrote :

I first created a template with Cassandra, without autoscaling, that uses the IP addresses (private and floating) of the server, and that worked.
I just want to do the same with Heat/Ceilometer autoscaling (there is a load balancer). Please help.

Here is my first template.

heat_template_version: 2013-05-23

description: Basic (1 server)

resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "server2"
      key_name: binetou
      image: cassandra
      flavor: m1.medium
      networks:
        - port: { get_resource: server1_port }
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            sh /home/osadmin/scriptconf.sh $address $adfloat
          params:
            $address: {get_attr: [server1_port, fixed_ips, 0, ip_address]}
            $adfloat: {get_attr: [server1_floating_ip, floating_ip_address]}

  server1_port:
    type: OS::Neutron::Port
    properties:
      network_id: "c5bcedd1-4a7e-4c4b-91ea-7600d21d03b2"
      fixed_ips:
        - subnet_id: "bfb0153d-d569-4f84-a29b-3783709e107a"
      security_groups: [binetou_securityGroup]

  server1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: bd3ab98f-f2fd-464a-ba95-39af51054769
      port_id: { get_resource: server1_port }

outputs:
  server1_private_ip:
    description: IP address of server1 in private network
    value: { get_attr: [ server1, first_address ] }

  server1_public_ip:
    description: Floating IP address of server1 in public network
    value: { get_attr: [ server1_floating_ip, floating_ip_address ] }

Revision history for this message
Pavlo Shchelokovskyy (pshchelo) wrote :

You can try putting the server + port + floating IP into a nested stack and scaling those. The templates for our Heat integration tests, which use an autoscaled web app behind a load balancer, might help as inspiration:

https://review.openstack.org/#/c/165944/
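Pavlo's suggestion can be sketched as a nested template - roughly what the test.yaml referenced by the group's `resource: type:` could contain. This is an illustrative sketch, not from the bug report; it reuses the network IDs from the templates above, and it groups server, port, and floating IP so each scaled unit gets its own addresses:

```yaml
# Hypothetical contents of test.yaml. Each member the autoscaling group
# creates is a copy of this whole stack, so each server gets its own
# port and floating IP instead of all members sharing lb_vip_port.
heat_template_version: 2013-05-23

parameters:
  key_name:
    type: string
  image:
    type: string
  flavor:
    type: string

resources:
  port:
    type: OS::Neutron::Port
    properties:
      network_id: "c5bcedd1-4a7e-4c4b-91ea-7600d21d03b2"
      fixed_ips:
        - subnet_id: "bfb0153d-d569-4f84-a29b-3783709e107a"

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: bd3ab98f-f2fd-464a-ba95-39af51054769
      port_id: { get_resource: port }

  server:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - port: { get_resource: port }
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            sh /home/osadmin/scriptconf.sh $address $adfloat
          params:
            # These refer to this member's own port/FIP, created above,
            # so they are known by the time the server boots.
            $address: { get_attr: [port, fixed_ips, 0, ip_address] }
            $adfloat: { get_attr: [floating_ip, floating_ip_address] }

outputs:
  private_ip:
    value: { get_attr: [port, fixed_ips, 0, ip_address] }
  public_ip:
    value: { get_attr: [floating_ip, floating_ip_address] }
```

Because the addresses are resolved inside the nested stack, each member's user_data sees its own IPs, which avoids the "two addresses away" mismatch described above.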

Revision history for this message
binou (bintou-16) wrote :

Do you mean reusing the first template? How can I add a load balancer in this case?
Can you be more explicit? I am a new user of OpenStack.

Revision history for this message
binou (bintou-16) wrote :

I tried this template and it seems to work even though there is no load balancer.
So now I would like to test the autoscaling: how can I increase the CPU load of the VM on purpose?

Here is my new template.
heat_template_version: 2013-05-23

description: Basic (1 server)

resources:
  dbserver_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          key_name: binetou
          image: cassandra2
          flavor: m1.medium
          networks:
            - port: { get_resource: server1_port }
          metadata: {"metering.stack": {get_param: "OS::stack_id"}}
          user_data:
            str_replace:
              template: |
                #!/bin/bash
                #var=$(expr $address + 2)
                sh /home/osadmin/scriptconf.sh $address $adfloat
              params:
                $address: {get_attr: [server1_port, fixed_ips, 0, ip_address]}
                $adfloat: {get_attr: [server1_floating_ip, floating_ip_address]}

  server1_port:
    type: OS::Neutron::Port
    properties:
      network_id: "c5bcedd1-4a7e-4c4b-91ea-7600d21d03b2"
      fixed_ips:
        - subnet_id: "bfb0153d-d569-4f84-a29b-3783709e107a"
      security_groups: [binetou_securityGroup]

  server1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: bd3ab98f-f2fd-464a-ba95-39af51054769
      port_id: { get_resource: server1_port }

  dbserver_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: dbserver_group}
      cooldown: 60
      scaling_adjustment: 1

  dbserver_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: dbserver_group}
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale up if the average CPU > 8% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 8
      alarm_actions:
        - {get_attr: [dbserver_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: gt

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale down if the average CPU < 2% for 10 minutes
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      threshold: 2
      alarm_actions:
        - {get_attr: [dbserver_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: lt

outputs:
  server1_private_ip:
    description: IP address of server1 in private network
    value: { get_attr: [ server1, first_address ] }

  server1_public_ip:
    description: Floating IP address of server1 in public network
    value: { get_attr: [ server1_floating_ip, floating_ip_address ] }

Revision history for this message
Marcin Zbik (zbikmarc+launchpad) wrote :

@binou
Try running "dd if=/dev/zero of=/dev/null" in the VM to generate CPU load. Or, if you don't need to exercise the alarm itself, just send a POST to the URL from the scaling policy and it will scale up or down. You can retrieve this URL from the Ceilometer alarm.
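Marcin's load-generation suggestion can be bounded with `timeout` so the burn stops by itself; the duration variable and the webhook URL below are illustrative, not from this report:

```shell
# Burn CPU for LOAD_SECONDS (default 5 here; raise it to cover several
# 60-second alarm periods so cpu_util stays above the threshold).
# timeout exits with status 124 when it had to stop dd, i.e. the burn
# ran for the full duration; `|| true` keeps that from aborting scripts.
timeout "${LOAD_SECONDS:-5}" dd if=/dev/zero of=/dev/null || true

# To fire the policy directly instead, POST to the pre-signed webhook
# URL from the scale_up_url stack output (placeholder shown):
#   curl -X POST "$ALARM_URL"
```

Run this inside one of the group's VMs; after the alarm's evaluation period the scale-up policy should fire on its own.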

Revision history for this message
Rabi Mishra (rabi) wrote :

@binou

Path-based attributes are only supported for template version 2014-10-16 or higher; they do not work with '2013-05-23', which is what your template uses.

There is an example of how to use them here:

https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml#L106
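Following Rabi's pointer, a sketch of what the outputs could look like once the template declares a newer version (the output name is illustrative; it assumes the group's members expose a `first_address` attribute, as `OS::Nova::Server` does):

```yaml
heat_template_version: 2014-10-16  # path-based attributes need this version or newer

# ... dbserver_group and the other resources as above ...

outputs:
  server_private_ips:
    description: Fixed IP address of each server in the autoscaling group
    value: {get_attr: [dbserver_group, outputs_list, first_address]}
```

`outputs_list` collects the named attribute from every member of the group, so the output is a list that grows and shrinks with the group.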

Changed in heat:
status: New → Incomplete
Revision history for this message
binou (bintou-16) wrote :

OK, thanks.

I just have a question about autoscaling in general.
What I am doing is mostly about unpredicted demand (so, for example, a VM will be created immediately when needed).

What about predicted demand after capacity planning? For example: between 6 am and 7 pm I need a maximum of 100 VMs,
and between 7 pm and 2 am I want to use only 20 VMs, like a curve.

Is this functionality implemented in OpenStack?

Revision history for this message
Pavlo Shchelokovskyy (pshchelo) wrote :

binou, the most OpenStack-y approach would be to use Mistral. A more direct approach is to have a machine (e.g. a VM) that can call Heat and update the stack as appropriate from a cron job.
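The cron-job approach could be sketched as crontab entries on a management VM; the stack name `mystack`, the `capacity` parameter, and the times are hypothetical, and it assumes python-heatclient is installed and credentials are available:

```
# crontab entries (illustrative) -- assumes the template exposes a
# `capacity` parameter that controls the group size.
# 06:00 -- raise capacity for the daytime window:
0 6 * * *  heat stack-update mystack --existing -P capacity=100
# 19:00 -- shrink back for the night window:
0 19 * * * heat stack-update mystack --existing -P capacity=20
```

`--existing` tells heatclient to reuse the stack's current template, so only the parameter changes.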

Revision history for this message
binou (bintou-16) wrote :

OK. Indeed, I have to test elasticity in different ways. I am an intern, and I have been asked to study elasticity through these metrics (with capacity planning, but I will do that after learning how to autoscale with the other meters):

1. CPU (what I did)
2. Run queue
3. RAM
4. Disk I/O
5. Network volume
6. Application shell call by Ceilometer
7. Application indicator sent to the syslog

If you can help (how to proceed) with any of them, it would be great.

Revision history for this message
binou (bintou-16) wrote :

Hi,
Please help me.
I am testing autoscaling with Heat and Ceilometer. For the Ceilometer alarm, I would like to use the output of a shell script (like a function which returns a number; the scale-up/scale-down would depend on this number, via Ceilometer).

So how can I manage the communication between my shell script and Ceilometer?

Revision history for this message
Steve Baker (steve-stevebaker) wrote :

If you're running the heat-api-cloudwatch API (which is deprecated) then you can use cfn-push-stats to push out custom metrics which will make it to ceilometer (via heat). Otherwise you need to use ceilometer client to send your custom metrics.
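Steve's second option, sending custom metrics with the ceilometer client, might look like the following; the meter name, unit, value, and resource ID are illustrative, and the usual OS_* auth environment variables must be set:

```shell
# Push one gauge sample for a hypothetical custom meter. Your script
# would compute --sample-volume and run this periodically (e.g. cron).
ceilometer sample-create \
  --resource-id <instance-uuid> \
  --meter-name app.request_rate \
  --meter-type gauge \
  --meter-unit req/s \
  --sample-volume 42
```

A Ceilometer alarm can then reference `app.request_rate` as its `meter_name`, just like `cpu_util` in the templates above.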

Since this is a support request rather than a real bug, I'm going to mark it as invalid. ask.openstack.org would be a more appropriate forum for these questions.

Changed in heat:
status: Incomplete → Invalid
Revision history for this message
binou (bintou-16) wrote :

OK. I am using Ceilometer to create samples for custom metrics, but what I don't know is how, after creating my samples, the communication is managed between my script and Ceilometer. In other words, I have to call the script, or the output of my script, somewhere while creating my new measure. Must I change the Ceilometer configuration to include my script file?

Revision history for this message
binou (bintou-16) wrote :

So how can I send the custom metric to Ceilometer?

Revision history for this message
binou (bintou-16) wrote :

Can you please help me with cfn-push-stats for pushing custom metrics out to Ceilometer (via Heat)? I have done a lot of research but have not found anything useful.
I am a new user of OpenStack, so can you please tell me how to run cfn-push-stats?

Revision history for this message
Pavlo Shchelokovskyy (pshchelo) wrote :

As an example, you can check our reference implementation of the template for the AWS LoadBalancer resource:

https://github.com/openstack/heat/blob/master/heat/engine/resources/aws/lb/loadbalancer.py#L161
