create lb when service is created

Bug #1529760 reported by XuXinkun

Bug Description

The current way of creating a load balancer for a k8s service has some bad effects. For example, you have to edit the template and insert the password into the master.
My idea is to change the template when the coe-service API is called. The template would contain not only the normal resources, but also a new LB pool, a floating IP for the LB, and pool members. The new template is then passed to Heat to update the stack.
What do you think of this approach? I have coded it and it works.

Revision history for this message
hongbin (hongbin034) wrote :

Hi Xuxinkun,

Could you elaborate on how your approach would work? We need more details before deciding whether it is preferable.

Changed in magnum:
status: New → Incomplete
Revision history for this message
XuXinkun (xuxinkun) wrote :

OK. In my opinion, service creation would follow these steps:
1. Get the bay stack and check whether it is in the UPDATE_COMPLETE or CREATE_COMPLETE state. If not, return a state error.
2. Create the namespaced service with the k8s API.
3. Get the response of the service creation and check the ports for a "node_port". If there is none, just return the service.
4. Get the bay stack template and then modify it. Here is part of my template:
 service_pool_template = """
  %(service_pool)s:
    type: OS::Neutron::Pool
    properties:
      protocol: {get_param: loadbalancing_protocol}
      subnet: {get_resource: fixed_subnet}
      lb_method: ROUND_ROBIN
      protocol_port: %(port)s

  %(service_pool_floating)s:
    type: OS::Neutron::FloatingIP
    depends_on:
      - extrouter_inside
    properties:
      floating_network: {get_param: external_network}
      port_id: {get_attr: [%(service_pool)s, vip, port_id]}
 """

 service_pool_member_template = """
  %(service_pool_member)s:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_resource: %(service_pool)s}
      address: {get_attr: [kube_minions, kube_minion_ip, %(node_count)s]}
      protocol_port: %(node_port)s
 """

 service_output_template = """
  %(service_name)s_ip_address:
    description: >
      This is the Service endpoint of %(service_pool)s.
    value: {get_attr: [%(service_pool_floating)s, floating_ip_address]}
 """

 service_pool_template is used to create the pool and the floating IP for the service.
 service_pool_member_template is used to create the pool members. Every node becomes a pool member.
 service_output_template is used to output the floating IP as the service IP.
5. Update the bay stack with the new template.
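The steps above could be sketched roughly as follows. This is a minimal illustration of step 4 only: the resource names, the function name, and the dict-based template representation are my own assumptions, not existing Magnum code. The real implementation would render the string templates shown above and merge them into the bay's Heat template before calling stack-update.

```python
# Sketch of step 4: merge LB resources for one k8s service into the bay's
# Heat template (represented here as a dict, as if parsed from YAML).
# All identifiers below are illustrative, not actual Magnum names.

def add_service_resources(template, service_name, port, node_port, node_count):
    pool = "%s_pool" % service_name
    fip = "%s_pool_floating" % service_name

    # LB pool for the service (service_pool_template).
    template["resources"][pool] = {
        "type": "OS::Neutron::Pool",
        "properties": {
            "protocol": {"get_param": "loadbalancing_protocol"},
            "subnet": {"get_resource": "fixed_subnet"},
            "lb_method": "ROUND_ROBIN",
            "protocol_port": port,
        },
    }
    # Floating IP attached to the pool's VIP.
    template["resources"][fip] = {
        "type": "OS::Neutron::FloatingIP",
        "depends_on": ["extrouter_inside"],
        "properties": {
            "floating_network": {"get_param": "external_network"},
            "port_id": {"get_attr": [pool, "vip", "port_id"]},
        },
    }
    # One pool member per minion node (service_pool_member_template).
    for i in range(node_count):
        template["resources"]["%s_member_%d" % (pool, i)] = {
            "type": "OS::Neutron::PoolMember",
            "properties": {
                "pool_id": {"get_resource": pool},
                "address": {"get_attr": ["kube_minions", "kube_minion_ip", i]},
                "protocol_port": node_port,
            },
        }
    # Expose the floating IP as the service IP (service_output_template).
    template.setdefault("outputs", {})["%s_ip_address" % service_name] = {
        "description": "This is the Service endpoint of %s." % pool,
        "value": {"get_attr": [fip, "floating_ip_address"]},
    }
    return template
```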

Revision history for this message
XuXinkun (xuxinkun) wrote :

By the way, service update/delete would follow a similar flow to create.

Changed in magnum:
status: Incomplete → New
Revision history for this message
hongbin (hongbin034) wrote :


It seems what you are suggesting is to implement our own external load balancer solution, replacing the k8s equivalent. Personally, I don't think that is a good idea, mainly because the proposed solution looks more complicated. I have a feeling that a template that changes at runtime will be error-prone. I think a statically declared template will be easier to maintain. Do you agree?

Revision history for this message
XuXinkun (xuxinkun) wrote :

I am afraid I cannot agree.
The solution as currently described has several bad effects. For example, when you want to delete a bay, the load balancer will not be deleted until you delete it manually. You also have to inject the password and some OpenStack info into the master.
Changing the template at runtime does not mean many errors will happen. We can upgrade the Heat engine and use some methods to ensure the stack update succeeds. Updating a stack is a good Heat feature; I think we can make good use of it.
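A hedged sketch of what "ensure the stack update succeeds" might look like. The `client` argument is assumed to expose heatclient-style `stacks.update`/`stacks.get` calls; the retry policy and status strings are illustrative, not existing Magnum or Heat client code.

```python
import time

# Illustrative retry wrapper around a Heat stack-update: submit the new
# template, poll the stack status, and retry a bounded number of times
# if the update fails. Names and policy are assumptions, not Magnum code.

def update_stack_safely(client, stack_id, template, retries=3, poll=2.0):
    for _ in range(retries):
        client.stacks.update(stack_id, template=template)
        while True:
            status = client.stacks.get(stack_id).stack_status
            if status == "UPDATE_COMPLETE":
                return True
            if status == "UPDATE_FAILED":
                break  # fall through and retry the update
            time.sleep(poll)  # still UPDATE_IN_PROGRESS
    return False
```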

Revision history for this message
hongbin (hongbin034) wrote :


We have a plan to address the password issue. As for the load balancer cleanup, I think it is a valid concern. @Ton, what is your opinion on that?

Revision history for this message
Egor Guz (eghobo) wrote :

@XuXinkun: could you elaborate on how this proposal would work with the native client/API (e.g. kubectl)? Also keep in mind that the Kub cloud provider already has the same logic. I saw a proposal for 1.2 about implementing different types of service LBs instead of NAT (similar, but it will be a Kub plugin, not Magnum).

Revision history for this message
Ton Ngo (ton-i) wrote :

Thank you for looking into the K8S load balancer. Your suggestion was essentially our initial thought when we implemented the support for load balancer. However, we did not choose this approach because it gets complicated very quickly. The services in K8S are essentially proxies and secondary load balancers for pods, and the set of pods can change at any time. This means the members in the pool for the external load balancer need to be updated accordingly to respond to the change, and the K8S internal modules are best suited for doing this since they manage the services themselves. If we were to implement this in Magnum, we would have to somehow capture all the changes happening in the services, and that is very difficult, especially if the user is using the K8S native interface. K8S provides the plugin for OpenStack to do precisely this.

Your concern about managing the password is valid and the current manual approach is a temporary solution to avoid weakening the security in Magnum. The full solution that we are working on is to create an internal service account (trust) with the same privilege as the user; this account and password would be written to the configuration in the master nodes so that K8S can use it to interface with OpenStack. Once this feature is implemented, the user would no longer need to manually enter the password.

Your concern about having to clean up the load balancer is also valid. The normal procedure is to clean up all services in the K8S cluster before deleting the cluster, and this would clean up the Neutron load balancers. But if this step is not taken, then the load balancers would need to be cleaned up manually. A good improvement would be to check for load balancers created for K8S services and clean them up if necessary. If you are interested in this, you are very welcome to write a blueprint and contribute.
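The cleanup improvement suggested above could look roughly like this. The "<bay_id>-service-" naming convention and the neutron-client-style `list_pools`/`delete_pool` methods are assumptions for illustration; the real implementation would need whatever convention Magnum actually uses to tag service LBs.

```python
# Hypothetical cleanup helper: before a bay is deleted, find and remove
# the Neutron LB pools that were created for its k8s services. The naming
# convention and client methods below are illustrative assumptions.

def cleanup_service_lbs(neutron, bay_id):
    prefix = "%s-service-" % bay_id
    removed = []
    for pool in neutron.list_pools()["pools"]:
        if pool["name"].startswith(prefix):
            neutron.delete_pool(pool["id"])
            removed.append(pool["name"])
    return removed
```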

Revision history for this message
XuXinkun (xuxinkun) wrote :

@Ton Ngo
How about this idea? Boot a new service called magnum-watcher.
magnum-watcher will make use of the "watch" feature of the kubernetes service API. When any change to a kubernetes service happens, magnum-watcher will catch it. It then starts a task to list the services and create/delete/update load balancers for them, after which it resumes watching the k8s services and waits for the next change.
magnum-watcher is also defined in the template and created by the stack. When the stack is deleted, magnum-watcher will clean up all the LBs and other related resources.
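The watch loop described above could be sketched like this. The endpoint path and event shape follow the Kubernetes v1 watch API (one JSON event per line on a streaming response); the `dispatch` helper and the action names are my own illustrative assumptions, not an existing magnum-watcher.

```python
import json

# Sketch of the proposed magnum-watcher: map each k8s watch event to the
# LB action the watcher should take. Action names are illustrative.

def dispatch(event):
    """Return (action, service_object) for a single watch event."""
    kind = event.get("type")
    service = event.get("object", {})
    if kind == "ADDED":
        return ("create_lb", service)
    if kind == "MODIFIED":
        return ("update_lb", service)
    if kind == "DELETED":
        return ("delete_lb", service)
    return ("ignore", service)

def watch_services(session, api_server):
    # Long-poll the watch endpoint; each non-empty line is one JSON event.
    # `session` is assumed to be a requests-style HTTP session.
    resp = session.get(api_server + "/api/v1/watch/services", stream=True)
    for line in resp.iter_lines():
        if line:
            yield dispatch(json.loads(line))
```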

Changed in magnum:
importance: Undecided → Low