there is a chance the http relation is configuring services twice as "target_service"

Bug #1878954 reported by Peter Jose De Sousa
Affects: Kubernetes API Load Balancer
Status: Triaged
Importance: Medium
Assigned to: Unassigned

Bug Description

Hello,

Problem:

When relating the kubernetes-worker and kubernetes-master applications to the load balancer, the services are being configured twice as "target_service".

Detail:

As a result, nginx will fail to start and hacluster will show a failed resource, with the nginx config looking like this:

# /etc/nginx/sites-enabled/apilb
upstream target_service {
  server 172.25.94.125:6443;
  server 172.25.94.126:6443;

}
upstream target_service {
  server 172.25.94.125:9103;
  server 172.25.94.126:9103;
  server 172.25.94.127:9103;
  server 172.25.94.128:9103;
  server 172.25.94.129:9103;
  server 172.25.94.130:9103;
  server 172.25.94.131:9103;
  server 172.25.94.132:9103;
  server 172.25.94.133:9103;
  server 172.25.94.134:9103;

}
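The charm code isn't shown here, but the duplicate blocks suggest that each relation registered its servers under the same default name. A minimal Python sketch of that mechanism (the `render_upstreams` and `dedupe_by_name` helpers are hypothetical, not the charm's actual code):

```python
# Hypothetical sketch: if every relation falls back to the default service
# name "target_service", rendering one upstream block per entry produces
# two "upstream target_service" blocks, which nginx rejects as a duplicate.

def render_upstreams(services):
    """Render one nginx upstream block per service entry."""
    blocks = []
    for svc in services:
        servers = "\n".join(f"  server {host}:{port};" for host, port in svc["hosts"])
        blocks.append(f"upstream {svc['service_name']} {{\n{servers}\n}}")
    return "\n".join(blocks)

# Two relations both using the default name, as in the config above:
services = [
    {"service_name": "target_service", "hosts": [("172.25.94.125", 6443)]},
    {"service_name": "target_service", "hosts": [("172.25.94.125", 9103)]},
]

config = render_upstreams(services)
assert config.count("upstream target_service {") == 2  # duplicate block

# De-duplicating by service name before rendering avoids the collision:
def dedupe_by_name(services):
    seen = {}
    for svc in services:
        seen.setdefault(svc["service_name"], svc)
    return list(seen.values())

fixed = render_upstreams(dedupe_by_name(services))
assert fixed.count("upstream target_service {") == 1
```

This only illustrates why nginx fails to start: `upstream` names must be unique within a config, so the second `target_service` block is a hard error.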

Workaround:

Remove relations:

- kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer
- kubeapi-load-balancer:apiserver kubernetes-master:kube-api-endpoint
- kubernetes-worker kubeapi-load-balancer

Wait for all three relations to be removed, then re-add the relations.

The file should come back without the duplicate entry.
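The removal and re-add steps above can be sketched as juju commands (an illustrative sketch; endpoint names are taken from the list above, and exact names may vary with the deployed bundle):

```shell
# Remove the three relations:
juju remove-relation kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer
juju remove-relation kubeapi-load-balancer:apiserver kubernetes-master:kube-api-endpoint
juju remove-relation kubernetes-worker kubeapi-load-balancer

# Wait until `juju status --relations` no longer lists them, then re-add:
juju add-relation kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer
juju add-relation kubeapi-load-balancer:apiserver kubernetes-master:kube-api-endpoint
juju add-relation kubernetes-worker kubeapi-load-balancer
```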

Cheers,

Peter

Revision history for this message
George Kraft (cynerva) wrote :

We haven't encountered this issue in our testing, so we will need more information to reproduce it.

Can you provide the bundle you encountered this with, along with any cluster operations you ran after deploying it?

Changed in charm-kubeapi-load-balancer:
status: New → Incomplete
Revision history for this message
Peter Jose De Sousa (pjds) wrote :

Hey George, bundle attached.

This happened immediately after a deploy, so no cluster actions were applied, if I understand you correctly.

Although I realised today that hacluster-mysql might have been interfering with hacluster-k8s, which might have caused the issue above?

Cheers
Peter

Revision history for this message
George Kraft (cynerva) wrote :

Thanks. Yeah I suspect the hacluster applications have something to do with it.

Changed in charm-kubeapi-load-balancer:
importance: Undecided → High
status: Incomplete → Triaged
George Kraft (cynerva)
Changed in charm-kubeapi-load-balancer:
importance: High → Medium