Instances in ELB are OutOfService
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
AWS Integrator Charm | Invalid | Undecided | Unassigned |
Bug Description
Using the charm with a K8s LoadBalancer:
```
Name: veering-
Namespace: default
Labels: app=veering-
Annotations: <none>
Selector: app=veering-
Type: LoadBalancer
IP: 10.152.183.129
LoadBalancer Ingress: a227bf17f571c11
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32511/TCP
Endpoints:
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31432/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBal
Normal EnsuredLoadBalancer 23m service-controller Ensured load balancer
```
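One detail in the output above that lines up with the ELB behaviour: both ports show an empty Endpoints list, so there are no pods actually backing the service and the NodePort health checks have nothing to answer them. A quick way to confirm from the K8s side (the app label is truncated in the output above, so substitute your own):

```
# If ENDPOINTS is empty for the service, either no pods match the selector,
# or the matching pods don't expose container ports named "http"/"https".
kubectl get endpoints
kubectl get pods -l app=<your-app-label> -o wide
```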
All looks fine from the K8s end, but if I try to access the LoadBalancer Ingress URL I get "The connection was reset".
Looking in the ELB console, it seems the health checks fail on all 3 nodes (which are the K8s worker nodes), so they've all been put in "OutOfService" and aren't routed to.
Steps to repro? When I follow the instructions at https://www.ubuntu.com/kubernetes/docs/aws-integration things work fine for me. Is it possible that you forgot to specify a port for your veering-sasquatch-drupal deployment to listen on?
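For reference, a minimal sketch of what that suggestion amounts to: the deployment's container needs to declare (and actually listen on) a named port matching the service's targetPort, otherwise the NodePort has no backends and the ELB marks every node OutOfService. The manifest below is illustrative only; the image and port numbers are assumptions, not taken from the reporter's charm or deployment:

```
# Hypothetical deployment backing the service's selector and named "http" port.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: veering-sasquatch-drupal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: veering-sasquatch-drupal
  template:
    metadata:
      labels:
        app: veering-sasquatch-drupal   # must match the service's selector
    spec:
      containers:
      - name: drupal
        image: drupal:8                 # illustrative image, not from the report
        ports:
        - name: http                    # must match the service's targetPort "http"
          containerPort: 80
        # the service's "https" targetPort would similarly need a named port
        # backed by something actually listening on 443
EOF
```

Once the pods are up and listening, the service's Endpoints list should be populated and the ELB health checks should start passing.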