kubectl exec fails via kubeapi-load-balancer when StreamingProxyRedirects feature is disabled

Bug #1940527 reported by George Kraft
Affects                         Status        Importance  Assigned to     Milestone
Kubernetes API Load Balancer    Fix Released  High        Kevin W Monroe  1.24
Kubernetes Control Plane Charm  Fix Released  High        George Kraft    1.22

Bug Description

The StreamingProxyRedirects feature in Kubernetes has been deprecated.[1] It became disabled by default in Kubernetes 1.22 and will be removed entirely in Kubernetes 1.24.
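
While the gate still exists, it can be flipped back on explicitly as a stopgap. A minimal sketch, assuming the kubernetes-master charm's api-extra-args option passes flags straight through to kube-apiserver (and that no other extra args are already set, since this overwrites them):

# Re-enable the deprecated gate on kube-apiserver (gone entirely in 1.24):
juju config kubernetes-master api-extra-args="feature-gates=StreamingProxyRedirects=true"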

When running a `kubectl exec` command, if the connection goes through kubeapi-load-balancer, and StreamingProxyRedirects is disabled, then the command fails with:

error: unable to upgrade connection: unable to upgrade: missing upgrade headers in request: http.Header{"Connection":[]string{"close", "Upgrade"}, "Content-Length":[]string{"0"}, "Kubectl-Command":[]string{"kubectl exec"}, "Kubectl-Session":[]string{"3939a91f-322d-4cfc-b5e3-0476a7f92157"}, "Upgrade":[]string{"SPDY/3.1"}, "User-Agent":[]string{"kubectl/v1.22.0 (linux/amd64) kubernetes/c2b5237"}, "X-Forwarded-For":[]string{"10.246.154.107, 10.246.154.51, 10.246.154.107"}, "X-Forwarded-Proto":[]string{"https"}, "X-Real-Ip":[]string{"10.246.154.107"}, "X-Stream-Protocol-Version":[]string{"v4.channel.k8s.io"}}

If kubeapi-load-balancer is removed from the connection path, or StreamingProxyRedirects is enabled, then the command works fine.
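
A quick way to compare the two paths (pod name and unit address below are hypothetical, and --insecure-skip-tls-verify is only needed if the server certificate does not cover the unit address):

# Through kubeapi-load-balancer (the kubeconfig default in this deployment): fails
kubectl exec -it mypod -- /bin/sh
# Directly against a kube-apiserver unit, bypassing the load balancer: works
kubectl --server=https://10.246.154.52:6443 --insecure-skip-tls-verify exec -it mypod -- /bin/sh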

The logging around this in kube-apiserver and kubelet, even with v=100, is sparse. Somewhere between kube-apiserver and kubelet, the Connection header changes from "Upgrade" to "close Upgrade", and that is likely killing the connection. It is unclear where the "close" comes from.
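
One plausible source, worth verifying: NGINX documents that, by default, it proxies upstream with HTTP/1.0 and redefines exactly two request headers, Host and Connection (the latter to "close"). A sketch of those implicit defaults as config, not taken from this deployment:

# What NGINX effectively does when nothing is configured (per its docs):
proxy_http_version 1.0;
proxy_set_header Host $proxy_host;
proxy_set_header Connection close;

If kube-apiserver then re-adds "Upgrade" for the exec stream while copying the rest of the inbound headers through to kubelet, that would produce exactly the "close, Upgrade" pair seen above.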

It is also unclear how StreamingProxyRedirects relates to this. StreamingProxyRedirects makes kube-apiserver follow redirects from Kubelet, but Kubelet is not configured with --redirect-container-streaming so it should not be sending redirects anyway.

[1]: https://github.com/kubernetes/enhancements/issues/1558

George Kraft (cynerva)
Changed in charm-kubeapi-load-balancer:
importance: Undecided → High
Changed in charm-kubernetes-master:
importance: Undecided → High
Changed in charm-kubeapi-load-balancer:
status: New → Triaged
Changed in charm-kubernetes-master:
status: New → Triaged
Revision history for this message
Kevin W Monroe (kwmonroe) wrote :
Changed in charm-kubernetes-master:
milestone: none → 1.22
Changed in charm-kubeapi-load-balancer:
status: Triaged → Invalid
Changed in charm-kubernetes-master:
status: Triaged → Fix Committed
assignee: nobody → George Kraft (cynerva)
George Kraft (cynerva)
Changed in charm-kubeapi-load-balancer:
status: Invalid → Triaged
Revision history for this message
Cory Johns (johnsca) wrote :

The StreamingProxyRedirects flag appears to be directly related to SPDY / HTTP/2 [1]. NGINX doesn't support `proxy_http_version 2.0` [2], which apparently is required to support HTTP/2 past the edge [5]; it supports only 1.0 and 1.1 [3], and we seem to be relying on the default [4] of 1.0. It seems like this is interfering with setting up the complete chain of upgraded streaming connections [6]. I still don't entirely understand how, but George was able to track it down to this block of code [7] in Kubernetes: when StreamingProxyRedirects=true, h.InterceptRedirects gets set, which causes the first block to execute rather than the second.
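
For the HTTP proxy as it stands, the usual upgrade-aware incantation would be roughly the following (a sketch against the apilb.conf template; the upstream name and directive placement are assumed, not read from the template):

location / {
    proxy_pass https://target_service;             # upstream name assumed
    proxy_http_version 1.1;                        # the 1.0 default cannot carry an Upgrade handshake
    proxy_set_header Upgrade $http_upgrade;        # pass the client's Upgrade header through
    proxy_set_header Connection $http_connection;  # preserve "Upgrade" instead of the "close" default
}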

It seems like the best fix would probably be to change kubeapi-load-balancer to set up an NGINX TCP load balancer [8] instead. We are actually already requesting a TCP LB [9], but kubeapi-load-balancer ignores that and still gives us an HTTP LB.
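
For reference, the stream-based alternative from [8] would look roughly like this (a sketch; the upstream addresses are hypothetical):

stream {
    upstream kube_apiservers {
        server 10.246.154.51:6443;   # hypothetical kube-apiserver units
        server 10.246.154.52:6443;
    }
    server {
        listen 443;
        proxy_pass kube_apiservers;  # raw TCP: TLS, SPDY, and HTTP/2 pass through untouched
    }
}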

[1]: https://github.com/kubernetes/kubernetes/blob/96e4e953978416e164e001abd2c607ce357fdd46/staging/src/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go#L227
[2]: https://trac.nginx.org/nginx/ticket/923
[3]: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version
[4]: https://github.com/charmed-kubernetes/charm-kubeapi-load-balancer/blob/master/templates/apilb.conf#L37-L38
[5]: https://stackoverflow.com/questions/41637076/http2-with-node-js-behind-nginx-proxy
[6]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1558-streaming-proxy-redirects#proposal
[7]: https://github.com/kubernetes/kubernetes/blob/f7b23cf6d04a14557b130f89ced8f7d074077fe7/staging/src/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go#L291-L301
[8]: https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
[9]: https://github.com/charmed-kubernetes/charm-kubernetes-master/blob/master/reactive/kubernetes_master.py#L1393

Changed in charm-kubernetes-master:
status: Fix Committed → Fix Released
Revision history for this message
Kevin W Monroe (kwmonroe) wrote :

Workaround: explicitly set HTTP 1.1 on the k-l-b unit (I'm guessing the proxied requests default to HTTP/1.0, which doesn't honor the 'Upgrade' header):

juju run --unit kubeapi-load-balancer/0 -- sed -i -E "s/proxy_buffering\s+off;/proxy_buffering off;\nproxy_http_version 1.1;/g" /var/lib/juju/agents/unit-kubeapi-load-balancer-0/charm/templates/apilb.conf

Once the template is re-rendered (which happens every 5 minutes on update-status), kubectl exec should work again.
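
Once patched, the template (and hence the rendered config) should pair the two directives, which can be checked with:

juju run --unit kubeapi-load-balancer/0 -- grep -A1 proxy_buffering /var/lib/juju/agents/unit-kubeapi-load-balancer-0/charm/templates/apilb.conf
# expected:
#   proxy_buffering off;
#   proxy_http_version 1.1;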

Changed in charm-kubeapi-load-balancer:
milestone: none → 1.24
assignee: nobody → Kevin W Monroe (kwmonroe)
status: Triaged → In Progress
Revision history for this message
Kevin W Monroe (kwmonroe) wrote :
Changed in charm-kubeapi-load-balancer:
status: In Progress → Fix Committed
Changed in charm-kubeapi-load-balancer:
status: Fix Committed → Fix Released