juju bootstrap fails behind proxy with v3.1 or later

Bug #2038416 reported by Yoshi Kadokawa
Affects: Canonical Juju
Status: Won't Fix
Importance: Undecided
Assigned to: Unassigned

Bug Description

When bootstrapping behind a proxy with the following configuration and command, bootstrap fails with the error message shown at the end of the output.

$ cat model_defaults.yaml
apt-http-proxy: http://192.168.1.10:8000
apt-https-proxy: http://192.168.1.10:8000
juju-http-proxy: http://192.168.1.10:8000
juju-https-proxy: http://192.168.1.10:8000
juju-no-proxy: 10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,127.0.0.1,localhost
snap-http-proxy: http://192.168.1.10:8000
snap-https-proxy: http://192.168.1.10:8000

$ juju bootstrap --model-default ./model_defaults.yaml maas_cloud maas-controller
Creating Juju controller "maas-controller" on maas_cloud
Looking for packaged Juju agent version 3.1.5 for amd64
Located Juju agent version 3.1.5-ubuntu-amd64 at https://streams.canonical.com/juju/tools/agent/3.1.5/juju-3.1.5-linux-amd64.tgz
Launching controller instance(s) on maas_cloud...
 - pecm6n (arch=amd64 mem=24G cores=8)
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 192.168.1.73:22
Connected to 192.168.1.73
Running machine configuration script...
Cloud-init v. 23.1.1-0ubuntu0~22.04.1 running 'init-local' at Wed, 04 Oct 2023 02:44:26 +0000. Up 6.48 seconds.
--- SKIP OUTPUTS ---
 controller-name:maas-controller controller-uuid:daca7aea-e4cd-4a9f-8a8b-1bf83a631066 juju-db-snap-channel:4.4/stable max-agent-state-size:524288 max-charm-state-size:2097152 max-debug-log-duration:24h0m0s max-prune-txn-batch-size:1000000 max-prune-txn-passes:100 max-txn-log-size:10M metering-url:https://api.jujucharms.com/omnibus/v3 migration-agent-wait-time:15m0s model-logfile-max-backups:2 model-logfile-max-size:10M model-logs-size:20M mongo-memory-profile:default non-synced-writes-to-raft-log:false prune-txn-query-count:1000 prune-txn-sleep-time:10ms set-numa-control-policy:false state-port:37017] ControllerCharmPath: ControllerCharmChannel:3.1/stable ControllerInheritedConfig:map[apt-http-proxy:http://192.168.1.10:8000 apt-https-proxy:http://192.168.1.10:8000 juju-http-proxy:http://192.168.1.10:8000 juju-https-proxy:http://192.168.1.10:8000 juju-no-proxy:10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,127.0.0.1,localhost snap-http-proxy:http://192.168.1.10:8000 snap-https-proxy:http://192.168.1.10:8000] RegionInheritedConfig:map[] InitialModelConfig:map[] BootstrapMachineInstanceId:pecm6n BootstrapMachineDisplayName:juju-1 BootstrapMachineConstraints:arch=amd64 mem=3584M tags=juju BootstrapMachineHardwareCharacteristics:arch=amd64 cores=8 mem=24576M tags=virtual,pod-console-logging,juju availability-zone=az1 ModelConstraints: CustomImageMetadata:[] StoragePools:map[]} BootstrapMachineAddresses:[local-cloud:192.168.1.73@oam-space(id:1)] BootstrapMachineJobs:[JobManageModel JobHostUnits] SharedSecret:XXX Provider:0x1b0f7c0 StorageProviderRegistry:[0xc000158140 {Providers:map[loop:0xc000132c90 rootfs:0xc000132c98 tmpfs:0xc000132ca0]}]}
2023-10-04 02:57:30 INFO juju.state addmachine.go:506 new machine "0" has preferred addresses: private "local-cloud:192.168.1.73@space:1", public "local-cloud:192.168.1.73@space:1"
2023-10-04 02:58:05 WARNING juju.state.pool.txnwatcher txnwatcher.go:338 txn watcher sync error: tomb: dying
2023-10-04 02:58:05 WARNING juju.state.pool.txnwatcher txnwatcher.go:346 txn watcher resume queued
ERROR cannot deploy controller application: deploying charmhub controller charm: resolving "ch:juju-controller": resolving with preferred channel: Post "https://api.charmhub.io/v2/charms/refresh": dial tcp 185.125.188.54:443: i/o timeout
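
One way to confirm the failure mode (a hypothetical check, assuming curl is available on the bootstrap machine; the /v2/charms/refresh endpoint actually expects a POST, so any HTTP response here merely proves connectivity):

# Direct access from the bootstrap machine should time out,
# matching the "dial tcp ... i/o timeout" in the error above:
$ curl --max-time 10 https://api.charmhub.io/v2/charms/refresh

# The same endpoint via the proxy should get an HTTP response:
$ curl --max-time 10 -x http://192.168.1.10:8000 https://api.charmhub.io/v2/charms/refresh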

This also happens with Juju v3.2; however, it works with v2.9.
It also works with v3.1 or later when using the legacy http-proxy configuration:
$ cat model_defaults.yaml
http-proxy: http://192.168.1.10:8000
https-proxy: http://192.168.1.10:8000
juju-no-proxy: 127.0.0.1,localhost, <list of IPs>

description: updated
Revision history for this message
Yoshi Kadokawa (yoshikadokawa) wrote :

JFYI, on the host where juju bootstrap was run, the proxy is configured via /etc/environment, so the juju client itself should have access to the internet.

$ env | grep -i proxy
no_proxy=localhost,127.0.0.1,<IP_ADDRESSES>
https_proxy=http://192.168.1.10:8000/
HTTPS_PROXY=http://192.168.1.10:8000/
HTTP_PROXY=http://192.168.1.10:8000/
http_proxy=http://192.168.1.10:8000/

I have attached the full juju bootstrap output with '<root>=TRACE'

As this is hitting a customer environment (though we have a workaround with the legacy http-proxy configuration), I'm subscribing this to field-high.

Revision history for this message
Joseph Phillips (manadart) wrote :

The "juju-" prefixed proxy declarations are set in the environment for hook contexts, so that the charm may use them at its discretion.

The ones you refer to as legacy are the ones that are set in the machine's /etc/profile.d and apply to all requests.

The Charmhub client is not running in a hook, and so requires the latter to be set in order to proxy correctly.
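
Roughly, the distinction looks like this (paths and variable names to the best of my knowledge; a sketch, not authoritative):

# Inside a hook context, the juju-* values are exposed to the charm as
# JUJU_CHARM_HTTP_PROXY / JUJU_CHARM_HTTPS_PROXY / JUJU_CHARM_NO_PROXY.
# The legacy values are written machine-wide, e.g.:
$ cat /etc/profile.d/juju-proxy.sh
export http_proxy=http://192.168.1.10:8000
export https_proxy=http://192.168.1.10:8000
export no_proxy=127.0.0.1,localhost
# (uppercase variants may also be set)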

If you don't want to set it for all models, I think you can simply pass it as config to bootstrap rather than as model-defaults, as sketched below.
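
For example (a sketch of that suggestion, using the values from this report):

# --config applies the legacy proxy settings to the controller model only,
# whereas --model-default would apply them to every model on the controller:
$ juju bootstrap maas_cloud maas-controller \
    --config http-proxy=http://192.168.1.10:8000 \
    --config https-proxy=http://192.168.1.10:8000 \
    --config no-proxy=127.0.0.1,localhost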

Changed in juju:
status: New → Won't Fix
Revision history for this message
Nobuto Murata (nobuto) wrote :

Hmm, one of the reasons the juju-*-proxy configs were born was to overcome no_proxy's buffer limitation. juju-no-proxy has more flexibility than no-proxy since it can accept CIDRs (e.g. 192.168.0.0/16) instead of 192.168.0.1,192.168.0.2,192.168.0.3,... which would run out of the buffer.
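
To illustrate the difference (contrived values, not from this report):

# no-proxy has to enumerate individual addresses, which can exhaust the buffer:
no-proxy: 192.168.0.1,192.168.0.2,192.168.0.3,...

# juju-no-proxy covers the same hosts with a single CIDR entry:
juju-no-proxy: 192.168.0.0/16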

Fetching the controller charm during bootstrap is new in 3.0 or later, but going back to http-proxy, https-proxy, and no-proxy is a bit weird to me: we have three categories of proxy config (juju, apt, and snap), yet Juju itself doesn't use the juju category. Can't we use the juju category for communications from the juju binary instead of falling back to the global proxy config?

Revision history for this message
Nobuto Murata (nobuto) wrote :

Please reopen this. There are side effects such as https://bugs.launchpad.net/juju/+bug/2044481

Revision history for this message
Ian Booth (wallyworld) wrote :

Let's comment on the newer bug to save having multiple conversations.
