Comment 0 for bug 2038416

Yoshi Kadokawa (yoshikadokawa) wrote:

When bootstrapping behind a proxy with the following model defaults and command, the bootstrap fails with the error shown below.

$ cat model_defaults.yaml
apt-http-proxy: http://192.168.1.10:8000
apt-https-proxy: http://192.168.1.10:8000
juju-http-proxy: http://192.168.1.10:8000
juju-https-proxy: http://192.168.1.10:8000
juju-no-proxy: 10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,127.0.0.1,localhost
snap-http-proxy: http://192.168.1.10:8000
snap-https-proxy: http://192.168.1.10:8000
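
As a sanity check (these curl commands are illustrative and not part of the original report; they assume curl is available on the client), one can confirm that the proxy itself forwards HTTPS traffic to Charmhub:

$ curl -sI --max-time 10 -x http://192.168.1.10:8000 https://api.charmhub.io/v2/charms/refresh
# Any HTTP response here, even an error status, means the proxy can reach api.charmhub.io;
# only a timeout would point at the proxy rather than at Juju.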

$ juju bootstrap --model-default ./model_defaults.yaml maas_cloud maas-controller
Creating Juju controller "maas-controller" on maas_cloud
Looking for packaged Juju agent version 3.1.5 for amd64
Located Juju agent version 3.1.5-ubuntu-amd64 at https://streams.canonical.com/juju/tools/agent/3.1.5/juju-3.1.5-linux-amd64.tgz
Launching controller instance(s) on maas_cloud...
 - pecm6n (arch=amd64 mem=24G cores=8)
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 192.168.1.73:22
Connected to 192.168.1.73
Running machine configuration script...
Cloud-init v. 23.1.1-0ubuntu0~22.04.1 running 'init-local' at Wed, 04 Oct 2023 02:44:26 +0000. Up 6.48 seconds.
--- SKIP OUTPUTS ---
 controller-name:maas-controller controller-uuid:daca7aea-e4cd-4a9f-8a8b-1bf83a631066 juju-db-snap-channel:4.4/stable max-agent-state-size:524288 max-charm-state-size:2097152 max-debug-log-duration:24h0m0s max-prune-txn-batch-size:1000000 max-prune-txn-passes:100 max-txn-log-size:10M metering-url:https://api.jujucharms.com/omnibus/v3 migration-agent-wait-time:15m0s model-logfile-max-backups:2 model-logfile-max-size:10M model-logs-size:20M mongo-memory-profile:default non-synced-writes-to-raft-log:false prune-txn-query-count:1000 prune-txn-sleep-time:10ms set-numa-control-policy:false state-port:37017] ControllerCharmPath: ControllerCharmChannel:3.1/stable ControllerInheritedConfig:map[apt-http-proxy:http://192.168.1.10:8000 apt-https-proxy:http://192.168.1.10:8000 juju-http-proxy:http://192.168.1.10:8000 juju-https-proxy:http://192.168.1.10:8000 juju-no-proxy:10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,127.0.0.1,localhost snap-http-proxy:http://192.168.1.10:8000 snap-https-proxy:http://192.168.1.10:8000] RegionInheritedConfig:map[] InitialModelConfig:map[] BootstrapMachineInstanceId:pecm6n BootstrapMachineDisplayName:juju-1 BootstrapMachineConstraints:arch=amd64 mem=3584M tags=juju BootstrapMachineHardwareCharacteristics:arch=amd64 cores=8 mem=24576M tags=virtual,pod-console-logging,juju availability-zone=az1 ModelConstraints: CustomImageMetadata:[] StoragePools:map[]} BootstrapMachineAddresses:[local-cloud:192.168.1.73@oam-space(id:1)] BootstrapMachineJobs:[JobManageModel JobHostUnits] SharedSecret:XXX Provider:0x1b0f7c0 StorageProviderRegistry:[0xc000158140 {Providers:map[loop:0xc000132c90 rootfs:0xc000132c98 tmpfs:0xc000132ca0]}]}
2023-10-04 02:57:30 INFO juju.state addmachine.go:506 new machine "0" has preferred addresses: private "local-cloud:192.168.1.73@space:1", public "local-cloud:192.168.1.73@space:1"
2023-10-04 02:58:05 WARNING juju.state.pool.txnwatcher txnwatcher.go:338 txn watcher sync error: tomb: dying
2023-10-04 02:58:05 WARNING juju.state.pool.txnwatcher txnwatcher.go:346 txn watcher resume queued
ERROR cannot deploy controller application: deploying charmhub controller charm: resolving "ch:juju-controller": resolving with preferred channel: Post "https://api.charmhub.io/v2/charms/refresh": dial tcp 185.125.188.54:443: i/o timeout
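
The timeout shows the bootstrap machine dialing api.charmhub.io (185.125.188.54:443) directly, even though ControllerInheritedConfig in the log above already contains the juju-https-proxy key. A hypothetical way to confirm this from the bootstrap instance (assuming SSH access as the ubuntu user) is to compare a direct request with a proxied one:

$ ssh ubuntu@192.168.1.73 'curl -sI --max-time 10 https://api.charmhub.io/v2/charms/refresh'
# expected to time out, since the only route out is via the proxy
$ ssh ubuntu@192.168.1.73 'curl -sI --max-time 10 -x http://192.168.1.10:8000 https://api.charmhub.io/v2/charms/refresh'
# expected to return an HTTP response, showing the proxy route works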

This also happens with Juju v3.2; however, it works with v2.9. Bootstrapping also succeeds when the legacy http-proxy configuration is used instead:
$ cat model_defaults.yaml
http-proxy: http://192.168.1.10:8000
https-proxy: http://192.168.1.10:8000
juju-no-proxy: 127.0.0.1,localhost, <list of IPs>
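
With the legacy keys in place, the same bootstrap command as above completes successfully:

$ juju bootstrap --model-default ./model_defaults.yaml maas_cloud maas-controller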