I'm trying to set up armhf testing on an arm64 host, as that's what we have in Scalingstack (no armhf images yet). The host is Ubuntu 15.10, with lxd 0.20-0ubuntu4.1 (no PPA).
$ uname -a
Linux arm64-lxd-test 4.2.0-18-generic #22-Ubuntu SMP Fri Nov 6 19:56:51 UTC 2015 aarch64 aarch64 aarch64 GNU/Linux
$ lxc launch ubuntu/xenial/armhf x1
Starting the container throws no error, and with debugging I don't see anything bad:
$ lxc start x1 --debug --verbose
DBUG[12-02|13:36:56] Fingering the daemon
DBUG[12-02|13:36:56] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"api_compat":1,"auth":"trusted","config":{"core.https_address":"10.43.41.223","images.remote_cache_expiry":"10"},"environment":{"addresses":["10.43.41.223"],"architectures":[4,3],"driver":"lxc","driver_version":"1.1.4","kernel":"Linux","kernel_architecture":"aarch64","kernel_version":"4.2.0-18-generic","server":"lxd","server_pid":1339,"server_version":"0.20","storage":"dir","storage_version":""}}}
DBUG[12-02|13:36:56] Pong received
DBUG[12-02|13:36:56] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":0,"config":{"volatile.base_image":"a406edc85653e7b3232ea1ae77e35b67dd42574cb4c7335e9b586a6b4ad6223c","volatile.eth0.hwaddr":"00:16:3e:38:aa:2c","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]"},"devices":{},"ephemeral":false,"expanded_config":{"volatile.base_image":"a406edc85653e7b3232ea1ae77e35b67dd42574cb4c7335e9b586a6b4ad6223c","volatile.eth0.hwaddr":"00:16:3e:38:aa:2c","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"hwaddr":"00:16:3e:38:aa:2c","nictype":"bridged","parent":"lxcbr0","type":"nic"}},"name":"x1","profiles":["default"],"status":{"status":"Stopped","status_code":102,"init":0,"ips":null}}}
DBUG[12-02|13:36:56] Putting {"action":"start","force":false,"timeout":-1}
to http://unix.socket/1.0/containers/x1/state
DBUG[12-02|13:36:56] Raw response: {"type":"async","status":"OK","status_code":100,"operation":"/1.0/operations/f17b8722-1573-4af8-a365-bc450bce6654","resources":null,"metadata":null}
DBUG[12-02|13:36:56] 1.0/operations/f17b8722-1573-4af8-a365-bc450bce6654/wait
DBUG[12-02|13:36:57] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"created_at":"2015-12-02T13:36:56.76183Z","updated_at":"2015-12-02T13:36:57.059047Z","status":"Success","status_code":200,"resources":null,"metadata":null,"may_cancel":false}}
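For what it's worth, the daemon's `"architectures":[4,3]` field above uses LXD's numeric architecture IDs. Decoding it (quick Python sketch; the ID mapping 3=armv7l, 4=aarch64 is my assumption based on LXD's architecture constants at the time) suggests the host does advertise armv7l alongside aarch64, which is why I expected this to work:

```python
# Assumed mapping of LXD's numeric architecture IDs (circa 0.20);
# 1=i686, 2=x86_64, 3=armv7l, 4=aarch64 per LXD's shared architecture constants.
ARCHES = {1: "i686", 2: "x86_64", 3: "armv7l", 4: "aarch64"}

def arch_names(ids):
    """Translate the daemon's numeric 'architectures' list to names."""
    return [ARCHES.get(i, f"unknown({i})") for i in ids]

# The value reported in the raw response above.
print(arch_names([4, 3]))
```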
But the container is not running afterwards. I'm attaching /var/log/lxd/x1/lxc.log; the most interesting bits are several occurrences of
WARN lxc_cgmanager - cgmanager.c:cgm_get:993 - do_cgm_get exited with error
and
NOTICE lxc_start - start.c:post_start:1265 - '/sbin/init' started with pid '2028'
WARN lxc_start - start.c:signal_handler:310 - invalid pid for SIGCHLD
DEBUG lxc_commands - commands.c:lxc_cmd_handler:893 - peer has disconnected
DEBUG lxc_commands - commands.c:lxc_cmd_handler:893 - peer has disconnected
DEBUG lxc_commands - commands.c:lxc_cmd_get_state:579 - 'x1' is in 'RUNNING' state
DEBUG lxc_start - start.c:signal_handler:314 - container init process exited
cgmanager.service itself is active and running, though.
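One more data point: the volatile.last_state.idmap recorded in the container state above is JSON-in-JSON, and decoding it (quick Python sketch below) just shows the ordinary unprivileged uid/gid ranges, so the id mapping itself looks sane to me:

```python
import json

# The volatile.last_state.idmap value, copied from the raw response above.
raw = ('[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},'
       '{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]')

def describe(idmap_json):
    """Render each idmap entry as a human-readable range mapping."""
    out = []
    for e in json.loads(idmap_json):
        kind = "uid" if e["Isuid"] else "gid"
        out.append(f"{kind}: container {e['Nsid']}-{e['Nsid'] + e['Maprange'] - 1}"
                   f" -> host {e['Hostid']}-{e['Hostid'] + e['Maprange'] - 1}")
    return out

for line in describe(raw):
    print(line)
```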
Is there some way to get a console for this, like we used to have with "lxc-start -n foo -F"?