ne metadata={} admin_password= files=[(u'/home/ubuntu/guest.log', '\n 36c4bd4f-c4c7-4a8f-b76c-09bea6479ae7\n instance-0000000c\n 2097152\n 1\n \n \n \n test-arm64-1\n 2016-07-02 07:09:48\n \n 2048\n 20\n 0\n 0\n 1\n \n \n admin\n admin\n \n \n \n \n \n hvm\n /opt/stack/data/nova/instances/36c4bd4f-c4c7-4a8f-b76c-09bea6479ae7/kernel\n /opt/stack/data/nova/instances/36c4bd4f-c4c7-4a8f-b76c-09bea6479ae7/ramdisk\n root=/dev/sda console=tty0 console=ttyS0\n \n \n \n \n \n \n 1024\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n')] partition=None from (pid=9095) inject_data /opt/stack/nova/nova/virt/disk/api.py:399
2016-07-04 09:17:40.407 DEBUG nova.virt.disk.vfs.api [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] Instance for image image= partition=None from (pid=9095) instance_for_image /opt/stack/nova/nova/virt/disk/vfs/api.py:51
2016-07-04 09:17:40.408 DEBUG nova.virt.disk.vfs.api [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] Using primary VFSGuestFS from (pid=9095) instance_for_image /opt/stack/nova/nova/virt/disk/vfs/api.py:55
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: create: flags = 0, handle = 0x2011a590, program = python
libguestfs: trace: set_program "nova-compute"
libguestfs: trace: set_program = 0
libguestfs: trace: add_drive "/dev/null"
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: disk_create "/tmp/libguestfsTTsXUC/devnull1" "raw" 4096
libguestfs: trace: disk_create = 0
libguestfs: trace: add_drive = 0
libguestfs: trace: launch
libguestfs: trace: version
libguestfs: trace: version = 
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
libguestfs: launch: program=nova-compute
libguestfs: launch: version=1.32.2
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsTTsXUC
libguestfs: launch: umask=0022
libguestfs: launch: euid=1002
libguestfs: is_openable: /dev/kvm: Permission denied
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: warning: current user is not a member of the KVM group (group ID 121). This user cannot access /dev/kvm, so libguestfs may run very slowly. It is recommended that you 'chmod 0666 /dev/kvm' or add the current user to the KVM group (you might need to log out and log in again).
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: [00000ms] begin building supermin appliance
libguestfs: [00000ms] run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-1002/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu aarch64
libguestfs: command: run: \ /usr/lib/aarch64-linux-gnu/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-1002/appliance.d
supermin: version: 5.1.14
supermin: package handler: debian/dpkg
supermin: acquiring lock on /var/tmp/.guestfs-1002/lock
supermin: build: /usr/lib/aarch64-linux-gnu/guestfs/supermin.d
supermin: reading the supermin appliance
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/base.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/daemon.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/excludefiles type uncompressed excludefiles
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/hostfiles type uncompressed hostfiles
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/init.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/packages type uncompressed packages
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/packages-hfsplus type uncompressed packages
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/packages-reiserfs type uncompressed packages
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/packages-xfs type uncompressed packages
supermin: build: visiting /usr/lib/aarch64-linux-gnu/guestfs/supermin.d/udev-rules.tar.gz type gzip base image (tar)
supermin: mapping package names to installed packages
supermin: resolving full list of package dependencies
supermin: build: 193 packages, including dependencies
supermin: build: 6856 files
supermin: build: 3800 files, after matching excludefiles
supermin: build: 3809 files, after adding hostfiles
supermin: build: 3809 files, after removing unreadable files
supermin: build: 3813 files, after munging
supermin: kernel: picked kernel vmlinuz-4.4.0-28-generic
cp: cannot open '/boot/vmlinuz-4.4.0-28-generic' for reading: Permission denied
supermin: cp -p '/boot/vmlinuz-4.4.0-28-generic' '/var/tmp/.guestfs-1002/appliance.d.kxo2ix12/kernel': command failed, see earlier errors
libguestfs: trace: launch = -1 (error)
2016-07-04 09:17:40.815 ERROR nova.virt.libvirt.driver [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Error injecting data into image 289f8d67-f641-4ba0-b15a-7a0b7c99942a (libguestfs installed but not usable (/usr/bin/supermin exited with error status 1, see debug messages above))
2016-07-04 09:17:40.816 ERROR nova.compute.manager [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Instance failed to spawn
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Traceback (most recent call last):
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/compute/manager.py", line 2063, in _build_resources
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     yield resources
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/compute/manager.py", line 1907, in _build_and_run_instance
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     block_device_info=block_device_info)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2657, in spawn
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     admin_pass=admin_password)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3060, in _create_image
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     files)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2981, in _inject_data
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     instance=instance)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     self.force_reraise()
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     six.reraise(self.type_, self.value, self.tb)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2975, in _inject_data
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     mandatory=('files',))
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/virt/disk/api.py", line 401, in inject_data
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     fs = vfs.VFS.instance_for_image(image, partition)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/virt/disk/vfs/api.py", line 62, in instance_for_image
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     vfs.inspect_capabilities()
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]   File "/opt/stack/nova/nova/virt/disk/vfs/guestfs.py", line 85, in inspect_capabilities
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]     _("libguestfs installed but not usable (%s)") % e)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] NovaException: libguestfs installed but not usable (/usr/bin/supermin exited with error status 1, see debug messages above)
2016-07-04 09:17:40.816 TRACE nova.compute.manager [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867]
2016-07-04 09:17:40.819 INFO nova.compute.manager [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Terminating instance
2016-07-04 09:17:40.821 DEBUG nova.compute.manager [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Start destroying the instance on the hypervisor. from (pid=9095) _shutdown_instance /opt/stack/nova/nova/compute/manager.py:2175
2016-07-04 09:17:40.825 INFO nova.virt.libvirt.driver [-] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] During wait destroy, instance disappeared.
2016-07-04 09:17:40.826 INFO nova.virt.libvirt.firewall [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Attempted to unfilter instance which is not filtered
2016-07-04 09:17:40.827 DEBUG oslo_concurrency.processutils [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] Running cmd (subprocess): mv /opt/stack/data/nova/instances/b19dac5b-ac49-4a1c-8f88-baaa91565867 /opt/stack/data/nova/instances/b19dac5b-ac49-4a1c-8f88-baaa91565867_del from (pid=9095) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-04 09:17:40.934 DEBUG oslo_concurrency.processutils [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] CMD "mv /opt/stack/data/nova/instances/b19dac5b-ac49-4a1c-8f88-baaa91565867 /opt/stack/data/nova/instances/b19dac5b-ac49-4a1c-8f88-baaa91565867_del" returned: 0 in 0.107s from (pid=9095) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
2016-07-04 09:17:40.935 INFO nova.virt.libvirt.driver [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Deleting instance files /opt/stack/data/nova/instances/b19dac5b-ac49-4a1c-8f88-baaa91565867_del
2016-07-04 09:17:40.953 INFO nova.virt.libvirt.driver [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Deletion of /opt/stack/data/nova/instances/b19dac5b-ac49-4a1c-8f88-baaa91565867_del complete
2016-07-04 09:17:40.961 DEBUG oslo_messaging._drivers.amqpdriver [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] CALL msg_id: e7df88d3221e4396b92067cd30a20328 exchange 'nova' topic 'conductor' from (pid=9095) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:448
2016-07-04 09:17:41.742 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: e7df88d3221e4396b92067cd30a20328 from (pid=9095) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:296
2016-07-04 09:17:41.749 INFO nova.compute.manager [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Took 0.93 seconds to destroy the instance on the hypervisor.
2016-07-04 09:17:41.751 DEBUG nova.compute.claims [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] [instance: b19dac5b-ac49-4a1c-8f88-baaa91565867] Aborting claim: [Claim: 2048 MB memory, 20 GB disk] from (pid=9095) abort /opt/stack/nova/nova/compute/claims.py:122
2016-07-04 09:17:41.753 DEBUG oslo_concurrency.lockutils [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] Lock "compute_resources" acquired by "nova.compute.resource_tracker.abort_instance_claim" :: waited 0.000s from (pid=9095) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2016-07-04 09:17:41.760 DEBUG oslo_messaging._drivers.amqpdriver [req-5915d6e6-5cfe-419f-bfb0-f5ba4d80e2aa admin admin] CALL msg_id: db5cf55bd8a441a49662af0af553d1a3 exchange 'nova' topic 'conductor' from (pid=9095) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:448
2016-07-04 09:17:42.233 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: db5cf55bd8a441a49662af0af553d1a3 from (pid=9095) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:296
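The log shows two distinct permission problems on the compute host: libguestfs cannot open /dev/kvm ("is_openable: /dev/kvm: Permission denied"), and supermin's appliance build aborts because the host kernel image is not readable by the nova user ("cp: cannot open '/boot/vmlinuz-4.4.0-28-generic' for reading: Permission denied", euid=1002). The second is what makes inspect_capabilities() raise and the spawn fail. A minimal read-only diagnostic sketch, with the fixes the libguestfs warning itself suggests left as comments (paths and the user name are taken from this log and are assumptions for other hosts):

```shell
# Read-only checks for the two permission errors reported in the log above.
# Nothing here modifies the host; adjust paths for your kernel version.

# 1. "libguestfs: is_openable: /dev/kvm: Permission denied"
ls -l /dev/kvm 2>/dev/null || echo "/dev/kvm missing or not listable"

# 2. "cp: cannot open '/boot/vmlinuz-4.4.0-28-generic' for reading"
ls -l /boot/vmlinuz-* 2>/dev/null || echo "no kernel images listable under /boot"

# Possible remediation, run as root (first two lines are quoted from the
# libguestfs warning; the chmod of the kernel addresses the supermin failure,
# since Ubuntu installs /boot/vmlinuz-* mode 0600):
#   usermod -aG kvm <nova-compute-user>     # or: chmod 0666 /dev/kvm
#   chmod 0644 /boot/vmlinuz-4.4.0-28-generic
echo "diagnosis complete"
```

After making the kernel readable, re-running the boot should let supermin copy the kernel into the appliance; the /dev/kvm fix only affects speed (libguestfs falls back to TCG otherwise), not the supermin error itself.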