Instances shutdown unexpectedly on Ubuntu 14.04

Bug #1385094 reported by James Page
This bug affects 3 people
Affects: nova-compute-flex (Ubuntu)
Status: Confirmed
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Log data:

2014-10-24 08:08:54.344 26791 INFO nova.compute.resource_tracker [-] Compute_service record updated for 76jay:76jay
2014-10-24 08:09:12.425 26791 AUDIT nova.compute.manager [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Starting instance...
2014-10-24 08:09:12.574 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Attempting claim: memory 64 MB, disk 1 GB
2014-10-24 08:09:12.575 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Total memory: 16031 MB, used: 512.00 MB
2014-10-24 08:09:12.576 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] memory limit: 80155.00 MB, free: 79643.00 MB
2014-10-24 08:09:12.576 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Total disk: 40 GB, used: 0.00 GB
2014-10-24 08:09:12.577 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] disk limit not specified, defaulting to unlimited
2014-10-24 08:09:12.594 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Claim successful
2014-10-24 08:09:12.870 26791 INFO nova.scheduler.client.report [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:13.031 26791 INFO nova.scheduler.client.report [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:17.394 26791 INFO nova.scheduler.client.report [-] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:35.610 26791 INFO ncflex.nova.virt.flex.containers [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] Starting unprivileged container
2014-10-24 08:09:55.255 26791 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-10-24 08:09:55.365 26791 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 16031, total allocated virtual ram (MB): 576
2014-10-24 08:09:55.365 26791 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 39
2014-10-24 08:09:55.366 26791 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 1, total allocated vcpus: 0
2014-10-24 08:09:55.366 26791 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2014-10-24 08:09:55.409 26791 INFO nova.scheduler.client.report [-] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:55.410 26791 INFO nova.compute.resource_tracker [-] Compute_service record updated for 76jay:76jay
2014-10-24 08:10:30.901 26791 AUDIT nova.compute.manager [req-637fd204-bf4e-4b49-8a73-698e41ece6f2 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Get console output
2014-10-24 08:10:39.270 26791 WARNING nova.compute.manager [-] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, current DB power_state: 4, current VM power_state: 4
2014-10-24 08:10:39.631 26791 INFO nova.compute.manager [req-a12cc078-fe01-4714-9df3-4cec5a64554b None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Instance is already powered off in the hypervisor when stop is called.
2014-10-24 08:10:57.256 26791 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-10-24 08:10:57.358 26791 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 16031, total allocated virtual ram (MB): 576
2014-10-24 08:10:57.359 26791 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 39
2014-10-24 08:10:57.360 26791 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 1, total allocated vcpus: 0
2014-10-24 08:10:57.360 26791 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2014-10-24 08:10:57.394 26791 INFO nova.scheduler.client.report [-] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:10:57.395 26791 INFO nova.compute.resource_tracker [-] Compute_service record updated for 76jay:76jay
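
For reference, the power_state value 4 reported in the WARNING line above corresponds to SHUTDOWN in Nova's power-state constants, which is why the compute manager immediately calls the stop API. A minimal sketch of those constants, assuming the stock Juno-era definitions in nova/compute/power_state.py:

    # Sketch of Nova's power_state constants (assumed from the Juno-era
    # nova/compute/power_state.py). The driver reported 4, i.e. SHUTDOWN,
    # so Nova treats the instance as already powered off.
    NOSTATE = 0x00
    RUNNING = 0x01
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07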

James Page (james-page)
tags: added: juno scale-testing
James Page (james-page) wrote :

  lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:mount_feature_enabled:54 - Operation not permitted - Error mounting sysfs
  lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:186 - If you really want to start this container, set
  lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:187 - lxc.aa_allow_incomplete = 1
  lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:188 - in your container configuration file
  lxc_container 1414138175.975 ERROR lxc_sync - sync.c:__sync_wait:51 - invalid sequence number 1. expected 4
  lxc_container 1414138175.975 ERROR lxc_start - start.c:__lxc_start:1087 - failed to spawn '92fe1ee2-519f-4335-b939-cec133a3f2ca'
  lxc_container 1414138175.976 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing hugetlb:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing perf_event:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing blkio:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing freezer:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing devices:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing memory:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing cpuacct:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
  lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing cpu:92fe1ee2-519f-4335-b939-cec133a3f2ca
  lxc_c...
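
The apparmor errors above include LXC's own suggested workaround: because the sysfs mount-feature probe fails with "Operation not permitted" on this host, the container will not start unless lxc.aa_allow_incomplete is set. A minimal illustration of that setting follows; where nova-compute-flex actually writes its container configuration is not shown in this report, so this is only a sketch of the directive named at apparmor_process_label_set:186-188, not a confirmed fix:

    # Allow the container to start even though the apparmor mount-feature
    # check could not be completed (see the lsm/apparmor.c errors above).
    lxc.aa_allow_incomplete = 1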


summary: - Instances shutdown unexpectedly
+ Instances shutdown unexpectedly on Ubuntu 14.04
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in nova-compute-flex (Ubuntu):
status: New → Confirmed