20 VM startup using new snapshot = general error mounting filesystems

Bug #893494 reported by Gavin B
This bug affects 2 people
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

I have a test that creates an instance from a base image that I have independently verified can boot correctly, then snapshots, then uses the new snapshot to boot 20 instances.

The test polls the OSAPI interface every 5 seconds for the status of the booting "master" server, snapshots as soon as it sees 'ACTIVE', waits until the snapshot has been properly saved, then tears down the master. After that it sets off 20 VMs one after another using the snapshot.
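The poll-then-act sequence described above can be sketched roughly as follows. The report does not include the test code or its client, so `get_status` here is a hypothetical stand-in for whatever call the test makes against the OSAPI; only the polling logic itself is illustrated:

```python
import time


def wait_for_status(get_status, target, interval=5, timeout=600):
    """Poll get_status() every `interval` seconds until it returns
    `target`. Raise on an ERROR state or on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == target:
            return status
        if status == 'ERROR':
            raise RuntimeError("server entered ERROR state")
        time.sleep(interval)
    raise TimeoutError("timed out waiting for status %r" % target)


# Demo with a fake status source standing in for the real OSAPI call.
_states = iter(['BUILD', 'BUILD', 'ACTIVE'])
print(wait_for_status(lambda: next(_states), 'ACTIVE', interval=0))
```

The same helper would be reused to wait for the snapshot image to finish saving before the master is torn down.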

This has generally worked; however, I've hit an issue where all 20 VMs have failed with a "General error mounting filesystems".

Console output shows:

[ 1.385242] md: ... autorun DONE.
[ 1.386256] EXT3-fs (vda): error: couldn't mount because of unsupported optional features (240)
[ 1.450422] EXT2-fs (vda): error: couldn't mount because of unsupported optional features (240)
[ 1.513197] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null)
[ 1.514564] VFS: Mounted root (ext4 filesystem) readonly on device 252:0.
[ 1.516571] devtmpfs: mounted
[ 1.519673] Freeing unused kernel memory: 880k freed
[ 1.521106] Write protecting the kernel read-only data: 10240k
[ 1.523233] Freeing unused kernel memory: 80k freed
[ 1.531096] Freeing unused kernel memory: 1412k freed
lxcmount stop/pre-start, process 57
init: mountall main process (61) killed by FPE signal
General error mounting filesystems.
A maintenance shell will now be started.
CONTROL-D will terminate this shell and reboot the system.
Give root password for maintenance
(or type Control-D to continue):

I suspect the snapshot is getting corrupted either because there's a timing issue (saving too early) or because something in the path through Glance is broken - we've made some recent changes there.
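One way to separate the two suspects is to checksum the image at each stage. Glance records an MD5 checksum for each stored image, so a helper like the following (a hypothetical sketch, not part of the reported test) could verify a downloaded snapshot against that stored value, pointing at either the save timing or the Glance path if they disagree:

```python
import hashlib


def md5_of(path, chunk=1 << 20):
    """Return the hex MD5 of a file, read in `chunk`-byte blocks,
    for comparison against the image checksum Glance records."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()
```

If the downloaded snapshot's MD5 matches what Glance stored but the instance still fails to boot, the corruption happened before the upload (i.e. the snapshot was saved too early).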

nova version = 2011.3-1 + patches (hp2.1)

Revision history for this message
Thierry Carrez (ttx) wrote :

Any error on the nova-compute side when this happens? Can you reproduce with KVM instead of LXC?

Changed in nova:
status: New → Incomplete
Revision history for this message
Thierry Carrez (ttx) wrote :

We cannot solve the issue you reported without more information. Could you please provide the requested information?

Revision history for this message
Thierry Carrez (ttx) wrote :

This bug lacks the information necessary to reproduce and fix it, so it has been closed. Feel free to reopen the bug by providing the requested information and setting the bug status back to 'New'.

Changed in nova:
status: Incomplete → Invalid