Hi Ben,

Thanks very much for testing this out.

I've run that image on Brightbox Cloud and it takes 5 mins 25 seconds from
issuing the 'create' command to SSH being available with a 2TB disk. So
that's including all the server provisioning, the cloud-init key/metadata
stuff and the initial login delay, as well as the resize.
The output is correct:
ubuntu@srv-cdr4c:~$ df -h /dev/vda1
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       2.0T  771M  1.9T   1% /
If you do the same with the standard precise image from 'cloud-images' then
you get:
ubuntu@srv-pkjet:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       1.4T  772M  1.3T   1% /
The cloud-init logs have:
2013-02-15 14:26:47,275 - cc_resizefs.py[WARNING]: Failed to resize
filesystem (['resize2fs', '/run/cloudinit.resizefs.sziNUX'])
2013-02-15 14:26:47,280 - cc_resizefs.py[WARNING]: output=Filesystem at
/run/cloudinit.resizefs.sziNUX is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 128
Performing an on-line resize of /run/cloudinit.resizefs.sziNUX to 536868202
(4k) blocks.
error=resize2fs 1.42 (29-Nov-2011)
resize2fs: Operation not permitted While trying to add group #11264
And the file system is now corrupt as well.
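As a sanity check, those log numbers hang together if you assume the default ext4 geometry on these images (4k blocks, 32768 blocks per group, 32-byte group descriptors without the 64bit feature, so 128 descriptors per 4k block) — and the group where the resize died sits right at the 1.4T that df reports. A quick back-of-envelope script, with the figures taken straight from the log above:

```shell
#!/bin/sh
# Cross-check the resize2fs log figures against default ext4 geometry
# (4k blocks, 32768 blocks/group, 32-byte descriptors => 128 per block).
TARGET_BLOCKS=536868202   # new size from the log, in 4k blocks (~2.0T)
BLOCKS_PER_GROUP=32768
DESC_PER_BLOCK=128
FAILED_GROUP=11264        # the group resize2fs could not add

# Block groups at the target size, and descriptor blocks needed for them.
groups=$(( (TARGET_BLOCKS + BLOCKS_PER_GROUP - 1) / BLOCKS_PER_GROUP ))
desc_blocks=$(( (groups + DESC_PER_BLOCK - 1) / DESC_PER_BLOCK ))
echo "groups=$groups desc_blocks=$desc_blocks"   # desc_blocks=128 matches the log

# Byte offset where group 11264 begins, in GiB: this is where the resize
# stopped, i.e. the ~1.4T filesystem df shows on the unfixed image.
offset_gib=$(( FAILED_GROUP * BLOCKS_PER_GROUP * 4 / 1024 / 1024 ))
echo "resize stopped at ${offset_gib} GiB"
```

1408 GiB is ~1.375 TiB, which df -h rounds to the 1.4T shown above.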
So the corrected resize entry fixes the problem.
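For anyone following along, this is roughly what the fix amounts to — a sketch only, not the actual cloud-image build commands, and the filename and sizes here are illustrative. mke2fs's `-E resize=` extended option sizes the resize inode's reserved GDT blocks for the intended maximum, so a later online resize2fs has room to add the extra descriptor blocks:

```shell
# Sketch: build a small ext4 image whose resize inode reserves enough
# GDT blocks to grow online to 2 TiB (536870912 4k blocks). 'disk.img'
# and the 2 GiB starting size are made-up illustrative values.
dd if=/dev/zero of=disk.img bs=1M count=0 seek=2048   # 2 GiB sparse file
mkfs.ext4 -q -F -b 4096 -E resize=536870912 disk.img
dumpe2fs -h disk.img 2>/dev/null | grep -i 'Reserved GDT'
```

With too small a `resize=` value (or a resize inode sized only for the default growth limit), resize2fs hits exactly the "Operation not permitted" failure above when it runs out of reserved descriptor blocks.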
The build is somewhat slower than the usual 50 secs for a 20G server with
the standard image, but still within the bounds of usability, I think.
A nice-to-have would be a 'resizing disk...' message on the console. The
console on our cloud is available as soon as the VM starts and you can
watch the boot process.
Tests Linux Magazine ran a few years ago suggest that the 64MB journal should be
fine unless you need lots of little files in lots of directories:
http://www.linux-mag.com/id/7666/ (admittedly with a smaller disk).
On 15 February 2013 00:15, Ben Howard <email address hidden> wrote:
> In attempting to repro this using real disks on EC2 (4x 450GB ephemeral
> storage devices in LVM configuration), the resize times are truly
> dreadful. I have to question the utility in resizing a small cloud image
> to a massive image as it took nearly 45 minutes to do an online resize.
>
> But I have been unable to replicate this.
>
> With that said, Neil, can you test a build with the resize=... applied:
>
> http://people.canonical.com/~ben/drops/precise-server-cloudimg-amd64-disk1.img
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/955272
>
> Title:
> resize2fs fail with very large disks from small source image
>
> To manage notifications about this bug go to:
>
> https://bugs.launchpad.net/ubuntu/+source/cloud-initramfs-tools/+bug/955272/+subscriptions
>
--
Neil Wilson