juju block storage on ec2 does not default to ebs-volumes

Bug #1566414 reported by Bruno Ranieri
Affects: juju-core
Status: Invalid
Importance: High
Assigned to: Andrew Wilkins

Bug Description

When deploying a charm with storage on EC2, the documentation states that the default block-storage provider is ebs:

>'If pool is not specified, then Juju will select the default storage provider for the current environment (e.g. cinder for openstack, ebs for ec2, loop for local)'
https://jujucharms.com/docs/1.25/storage

For a charm with (metadata.yaml):
> [...]
> storage:
>   registry-storage:
>     type: block
>     description: registry storage
>     minimum-size: 10G
>     multiple:
>       range: '1'

The execution of
> $ juju switch amazon
> $ juju bootstrap
> $ juju deploy --repository=. local:charm-name

Fails with:
> $ juju debug-log --replay
> [...]
> machine-1[14401]: 2016-04-05 12:06:01 ERROR juju.worker runner.go:223
> exited "deployer": cannot create agent
> config dir "/var/lib/juju/agents/unit-charm-name-0": mkdir
> /var/lib/juju/agents/unit-charm-name-0: no space left on device
>

Debugging on the deployed machine (and in the EC2 console) shows that no
EBS volume was created; instead, an image file was placed on the local disk:

> $ juju ssh 1
> ubuntu@ip-x.y.z.a:~$ df -h
> Filesystem                          Size  Used Avail Use% Mounted on
> udev                                1.9G   12K  1.9G   1% /dev
> tmpfs                               375M  204K  375M   1% /run
> /dev/disk/by-label/cloudimg-rootfs  7.8G  7.8G     0 100% /
> none                                4.0K     0  4.0K   0% /sys/fs/cgroup
> none                                5.0M     0  5.0M   0% /run/lock
> none                                1.9G     0  1.9G   0% /run/shm
> none                                100M     0  100M   0% /run/user
> /dev/xvdb                           3.9G  8.1M  3.7G   1% /mnt
>

Using the parameter '--storage registry-storage=ebs' is a work-around, but it is only feasible on the command line.
For deployment during test execution (e.g. via the 'juju test' runner), the storage provider cannot be hard-coded without sacrificing independence from the execution/deployment environment (e.g. MAAS).
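One hedged way to keep test code environment-independent, assuming the Juju 1.25 storage-pool commands are available, is to create an identically named pool in each environment and have the deployment reference only the pool name. The pool name `reg-pool` and the volume-type value are illustrative, not part of the original report:

```shell
# Sketch (names illustrative): define a pool called "reg-pool" per
# environment, backed by that environment's native provider.
juju storage pool create reg-pool ebs volume-type=gp2   # in the EC2 environment
# juju storage pool create reg-pool maas ...            # in a MAAS environment

# Test code then only ever references the pool name:
juju deploy --repository=. local:charm-name \
    --storage registry-storage=reg-pool,10G
```

This way the provider-specific detail lives in per-environment setup, not in the test runner's deploy command.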

Environment:
- charmbox https://github.com/juju-solutions/charmbox
> $ juju --version
> 1.25.3-trusty-amd64
> $ cat ~/.juju/environments.yaml
> default: local
> environments:
>   local:
>     type: local
>     admin-secret: xyz
>     lxc-clone: true
>     allow-lxc-loop-mounts: true
>     default-series: trusty
>
>   amazon:
>     type: ec2
>     region: eu-central-1
>     access-key: xyz
>     secret-key: xyz
>     admin-secret: xyz
>     default-series: trusty

Related Mail: https://lists.ubuntu.com/archives/juju/2016-April/006979.html

Revision history for this message
Andrew Wilkins (axwalk) wrote :

Bruno, I hope I've answered your questions sufficiently on the list. The documentation has been improved somewhat for 2.0; if you have time, it would be great if you could take a look and let us know if it is clearer to you:
    https://jujucharms.com/docs/devel/charms-storage

In particular, I think this bit should make things a little bit clearer:

"Preparing storage

By default, charms with storage requirements will allocate those resources on the root filesystem of the unit where they are deployed. To make use of additional storage resources, Juju needs to know what they are. Some providers (e.g. EC2) support generic default storage pools (see the documentation on provider support), but in the case of no default support or a desire to be more specific, use the juju storage pool create subcommand to create storage."

That talks about "root filesystem", and not block devices. For block devices, by default the charm will allocate a loop device. You must specify *some* constraints to get the provider's native storage: EBS, Cinder, etc. The idea behind this is to avoid surprising the user by silently allocating resources (disks) that cost them money. By specifying a constraint, they're making a conscious decision about allocating storage.
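Andrew's point can be illustrated with the two deploy forms side by side; this is a sketch using Juju 1.25 syntax, with the size value chosen for illustration:

```shell
# No constraint: a block-storage charm gets a loop device on the
# unit's root filesystem (which is what filled the disk above).
juju deploy --repository=. local:charm-name

# Explicit constraint: the user consciously opts into the provider's
# native (billable) storage, e.g. EBS on EC2.
juju deploy --repository=. local:charm-name \
    --storage registry-storage=ebs,10G
```

The explicit constraint is the "conscious decision" the comment describes: Juju never silently allocates storage that costs money.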

Changed in juju-core:
status: New → In Progress
importance: Undecided → High
assignee: nobody → Andrew Wilkins (axwalk)
Revision history for this message
Anastasia (anastasia-macmood) wrote :

Marking as Invalid since, per Andrew's comment, this is expected behavior.

Changed in juju-core:
status: In Progress → Invalid