MAAS 2 Storage Problem

Bug #1642236 reported by Michael Foord
This bug report is a duplicate of: Bug #1677001: Storage remains in pending state.
This bug affects 1 person
Affects: Canonical Juju
Status: Triaged
Importance: High
Assigned to: Unassigned

Bug Description

Reported by Shilpa Kaul.

I have now upgraded to MAAS 2.0 and retried my charm deployment. This time I am not hitting the curtin issue: MAAS reports the machine as "Deployed" and the machine status is started. The new issue is that the juju agent does not start if I deploy the charm with --storage disks=maas. My previous issue is resolved, but this is a new issue I am seeing with MAAS 2.0.
The juju agent starts successfully if I don't use the --storage parameter at all, or if I use --storage disks=loop; the failure occurs only with --storage disks=maas.

The Prometheus charm supports storage so you can test with:

  juju deploy cs:~prometheus-charmers/prometheus --storage metrics-filesystem=maas

And then check that the unit installs properly.

# juju storage --volume --format=yaml
volumes:
  10/6:
    provider-id: volume-10-6
    storage: disks/6
    attachments:
      machines:
        "10":
          device: loop0
          read-only: false
      units:
        ibm-spectrum-scale-manager/2:
          machine: "10"
          location: /dev/loop0
    size: 1024
    persistent: false
    status:
      current: attached
      since: 09 Nov 2016 10:14:02+05:30
  11/7:
    provider-id: volume-11-7
    storage: disks/7
    attachments:
      machines:
        "11":
          device: loop0
          read-only: false
      units:
        ibm-spectrum-scale-manager/3:
          machine: "11"
          location: /dev/loop0
    size: 1024
    persistent: false
    status:
      current: attached
      since: 09 Nov 2016 10:53:45+05:30

ubuntu@vm6:~$ sudo lsblk --json
{
   "blockdevices": [
      {"name": "sda", "maj:min": "8:0", "rm": "0", "size": "30G", "ro": "0", "type": "disk", "mountpoint": null,
         "children": [
            {"name": "sda1", "maj:min": "8:1", "rm": "0", "size": "30G", "ro": "0", "type": "part", "mountpoint": "/"}
         ]
      },
      {"name": "sdb", "maj:min": "8:16", "rm": "0", "size": "30G", "ro": "0", "type": "disk", "mountpoint": null,
         "children": [
            {"name": "sdb1", "maj:min": "8:17", "rm": "0", "size": "30G", "ro": "0", "type": "part", "mountpoint": null}
         ]
      },
      {"name": "sr0", "maj:min": "11:0", "rm": "1", "size": "1024M", "ro": "0", "type": "rom", "mountpoint": null},
      {"name": "loop0", "maj:min": "7:0", "rm": "0", "size": "1G", "ro": "0", "type": "loop", "mountpoint": null}
   ]
}
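The lsblk output above already hints at the problem: the 1 GiB Juju volume surfaces as loop0 rather than as one of the real disks, even though the charm was deployed with --storage disks=maas. As a quick illustration (not part of the original report), the JSON can be checked programmatically; the device names below are abridged from the output above:

```python
import json

# lsblk --json output from the report, abridged to the fields used below.
LSBLK_JSON = """
{
   "blockdevices": [
      {"name": "sda", "size": "30G", "type": "disk"},
      {"name": "sdb", "size": "30G", "type": "disk"},
      {"name": "sr0", "size": "1024M", "type": "rom"},
      {"name": "loop0", "size": "1G", "type": "loop"}
   ]
}
"""

def find_devices(lsblk_json, dev_type):
    """Return the names of all block devices of the given lsblk type."""
    devices = json.loads(lsblk_json)["blockdevices"]
    return [d["name"] for d in devices if d["type"] == dev_type]

# The attached 1G volume is a loop device, not a MAAS-provisioned disk.
print(find_devices(LSBLK_JSON, "loop"))   # → ['loop0']
print(find_devices(LSBLK_JSON, "disk"))   # → ['sda', 'sdb']
```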

Revision history for this message
Michael Foord (mfoord) wrote :

I can reproduce this bug. The deployed machine never gets beyond "agent initializing". Nothing immediately obvious in the logs.

Revision history for this message
Michael Foord (mfoord) wrote :

When I try this with MAAS 1.9 it also fails to deploy, with the following error in the logs:

machine-0: 11:24:49 WARNING juju.provisioner starting instance: invalid device id "/dev/vdb"
machine-0: 11:25:00 WARNING juju.provisioner starting instance: invalid device id "/dev/vdb"
machine-0: 11:25:11 WARNING juju.provisioner starting instance: invalid device id "/dev/vdb"
machine-0: 11:25:23 ERROR juju.provisioner cannot start instance for machine "0": invalid device id "/dev/vdb"

Revision history for this message
Andrew Wilkins (axwalk) wrote :

Is that using KVM/vMAAS? It looks like there's a faulty assumption about what gets populated into id_path by MAAS. Possibly something changed after MAAS 1.8? If it's in a format like "/dev/vdb", then we should be using that as the device name in the VolumeAttachmentInfo.
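The fix Andrew suggests amounts to recognising when MAAS hands back a plain device path in id_path instead of a /dev/disk/by-id entry, and using the bare name in that case rather than rejecting it. A minimal sketch of that heuristic (a hypothetical Python helper for illustration only; Juju's actual implementation is Go code in the provisioner):

```python
import re

def device_from_id_path(id_path):
    """Interpret MAAS's id_path for a block device.

    Returns a (kind, value) pair: ("id", <device id>) when id_path is a
    /dev/disk/by-id link, or ("name", <device name>) when MAAS hands back
    a plain device path such as /dev/vdb (as seen here with KVM/vMAAS).
    Hypothetical sketch of the heuristic, not Juju's actual code.
    """
    by_id = re.match(r"^/dev/disk/by-id/(.+)$", id_path)
    if by_id:
        return ("id", by_id.group(1))
    plain = re.match(r"^/dev/([a-z]+[0-9]*)$", id_path)
    if plain:
        # Use the bare device name in the VolumeAttachmentInfo instead of
        # failing with an "invalid device id" error.
        return ("name", plain.group(1))
    raise ValueError("unrecognised id_path: %r" % id_path)

print(device_from_id_path("/dev/vdb"))  # → ('name', 'vdb')
print(device_from_id_path("/dev/disk/by-id/wwn-0x5000c500a1b2c3d4"))
```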

Changed in juju:
status: In Progress → Triaged
assignee: Michael Foord (mfoord) → nobody
milestone: 2.1.0 → 2.2.0
Curtis Hovey (sinzui)
Changed in juju:
milestone: 2.2-beta1 → 2.2-beta2
Curtis Hovey (sinzui)
Changed in juju:
milestone: 2.2-beta2 → 2.2-beta3
Changed in juju:
milestone: 2.2-beta3 → 2.2-beta4
