[2.4, pod] RAM usage of Pod is not recognized as a constraint

Bug #1768131 reported by Markus Fritsche
This bug affects 1 person
Affects: MAAS
Status: Fix Released
Importance: High
Assigned to: Newell Jensen

Bug Description

Packages:

root@picard:~# dpkg -l '*maas*'|cat
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===============================-====================================-============-=================================================
ii maas 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all "Metal as a Service" is a physical cloud and IPAM
ii maas-cli 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all MAAS client and command-line interface
ii maas-common 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all MAAS server common files
ii maas-dhcp 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all MAAS DHCP server
ii maas-proxy 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all MAAS Caching Proxy
ii maas-rack-controller 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all Rack Controller for MAAS
ii maas-region-api 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all Region controller API service for MAAS
ii maas-region-controller 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all Region Controller for MAAS
ii python3-django-maas 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all MAAS server Django web framework (Python 3)
ii python3-maas-client 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all MAAS python API client (Python 3)
ii python3-maas-provisioningserver 2.4.0~beta2-6865-gec43e47e6-0ubuntu1 all MAAS server provisioning libraries (Python 3)

How to replicate:
I have four physical machines on which I used MAAS to deploy Ubuntu Server 18.04. I locked these machines, installed KVM etc., created a bridge, and added pods on those machines. Each machine has 16 GiB RAM, 4 CPU cores (2 physical × 2 hyperthreads) and 120 GB of SSD disk space (it's a little cluster toy setup, initially built with NUCs back in 2014).

I am using conjure-up (juju) to install the "Hadoop + Spark" charms.

All virtual machines are created on the same pod (which might be a conjure-up/juju issue), leaving it over-allocated:

Name: s2031
Type: Virsh (virtual systems)
Local storage (GiB): -61.8 free of 109.5
iSCSI storage (GiB): 0 free of 0
Cores: -4 free of 4
RAM: -24.9 free of 15.6

Over Commit Ratio is set to "1" for Memory and Cores.
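With an over-commit ratio of 1, the expected behaviour is that a compose request is refused once it would exceed the pod's total RAM, rather than driving the free figure negative as seen above. A minimal sketch of that missing check, using illustrative names and values (not actual MAAS internals):

```python
# Hypothetical sketch of the constraint the bug says is missing: before
# composing a VM on a pod, compute available RAM as
# (total * overcommit_ratio) - already_allocated and reject requests that
# exceed it. Function and parameter names are illustrative only.

def free_ram(total_mib, allocated_mib, overcommit_ratio=1.0):
    """RAM still available on the pod, honouring the over-commit ratio."""
    return total_mib * overcommit_ratio - allocated_mib

def can_compose(requested_mib, total_mib, allocated_mib, overcommit_ratio=1.0):
    """True only if the pod has enough uncommitted RAM for the request."""
    return requested_mib <= free_ram(total_mib, allocated_mib, overcommit_ratio)

# With roughly the reporter's pod (~15.6 GiB total, ratio 1) and most RAM
# already handed out, a further 8 GiB request should be refused instead of
# ending at "RAM: -24.9 free of 15.6":
print(can_compose(requested_mib=8192, total_mib=15974, allocated_mib=14336))
```

Raising the over-commit ratio above 1 would deliberately allow `free_ram` to exceed physical RAM, but with the ratio at 1 the negative values reported above should be impossible.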


description: updated
Changed in maas:
milestone: none → 2.4.0rc1
summary: - RAM usage of Pod is not recognized as a constraint
+ [2.4, pod] RAM usage of Pod is not recognized as a constraint
Changed in maas:
assignee: nobody → Newell Jensen (newell-jensen)
Changed in maas:
status: New → In Progress
Changed in maas:
importance: Undecided → High
no longer affects: maas/2.3
Changed in maas:
milestone: 2.4.0rc1 → 2.4.0rc2
Changed in maas:
status: In Progress → Fix Committed
Changed in maas:
status: Fix Committed → Fix Released