On OpenStack Queens on CentOS, Magnum always fails cluster creation with the following errors:
{u'enable_prometheus_monitoring_deployment': u'CREATE aborted (Task create from SoftwareDeployment "enable_prometheus_monitoring_deployment" Stack "kubernetes-cluster-gsllydg6tqcl-kube_masters-exq6am3fngts-0-xeu4xrscjyqu" [77ef867c-e356-4091-a544-9761c114cfcd] Timed out)',
 u'kube_masters': u'CREATE aborted (Task create from ResourceGroup "kube_masters" Stack "kubernetes-cluster-gsllydg6tqcl" [5315420c-58b5-4278-a0cf-e76762fa6ed1] Timed out)',
 u'calico_service_deployment': u'CREATE aborted (Task create from SoftwareDeployment "calico_service_deployment" Stack "kubernetes-cluster-gsllydg6tqcl-kube_masters-exq6am3fngts-0-xeu4xrscjyqu" [77ef867c-e356-4091-a544-9761c114cfcd] Timed out)',
 u'0': u'resources[0]: Stack CREATE cancelled',
 u'enable_cert_manager_api_deployment': u'CREATE aborted (Task create from SoftwareDeployment "enable_cert_manager_api_deployment" Stack "kubernetes-cluster-gsllydg6tqcl-kube_masters-exq6am3fngts-0-xeu4xrscjyqu" [77ef867c-e356-4091-a544-9761c114cfcd] Timed out)',
 u'kubernetes_dashboard_deployment': u'CREATE aborted (Task create from SoftwareDeployment "kubernetes_dashboard_deployment" Stack "kubernetes-cluster-gsllydg6tqcl-kube_masters-exq6am3fngts-0-xeu4xrscjyqu" [77ef867c-e356-4091-a544-9761c114cfcd] Timed out)',
 u'enable_ingress_controller_deployment': u'CREATE aborted (Task create from SoftwareDeployment "enable_ingress_controller_deployment" Stack "kubernetes-cluster-gsllydg6tqcl-kube_masters-exq6am3fngts-0-xeu4xrscjyqu" [77ef867c-e356-4091-a544-9761c114cfcd] Timed out)'}
I looked inside the master VM and saw this in the logs:
/var/log/cloud-init.log:cloudinit.util.ProcessExecutionError: Unexpected error while running command.
/var/log/cloud-init.log-Command: ['/var/lib/cloud/instance/scripts/part-009']
/var/log/cloud-init-output.log- New size given (1280 extents) not larger than existing size (9983 extents)
/var/log/cloud-init-output.log:ERROR: There is not enough free space in volume group atomicos to create data volume of size MIN_DATA_SIZE=2
# lvs
  LV   VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root atomicos -wi-ao---- <39.00g
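If I read the two errors together, the numbers seem to line up (this is my interpretation, assuming LVM's default 4 MiB physical extent size): the part-009 script apparently tries to resize root down to 1280 extents, which lvresize refuses because it would be a shrink, and since root already spans the whole atomicos VG there are no free extents left for the Docker data volume:

```shell
# Sanity-checking the numbers from cloud-init-output.log.
# Assumption: the atomicos VG uses LVM's default 4 MiB physical extent size.
extent_mib=4
requested_extents=1280   # "New size given (1280 extents)"
existing_extents=9983    # "not larger than existing size (9983 extents)"
requested_gib=$(( requested_extents * extent_mib / 1024 ))
existing_gib=$(( existing_extents * extent_mib / 1024 ))
echo "root target: ${requested_gib} GiB; root current: ${existing_gib} GiB"
# root already fills the VG (~39 GiB, matching "<39.00g" in lvs), so the
# failed resize leaves zero free space for the MIN_DATA_SIZE=2 data volume.
```

That would also explain why my Pike setup looks different: there root is only 5 GiB, leaving room in the VG for the docker-pool volume.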
My Pike setup looks different, but I can see that a lot has changed since then:
# lvs
  LV          VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool atomicos twi-a-t--- 13.95g             0.00   0.16
  root        atomicos -wi-ao----  5.00g
The cluster was created exactly as described in the manual; the only difference is that I tried the medium flavor for the master node, in case it was a free-space problem.
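As a possible workaround I'm considering asking Magnum to put Docker storage on a Cinder volume instead, so the atomicos VG never needs resizing. I haven't verified that this avoids the failure; the template name, image, and flavors below are placeholders for my own:

```shell
# Unverified workaround sketch: back Docker storage with a Cinder volume.
# "k8s-template" and the image/flavor names are placeholders.
openstack coe cluster template create k8s-template \
  --image fedora-atomic \
  --external-network public \
  --master-flavor m1.medium \
  --flavor m1.medium \
  --coe kubernetes \
  --docker-volume-size 20
```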
# yum list | grep magnum
openstack-magnum-api.noarch 6.1.0-1.el7 @centos-openstack-queens
openstack-magnum-common.noarch 6.1.0-1.el7 @centos-openstack-queens
openstack-magnum-conductor.noarch 6.1.0-1.el7 @centos-openstack-queens
openstack-magnum-ui.noarch 4.0.0-1.el7 @centos-openstack-queens
python-magnum.noarch 6.1.0-1.el7 @centos-openstack-queens
I noticed this happens with both the Kubernetes and Swarm COEs.