Ansible playbook command, first run:

localhost:~$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

PLAY [bootstrap] ***************************************************************

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [prepare-env : Update SSH known hosts] ************************************

TASK [prepare-env : Check connectivity] ****************************************

TASK [prepare-env : Fail if host is unreachable] *******************************

TASK [prepare-env : debug] *****************************************************

TASK [prepare-env : Change initial password] ***********************************

TASK [prepare-env : Look for unmistakenly StarlingX package] *******************
changed: [localhost]

TASK [prepare-env : Fail if host is not running the right image] ***************

TASK [prepare-env : Check initial config flag] *********************************
ok: [localhost]

TASK [prepare-env : Set skip_play flag for host] *******************************

TASK [prepare-env : Skip remaining tasks if host is already unlocked] **********

TASK [prepare-env : Fail if any of the mandatory configurations are not defined] ***

TASK [prepare-env : Set initial address facts if not defined. They will be updated later] ***
ok: [localhost]

TASK [prepare-env : Set docker registries to default values if not specified] ***

TASK [prepare-env : Initialize some flags to be used in subsequent roles/tasks] ***
ok: [localhost]

TASK [prepare-env : Set initial facts] *****************************************
ok: [localhost]

TASK [prepare-env : Turn on use_docker_proxy flag] *****************************

TASK [prepare-env : Set default values for docker proxies if not defined] ******
ok: [localhost]

TASK [prepare-env : Retrieve software version number] **************************
changed: [localhost]

TASK [prepare-env : Fail if software version is not defined] *******************

TASK [prepare-env : Retrieve system type] **************************************
changed: [localhost]

TASK [prepare-env : Fail if system type is not defined] ************************

TASK [prepare-env : Set software version, system type config path facts] *******
ok: [localhost]

TASK [prepare-env : Set config path facts] *************************************
ok: [localhost]

TASK [prepare-env : Check Docker status] ***************************************
changed: [localhost]

TASK [prepare-env : Look for openrc file] **************************************
ok: [localhost]

TASK [prepare-env : Turn on replayed flag] *************************************

TASK [prepare-env : Check if the controller-0 host has been successfully provisioned] ***

TASK [prepare-env : Set flag to indicate that this host has been previously configured] ***

TASK [prepare-env : Find previous config file for this host] *******************

TASK [prepare-env : Fetch previous config file from this host] *****************

TASK [prepare-env : Read in last config values] ********************************

TASK [prepare-env : Turn on system attributes reconfiguration flag] ************

TASK [prepare-env : Turn on docker reconfiguration flag if docker config is changed] ***

TASK [prepare-env : Turn on service endpoints
reconfiguration flag if management and/or oam network config is changed] ***

TASK [prepare-env : Turn on network reconfiguration flag if any of the network related config is changed] ***

TASK [prepare-env : Turn on restart services flag if management/oam/cluster network or docker config is changed] ***

TASK [prepare-env : Turn off save_password flag if admin password has not changed] ***

TASK [prepare-env : Turn off save_config flag if system, network, and docker configurations have not changed] ***

TASK [prepare-env : debug] *****************************************************

TASK [prepare-env : Turn on skip_play flag] ************************************

TASK [prepare-env : Check volume groups] ***************************************
changed: [localhost]

TASK [prepare-env : Fail if volume groups are not configured] ******************

TASK [prepare-env : Check size of root disk] ***********************************
changed: [localhost]

TASK [prepare-env : Update root disk index for remote play] ********************

TASK [prepare-env : Set root disk and root disk size facts] ********************
ok: [localhost]

TASK [prepare-env : debug] *****************************************************
ok: [localhost] => {
    "msg": "[WARNING]: Root disk /dev/sda size is 447GB which is less than the standard size of 500GB. Please consult the Software Installation Guide for details."
}

TASK [prepare-env : Look for branding tar file] ********************************
ok: [localhost]

TASK [prepare-env : Fail if there are more than one branding tar files] ********

TASK [prepare-env : Look for other branding files] *****************************
ok: [localhost]

TASK [prepare-env : Fail if the branding filename is not valid] ****************

TASK [prepare-env : Mark environment as Ansible bootstrap] *********************
changed: [localhost]

TASK [prepare-env : debug] *****************************************************
ok: [localhost] => {
    "msg": "system_config_update flag: False, network_config_update flag: False, docker_config_update flag: False, restart_services flag: False, endpoints_reconfiguration_flag: False, save_password flag: True, save_config flag: True, skip_play flag: False"
}

TASK [validate-config : debug] *************************************************
ok: [localhost] => {
    "msg": [
        "System mode is duplex",
        "Timezone is UTC",
        "DNS servers is [u'192.168.90.60']",
        "PXE boot subnet is 169.254.202.0/24",
        "Management subnet is 10.10.62.0/24",
        "Cluster host subnet is 192.168.206.0/24",
        "Cluster pod subnet is 172.16.0.0/16",
        "Cluster service subnet is 10.96.0.0/12",
        "OAM subnet is 192.168.90.0/24",
        "OAM gateway is 192.168.90.1",
        "OAM floating ip is 192.168.90.240",
        "Dynamic address allocation is True",
        "Docker registries is [u'192.168.90.60']",
        "Docker HTTP proxy is undef",
        "Docker HTTPS proxy is undef",
        "Docker no proxy list is []"
    ]
}

TASK [validate-config : Set system mode fact] **********************************
ok: [localhost]

TASK [validate-config : debug] *************************************************
ok: [localhost] => {
    "msg": "System type is Standard, system mode will be set to duplex."
}

TASK [validate-config : Set system mode to duplex for Standard system] *********
ok: [localhost]

TASK [validate-config : Validate system mode if system type is All-in-one] *****

TASK [validate-config : Checking registered timezones] *************************
ok: [localhost]

TASK [validate-config : Fail if provided timezone is unknown] ******************

TASK [validate-config : Fail if the number of dns servers provided is not at least 1 and no more than 3] ***

TASK [validate-config : include] ***********************************************
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_dns.yml for localhost => (item=192.168.90.60)

TASK [validate-config : Check format of DNS Server IP] *************************
ok: [localhost] => {
    "msg": "DNS Server: 192.168.90.60"
}

TASK [validate-config : Perform ping test] *************************************
changed: [localhost]

TASK [validate-config : Fail if DNS Server is unreachable] *********************

TASK [validate-config : Validate provided subnets (both IPv4 & IPv6 notations)] ***
ok: [localhost] => (item={'value': u'192.168.90.0/24', 'key': u'external_oam_subnet'}) => {
    "msg": "external_oam_subnet: 192.168.90.0/24"
}
ok: [localhost] => (item={'value': u'192.168.206.0/24', 'key': u'cluster_host_subnet'}) => {
    "msg": "cluster_host_subnet: 192.168.206.0/24"
}
ok: [localhost] => (item={'value': u'192.168.90.1', 'key': u'external_oam_gateway_address'}) => {
    "msg": "external_oam_gateway_address: 192.168.90.1"
}
ok: [localhost] => (item={'value': u'10.96.0.0/12', 'key': u'cluster_service_subnet'}) => {
    "msg": "cluster_service_subnet: 10.96.0.0/12"
}
ok: [localhost] => (item={'value': u'169.254.202.0/24', 'key': u'pxeboot_subnet'}) => {
    "msg": "pxeboot_subnet: 169.254.202.0/24"
}
ok: [localhost] => (item={'value': u'239.1.1.0/28', 'key': u'management_multicast_subnet'}) => {
    "msg": "management_multicast_subnet: 239.1.1.0/28"
}
ok: [localhost] => (item={'value': u'10.10.62.0/24', 'key': u'management_subnet'}) => {
    "msg": "management_subnet: 10.10.62.0/24"
}
ok: [localhost] => (item={'value': u'172.16.0.0/16', 'key': u'cluster_pod_subnet'}) => {
    "msg": "cluster_pod_subnet: 172.16.0.0/16"
}
ok: [localhost] => (item={'value': u'192.168.90.240', 'key': u'external_oam_floating_address'}) => {
    "msg": "external_oam_floating_address: 192.168.90.240"
}

TASK [validate-config : Fail if cluster pod/service subnet size is too small (minimum size = 65536)] ***

TASK [validate-config : Fail if pxeboot/management/multicast subnet size is too small (minimum size = 16)] ***

TASK [validate-config : Fail if the size of the remaining subnets is too small (minimum size = 8)] ***

TASK [validate-config : Generate warning if subnet prefix is not typical for Standard systems] ***
ok: [localhost] => (item=239.1.1.0/28) => {
    "msg": "WARNING: Subnet prefix of less than /24 is not typical. This will affect scaling of the system!"
}

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Fail if IPv6 management on simplex] ********************

TASK [validate-config : Fail if IPv6 prefix length is too short] ***************

TASK [validate-config : Update localhost name ip mapping for IPv6] *************

TASK [validate-config : Fail if address allocation is misconfigured] ***********

TASK [validate-config : Set default start and end addresses based on provided subnets] ***
ok: [localhost]

TASK [validate-config : Build address pairs for validation, merging default and user provided values] ***
ok: [localhost]

TASK [validate-config : include] ***********************************************
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'oam_node', 'value': {u'use_default': False, u'start': u'192.168.90.106', u'end': u'192.168.90.105', u'subnet': u'192.168.90.0/24'}})
included:
/usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'cluster_pod', 'value': {u'use_default': True, u'start': u'172.16.0.1', u'end': u'172.16.255.254', u'subnet': u'172.16.0.0/16'}})
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'management', 'value': {u'use_default': True, u'start': u'10.10.62.2', u'end': u'10.10.62.254', u'subnet': u'10.10.62.0/24'}})
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'multicast', 'value': {u'use_default': True, u'start': u'239.1.1.1', u'end': u'239.1.1.14', u'subnet': u'239.1.1.0/28'}})
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'cluster_service', 'value': {u'use_default': True, u'start': u'10.96.0.1', u'end': u'10.111.255.254', u'subnet': u'10.96.0.0/12'}})
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'oam', 'value': {u'use_default': True, u'start': u'192.168.90.1', u'end': u'192.168.90.254', u'subnet': u'192.168.90.0/24'}})
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'cluster_host', 'value': {u'use_default': True, u'start': u'192.168.206.2', u'end': u'192.168.206.254', u'subnet': u'192.168.206.0/24'}})
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address_range.yml for localhost => (item={'key': u'pxeboot', 'value': {u'use_default': True, u'start': u'169.254.202.2', u'end': u'169.254.202.254', u'subnet': u'169.254.202.0/24'}})

TASK [validate-config : set_fact]
**********************************************
ok: [localhost]

TASK [validate-config : Validate oam_node start and end address format] ********
ok: [localhost] => {
    "msg": "oam_node: 192.168.90.106 192.168.90.105"
}

TASK [validate-config : Validate oam_node start and end range] *****************

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Validate cluster_pod start and end address format] *****

TASK [validate-config : Validate cluster_pod start and end range] **************

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Validate management start and end address format] ******

TASK [validate-config : Validate management start and end range] ***************

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Validate multicast start and end address format] *******

TASK [validate-config : Validate multicast start and end range] ****************

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Validate cluster_service start and end address format] ***

TASK [validate-config : Validate cluster_service start and end range] **********

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Validate oam start and end address format] *************

TASK [validate-config : Validate oam start and end range]
**********************

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Validate cluster_host start and end address format] ****

TASK [validate-config : Validate cluster_host start and end range] *************

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Validate pxeboot start and end address format] *********

TASK [validate-config : Validate pxeboot start and end range] ******************

TASK [validate-config : Fail if address range did not meet required criteria] ***

TASK [validate-config : Set floating addresses based on subnets or start addresses] ***
ok: [localhost]

TASK [validate-config : Set derived facts for subsequent tasks/roles] **********
ok: [localhost]

TASK [validate-config : Set facts for IP address provisioning against loopback interface] ***
ok: [localhost]

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Set default no-proxy address list (simplex)] ***********

TASK [validate-config : Set default no-proxy address list (non simplex)] *******

TASK [validate-config : Validate http proxy urls] ******************************

TASK [validate-config : Validate no proxy addresses] ***************************

TASK [validate-config : Add user defined no-proxy address list to default] *****

TASK [validate-config : Fail if secure registry flag is misconfigured] *********

TASK [validate-config : Default the unified registry to secure if not specified] ***
ok: [localhost]

TASK [validate-config : Turn on use_unified_registry flag] *********************
ok: [localhost]

TASK [validate-config : Update use_default_registries flag] ********************
ok: [localhost]

TASK [validate-config :
include] ***********************************************
included: /usr/share/ansible/stx-ansible/playbooks/bootstrap/roles/validate-config/tasks/validate_address.yml for localhost => (item=192.168.90.60)

TASK [validate-config : Check if the supplied address is a valid domain name or ipv4 address] ***
changed: [localhost]

TASK [validate-config : Check if the supplied address is of ipv6 with port format] ***

TASK [validate-config : Fail if the supplied address is not a valid ipv6] ******

TASK [validate-config : Fail if the supplied address is not a valid ipv6 with port] ***

TASK [validate-config : set_fact] **********************************************
ok: [localhost]

TASK [validate-config : Check if images archive location exists] ***************

TASK [validate-config : Get list of archived files] ****************************

TASK [validate-config : Turn on images archive flag] ***************************

TASK [validate-config : Create config workdir] *********************************
changed: [localhost]

TASK [validate-config : Generate config ini file for python sysinv db population script] ***
changed: [localhost] => (item=[BOOTSTRAP_CONFIG])
changed: [localhost] => (item=CONTROLLER_HOSTNAME=controller-0)
changed: [localhost] => (item=SYSTEM_TYPE=Standard)
changed: [localhost] => (item=SYSTEM_MODE=duplex)
changed: [localhost] => (item=TIMEZONE=UTC)
changed: [localhost] => (item=SW_VERSION=19.01)
changed: [localhost] => (item=NAMESERVERS=192.168.90.60)
changed: [localhost] => (item=PXEBOOT_SUBNET=169.254.202.0/24)
changed: [localhost] => (item=PXEBOOT_START_ADDRESS=169.254.202.2)
changed: [localhost] => (item=PXEBOOT_END_ADDRESS=169.254.202.254)
changed: [localhost] => (item=MANAGEMENT_SUBNET=10.10.62.0/24)
changed: [localhost] => (item=MANAGEMENT_START_ADDRESS=10.10.62.2)
changed: [localhost] => (item=MANAGEMENT_END_ADDRESS=10.10.62.254)
changed: [localhost] => (item=DYNAMIC_ADDRESS_ALLOCATION=True)
changed: [localhost] => (item=MANAGEMENT_INTERFACE=lo)
changed: [localhost] => (item=CONTROLLER_0_ADDRESS=10.10.62.3)
changed: [localhost] => (item=CLUSTER_HOST_SUBNET=192.168.206.0/24)
changed: [localhost] => (item=CLUSTER_HOST_START_ADDRESS=192.168.206.2)
changed: [localhost] => (item=CLUSTER_HOST_END_ADDRESS=192.168.206.254)
changed: [localhost] => (item=CLUSTER_POD_SUBNET=172.16.0.0/16)
changed: [localhost] => (item=CLUSTER_POD_START_ADDRESS=172.16.0.1)
changed: [localhost] => (item=CLUSTER_POD_END_ADDRESS=172.16.255.254)
changed: [localhost] => (item=CLUSTER_SERVICE_SUBNET=10.96.0.0/12)
changed: [localhost] => (item=CLUSTER_SERVICE_START_ADDRESS=10.96.0.1)
changed: [localhost] => (item=CLUSTER_SERVICE_END_ADDRESS=10.96.0.1)
changed: [localhost] => (item=EXTERNAL_OAM_SUBNET=192.168.90.0/24)
changed: [localhost] => (item=EXTERNAL_OAM_START_ADDRESS=192.168.90.1)
changed: [localhost] => (item=EXTERNAL_OAM_END_ADDRESS=192.168.90.254)
changed: [localhost] => (item=EXTERNAL_OAM_GATEWAY_ADDRESS=192.168.90.1)
changed: [localhost] => (item=EXTERNAL_OAM_FLOATING_ADDRESS=192.168.90.240)
changed: [localhost] => (item=EXTERNAL_OAM_0_ADDRESS=192.168.90.106)
changed: [localhost] => (item=EXTERNAL_OAM_1_ADDRESS=192.168.90.105)
changed: [localhost] => (item=MANAGEMENT_MULTICAST_SUBNET=239.1.1.0/28)
changed: [localhost] => (item=MANAGEMENT_MULTICAST_START_ADDRESS=239.1.1.1)
changed: [localhost] => (item=MANAGEMENT_MULTICAST_END_ADDRESS=239.1.1.14)
changed: [localhost] => (item=DOCKER_HTTP_PROXY=undef)
changed: [localhost] => (item=DOCKER_HTTPS_PROXY=undef)
changed: [localhost] => (item=DOCKER_NO_PROXY=)
changed: [localhost] => (item=DOCKER_REGISTRIES=192.168.90.60)
changed: [localhost] => (item=USE_DEFAULT_REGISTRIES=False)
changed: [localhost] => (item=IS_SECURE_REGISTRY=True)
changed: [localhost] => (item=RECONFIGURE_ENDPOINTS=False)

TASK [validate-config : Write simplex flag] ************************************
changed: [localhost]

TASK [store-passwd : debug] ****************************************************

TASK
[store-passwd : set_fact] *************************************************

TASK [store-passwd : Print warning if admin credentials are not stored in vault] ***
ok: [localhost] => {
    "msg": "[WARNING: Default admin username and password (unencrypted) are used. Consider storing both of these variables in Ansible vault.]"
}

TASK [store-passwd : Set admin username and password facts] ********************
ok: [localhost]

TASK [store-passwd : Look for password rules file] *****************************
ok: [localhost]

TASK [store-passwd : Fail if password rules file is missing] *******************

TASK [store-passwd : Get password rules] ***************************************
changed: [localhost]

TASK [store-passwd : Get password rules description] ***************************
changed: [localhost]

TASK [store-passwd : Set password regex facts] *********************************
ok: [localhost]

TASK [store-passwd : Fail if password regex cannot be found] *******************

TASK [store-passwd : Set password regex description fact] **********************

TASK [store-passwd : Validate admin password] **********************************
changed: [localhost]

TASK [store-passwd : Fail if provided admin password does not meet required complexity] ***

TASK [store-passwd : Store admin password] *************************************
changed: [localhost]

TASK [apply-bootstrap-manifest : Create config workdir] ************************
changed: [localhost]

TASK [apply-bootstrap-manifest : Generating static config data] ****************
changed: [localhost]

TASK [apply-bootstrap-manifest : Fail if static hieradata cannot be generated] ***

TASK [apply-bootstrap-manifest : Applying puppet bootstrap manifest] ***********
changed: [localhost]

TASK [apply-bootstrap-manifest : debug] ****************************************
ok: [localhost] => {
    "bootstrap_manifest": {
        "changed": true,
        "cmd": [
            "/usr/local/bin/puppet-manifest-apply.sh",
            "/tmp/hieradata",
            "10.10.62.3",
            "controller",
            "ansible_bootstrap",
            ">",
            "/tmp/apply_manifest.log"
        ],
        "delta": "0:05:14.465197",
        "end": "2019-05-24 15:45:18.859843",
        "failed": false,
        "rc": 0,
        "start": "2019-05-24 15:40:04.394646",
        "stderr": "cp: cannot stat ‘/tmp/hieradata/10.10.62.3.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory\ncp: cannot stat ‘>’: No such file or directory",
        "stderr_lines": [
            "cp: cannot stat ‘/tmp/hieradata/10.10.62.3.yaml’: No such file or directory",
            "cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory",
            "cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory",
            "cp: cannot stat ‘>’: No such file or directory"
        ],
        "stdout": "Applying puppet ansible_bootstrap manifest...\n[DONE]",
        "stdout_lines": [
            "Applying puppet ansible_bootstrap manifest...",
            "[DONE]"
        ]
    }
}

TASK [apply-bootstrap-manifest : Fail if puppet manifest apply script returns an error] ***

TASK [apply-bootstrap-manifest : Ensure Puppet directory exists] ***************
changed: [localhost]

TASK [apply-bootstrap-manifest : Persist puppet working files] *****************
changed: [localhost]

TASK [persist-config : Delete the previous python_keyring directory if exists] ***
ok: [localhost]

TASK [persist-config : Persist keyring data] ***********************************
changed: [localhost]

TASK [persist-config : Ensure replicated config parent directory exists] *******
changed: [localhost]

TASK [persist-config : Get list of new config files] ***************************
ok: [localhost]

TASK [persist-config : Remove existing config files from permanent location] ***
ok: [localhost] => (item={u'rusr': True, u'uid': 0, u'rgrp': True, u'xoth': False, u'islnk': False, u'woth': False, u'nlink': 1, u'issock': False, u'mtime': 1558712394.481447, u'gr_name': u'root', u'path': u'/tmp/config/bootstrap_config', u'xusr': False, u'atime': 1558712394.481447, u'inode':
105772, u'isgid': False, u'size': 1374, u'isdir': False, u'ctime': 1558712394.481447, u'roth': True, u'isblk': False, u'xgrp': False, u'isuid': False, u'dev': 34, u'wgrp': False, u'isreg': True, u'isfifo': False, u'mode': u'0644', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True})
ok: [localhost] => (item={u'rusr': True, u'uid': 0, u'rgrp': True, u'xoth': True, u'islnk': False, u'woth': False, u'nlink': 2, u'issock': False, u'mtime': 1558712401.5494463, u'gr_name': u'root', u'path': u'/tmp/config/ssh_config', u'xusr': True, u'atime': 1558712401.3644464, u'inode': 97089, u'isgid': False, u'size': 120, u'isdir': True, u'ctime': 1558712401.5494463, u'roth': True, u'isblk': False, u'xgrp': True, u'isuid': False, u'dev': 34, u'wgrp': False, u'isreg': False, u'isfifo': False, u'mode': u'0755', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True})

TASK [persist-config : Move new config files to permanent location] ************
changed: [localhost]

TASK [persist-config : Delete working config directory] ************************
changed: [localhost]

TASK [persist-config : Set Postgres, PXE, branding config directory fact] ******
ok: [localhost]

TASK [persist-config : debug] **************************************************
ok: [localhost] => {
    "msg": "postgres_config_dir: /opt/platform/config/19.01/postgresql pxe_config_dir: /opt/platform/config/19.01/pxelinux.cfg branding_config_dir: /opt/platform/config/19.01/pxelinux.cfg"
}

TASK [persist-config : Ensure Postres, PXE config directories exist] ***********
changed: [localhost] => (item=/opt/platform/config/19.01/postgresql)
changed: [localhost] => (item=/opt/platform/config/19.01/pxelinux.cfg)

TASK [persist-config : Get list of Postgres conf files] ************************
ok: [localhost]

TASK [persist-config : Copy postgres conf files for mate] **********************
changed: [localhost] => (item={u'rusr': True, u'uid': 120, u'rgrp': True, u'xoth': False, u'islnk': False, u'woth': False, u'nlink':
1, u'issock': False, u'mtime': 1558712532.7144387, u'gr_name': u'postgres', u'path': u'/etc/postgresql/pg_hba.conf', u'xusr': False, u'atime': 1558712533.1094387, u'inode': 666624, u'isgid': False, u'size': 929, u'isdir': False, u'ctime': 1558712532.7184386, u'roth': False, u'isblk': False, u'xgrp': False, u'isuid': False, u'dev': 2051, u'wgrp': False, u'isreg': True, u'isfifo': False, u'mode': u'0640', u'pw_name': u'postgres', u'gid': 120, u'ischr': False, u'wusr': True})
changed: [localhost] => (item={u'rusr': True, u'uid': 120, u'rgrp': False, u'xoth': False, u'islnk': False, u'woth': False, u'nlink': 1, u'issock': False, u'mtime': 1558712532.6994386, u'gr_name': u'postgres', u'path': u'/etc/postgresql/postgresql.conf', u'xusr': False, u'atime': 1558712533.0944386, u'inode': 666618, u'isgid': False, u'size': 20195, u'isdir': False, u'ctime': 1558712532.6994386, u'roth': False, u'isblk': False, u'xgrp': False, u'isuid': False, u'dev': 2051, u'wgrp': False, u'isreg': True, u'isfifo': False, u'mode': u'0600', u'pw_name': u'postgres', u'gid': 120, u'ischr': False, u'wusr': True})
changed: [localhost] => (item={u'rusr': True, u'uid': 120, u'rgrp': True, u'xoth': False, u'islnk': False, u'woth': False, u'nlink': 1, u'issock': False, u'mtime': 1558712532.6864386, u'gr_name': u'postgres', u'path': u'/etc/postgresql/pg_ident.conf', u'xusr': False, u'atime': 1558712533.1094387, u'inode': 666626, u'isgid': False, u'size': 47, u'isdir': False, u'ctime': 1558712532.6904387, u'roth': False, u'isblk': False, u'xgrp': False, u'isuid': False, u'dev': 2051, u'wgrp': False, u'isreg': True, u'isfifo': False, u'mode': u'0640', u'pw_name': u'postgres', u'gid': 120, u'ischr': False, u'wusr': True})

TASK [persist-config : Create a symlink to PXE config files] *******************
changed: [localhost]

TASK [persist-config : Check if copying of branding files for mate is required] ***
ok: [localhost]

TASK [persist-config : Ensure branding config directory exists] ****************
changed:
[localhost]

TASK [persist-config : Check if horizon-region-exclusion.csv file exists] ******
ok: [localhost]

TASK [persist-config : Copy horizon-region-exclusions.csv if exists] ***********
changed: [localhost]

TASK [persist-config : Check if branding tar files exist (there should be only one)] ***
ok: [localhost]

TASK [persist-config : Copy branding tar files] ********************************

TASK [persist-config : Get grub default kernel] ********************************
changed: [localhost]

TASK [persist-config : Add default security feature to kernel parameters] ******
changed: [localhost] => (item=grubby --update-kernel=/boot/vmlinuz-3.10.0-957.1.3.el7.1.tis.x86_64 --args='nopti nospectre_v2')
changed: [localhost] => (item=grubby --efi --update-kernel=/boot/vmlinuz-3.10.0-957.1.3.el7.1.tis.x86_64 --args='nopti nospectre_v2')

TASK [persist-config : Resize filesystems (default)] ***************************
changed: [localhost] => (item=lvextend -L20G /dev/cgts-vg/pgsql-lv)
changed: [localhost] => (item=lvextend -L10G /dev/cgts-vg/cgcs-lv)
changed: [localhost] => (item=lvextend -L16G /dev/cgts-vg/dockerdistribution-lv)
changed: [localhost] => (item=lvextend -L40G /dev/cgts-vg/backup-lv)
changed: [localhost] => (item=drbdadm -- --assume-peer-has-space resize all)
changed: [localhost] => (item=resize2fs /dev/drbd0)
changed: [localhost] => (item=resize2fs /dev/drbd3)
changed: [localhost] => (item=resize2fs /dev/drbd8)

TASK [persist-config : Further resize if root disk size is larger than 240G] ***
changed: [localhost] => (item=lvextend -L40G /dev/cgts-vg/pgsql-lv)
changed: [localhost] => (item=lvextend -L20G /dev/cgts-vg/cgcs-lv)
changed: [localhost] => (item=lvextend -L50G /dev/cgts-vg/backup-lv)
changed: [localhost] => (item=drbdadm -- --assume-peer-has-space resize all)
changed: [localhost] => (item=resize2fs /dev/drbd0)
changed: [localhost] => (item=resize2fs /dev/drbd3)

TASK [persist-config : Set input parameters to populate config script] *********
ok:
[localhost]

TASK [persist-config : Update input parameters with reconfigure system flag] ***

TASK [persist-config : Update input parameters with reconfigure network flag] ***

TASK [persist-config : Update input parameters with reconfigure service flag] ***

TASK [persist-config : Update input parameters if config from previous play is missing] ***

TASK [persist-config : debug] **************************************************
ok: [localhost] => {
    "script_input": "/opt/platform/config/19.01/bootstrap_config"
}

TASK [persist-config : Shutdown Maintenance services] **************************

TASK [persist-config : Shutdown FM services] ***********************************

TASK [persist-config : Shut down and remove Kubernetes components] *************

TASK [persist-config : Clear etcd data cache] **********************************

TASK [persist-config : Restart etcd] *******************************************

TASK [persist-config : Set facts derived from previous network configurations] ***

TASK [persist-config : Set facts derived from previous floating addresses] *****

TASK [persist-config : Set facts for the removal of addresses assigned to loopback interface] ***

TASK [persist-config : Remove loopback interface in sysinv db and associated addresses] ***

TASK [persist-config : Remove the .config_applied flag from previous run before reconfiguring service endpoints] ***

TASK [persist-config : Add the new management address for service endpoints reconfiguration] ***

TASK [persist-config : Saving config in sysinv database] ***********************
changed: [localhost]

TASK [persist-config : debug] **************************************************
ok: [localhost] => {
    "populate_result": {
        "changed": true,
        "failed": false,
        "failed_when_result": false,
        "rc": 0,
        "stderr": "",
        "stderr_lines": [],
        "stdout": "Populating system config...\nPopulating load config...\nPopulating management network...\nPopulating pxeboot network...\nPopulating oam network...\nPopulating multicast
network...\nPopulating cluster host network...\nPopulating cluster pod network...\nPopulating cluster service network...\nNetwork config completed.\nPopulating DNS config...\nPopulating docker config...\nDocker registry config completed.\nManagement mac = 00:00:00:00:00:00\nRoot fs device = /dev/disk/by-path/pci-0000:00:14.0-ata-3.0\nBoot device = /dev/disk/by-path/pci-0000:00:14.0-ata-3.0\nConsole = tty0\nTboot = false\nInstall output = text\nHost values = {'tboot': 'false', 'install_output': 'text', 'rootfs_device': '/dev/disk/by-path/pci-0000:00:14.0-ata-3.0', 'boot_device': '/dev/disk/by-path/pci-0000:00:14.0-ata-3.0', 'availability': 'offline', 'mgmt_mac': '00:00:00:00:00:00', 'console': 'tty0', 'mgmt_ip': '10.10.62.3', 'hostname': 'controller-0', 'operational': 'disabled', 'invprovision': 'provisioned', 'administrative': 'locked', 'personality': 'controller'}\nPopulating ceph-mon config for controller-0...\nPopulating ceph storage backend config...\nSuccessfully updated the initial system config.\n", "stdout_lines": [ "Populating system config...", "Populating load config...", "Populating management network...", "Populating pxeboot network...", "Populating oam network...", "Populating multicast network...", "Populating cluster host network...", "Populating cluster pod network...", "Populating cluster service network...", "Network config completed.", "Populating DNS config...", "Populating docker config...", "Docker registry config completed.", "Management mac = 00:00:00:00:00:00", "Root fs device = /dev/disk/by-path/pci-0000:00:14.0-ata-3.0", "Boot device = /dev/disk/by-path/pci-0000:00:14.0-ata-3.0", "Console = tty0", "Tboot = false", "Install output = text", "Host values = {'tboot': 'false', 'install_output': 'text', 'rootfs_device': '/dev/disk/by-path/pci-0000:00:14.0-ata-3.0', 'boot_device': '/dev/disk/by-path/pci-0000:00:14.0-ata-3.0', 'availability': 'offline', 'mgmt_mac': '00:00:00:00:00:00', 'console': 'tty0', 'mgmt_ip': '10.10.62.3', 'hostname': 
'controller-0', 'operational': 'disabled', 'invprovision': 'provisioned', 'administrative': 'locked', 'personality': 'controller'}", "Populating ceph-mon config for controller-0...", "Populating ceph storage backend config...", "Successfully updated the initial system config." ] } } TASK [persist-config : Fail if populate config script throws an exception] ***** TASK [persist-config : Wait for service endpoints reconfiguration to complete] *** TASK [persist-config : Update sysinv API bind host with new management floating IP] *** TASK [persist-config : Restart sysinv-agent and sysinv-api to pick up sysinv.conf update] *** TASK [persist-config : Ensure docker config directory exists] ****************** TASK [persist-config : Ensure docker proxy config exists] ********************** TASK [persist-config : Write header to docker proxy conf file] ***************** TASK [persist-config : Add http proxy URL to docker proxy conf file] *********** TASK [persist-config : Add https proxy URL to docker proxy conf file] ********** TASK [persist-config : Add no proxy address list to docker proxy config file] *** TASK [persist-config : Restart Docker] ***************************************** TASK [persist-config : Set pxeboot files source if address allocation is dynamic] *** ok: [localhost] TASK [persist-config : Set pxeboot files source if address allocation is static] *** TASK [persist-config : Set pxeboot files symlinks] ***************************** changed: [localhost] => (item={u'dest': u'pxelinux.cfg/default', u'src': u'pxelinux.cfg.files/default'}) changed: [localhost] => (item={u'dest': u'pxelinux.cfg/grub.cfg', u'src': u'pxelinux.cfg.files/grub.cfg'}) TASK [persist-config : Update the management_interface in platform.conf] ******* changed: [localhost] TASK [persist-config : Add new entries to platform.conf] *********************** changed: [localhost] => (item=region_config=no) changed: [localhost] => (item=system_mode=simplex) changed: [localhost] => 
(item=sw_version=19.01) changed: [localhost] => (item=vswitch_type=none) TASK [persist-config : Update resolv.conf with list of dns servers] ************ changed: [localhost] => (item=192.168.90.60) TASK [persist-config : Remove localhost address from resolv.conf] ************** ok: [localhost] TASK [bringup-essential-services : Add loopback interface] ********************* changed: [localhost] => (item=source /etc/platform/openrc; system host-if-add controller-0 lo virtual none lo -c platform --networks mgmt -m 1500) changed: [localhost] => (item=source /etc/platform/openrc; system host-if-modify controller-0 -c platform --networks cluster-host lo) changed: [localhost] => (item=ip addr add 192.168.206.3/24 brd 192.168.206.255 dev lo scope host label lo:5) changed: [localhost] => (item=ip addr add 10.10.62.3/24 brd 10.10.62.255 dev lo scope host label lo:1) changed: [localhost] => (item=ip addr add 169.254.202.2/24 dev lo scope host) changed: [localhost] => (item=ip addr add 192.168.206.2/24 dev lo scope host) changed: [localhost] => (item=ip addr add 10.10.62.5/24 dev lo scope host) changed: [localhost] => (item=ip addr add 10.10.62.6/24 dev lo scope host) TASK [bringup-essential-services : Add management floating adddress if this is the initial play] *** changed: [localhost] TASK [bringup-essential-services : Remove previous management floating address if management network config has changed] *** TASK [bringup-essential-services : Remove existing /etc/hosts] ***************** changed: [localhost] TASK [bringup-essential-services : Populate /etc/hosts] ************************ changed: [localhost] => (item=127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4) changed: [localhost] => (item=10.10.62.2 controller) changed: [localhost] => (item=192.168.206.3 controller-0-infra) changed: [localhost] => (item=169.254.202.2 pxecontroller) changed: [localhost] => (item=192.168.90.240 oamcontroller) changed: [localhost] => (item=10.10.62.5 
controller-platform-nfs) changed: [localhost] => (item=10.10.62.4 controller-1) changed: [localhost] => (item=10.10.62.3 controller-0) changed: [localhost] => (item=192.168.206.4 controller-1-infra) changed: [localhost] => (item=10.10.62.6 controller-nfs) TASK [bringup-essential-services : Save hosts file to permanent location] ****** changed: [localhost] TASK [bringup-essential-services : Update name service caching server] ********* changed: [localhost] TASK [bringup-essential-services : Set default directory for image files copy] *** TASK [bringup-essential-services : Copy Docker images to remote host] ********** TASK [bringup-essential-services : Adjust the images directory fact for local host] *** TASK [bringup-essential-services : Get list of archived files] ***************** TASK [bringup-essential-services : Load system images] ************************* TASK [bringup-essential-services : Setup iptables for Kubernetes] ************** changed: [localhost] => (item=net.bridge.bridge-nf-call-ip6tables = 1) changed: [localhost] => (item=net.bridge.bridge-nf-call-iptables = 1) TASK [bringup-essential-services : Create daemon.json file for insecure registry] *** TASK [bringup-essential-services : Update daemon.json with registry IP] ******** TASK [bringup-essential-services : Restart docker] ***************************** TASK [bringup-essential-services : Update kernel parameters for iptables] ****** changed: [localhost] TASK [bringup-essential-services : Create manifests directory required by kubelet] *** changed: [localhost] TASK [bringup-essential-services : Create kubelet cgroup for minimal set] ****** changed: [localhost] => (item=cpuset) changed: [localhost] => (item=cpu) changed: [localhost] => (item=cpuacct) changed: [localhost] => (item=memory) changed: [localhost] => (item=systemd) TASK [bringup-essential-services : Get default k8s cpuset] ********************* changed: [localhost] TASK [bringup-essential-services : Get default k8s nodeset] 
******************** changed: [localhost] TASK [bringup-essential-services : Set mems for cpuset controller] ************* changed: [localhost] TASK [bringup-essential-services : Set cpus for cpuset controller] ************* changed: [localhost] TASK [bringup-essential-services : Create a tasks file for cpuset controller] *** changed: [localhost] TASK [bringup-essential-services : Enable kubelet] ***************************** changed: [localhost] TASK [bringup-essential-services : Create Kube admin yaml] ********************* changed: [localhost] TASK [bringup-essential-services : Update Kube admin yaml with network info] *** changed: [localhost] => (item=sed -i -e 's|<%= @apiserver_advertise_address %>|'$CLUSTER_IP'|g' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e 's|<%= @etcd_endpoint %>|'http://"$CLUSTER_IP":$ETCD_PORT'|g' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e 's|<%= @service_domain %>|'cluster.local'|g' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e 's|<%= @pod_network_cidr %>|'$POD_NETWORK_CIDR'|g' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e 's|<%= @service_network_cidr %>|'$SERVICE_NETWORK_CIDR'|g' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i '/<%- /d' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e 's|<%= @k8s_registry %>|'$K8S_REGISTRY'|g' /etc/kubernetes/kubeadm.yaml) TASK [bringup-essential-services : Update image repo in admin yaml if unified registry is used] *** [WARNING]: Module remote_tmp /tmp/.ansible-root/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually changed: [localhost] TASK [bringup-essential-services : Initializing Kubernetes master] ************* fatal: [localhost]: FAILED! 
=> {"changed": true, "cmd": ["kubeadm", "init", "--config=/etc/kubernetes/kubeadm.yaml"], "delta": "0:00:02.723002", "end": "2019-05-24 15:46:44.584079", "msg": "non-zero return code", "rc": 1, "start": "2019-05-24 15:46:41.861077", "stderr": "error execution phase preflight: [preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-proxy:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 192.168.90.60/pause:3.1: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 192.168.90.60/coredns:1.2.6: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs\n, error: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", 
"stderr_lines": ["error execution phase preflight: [preflight] Some fatal errors occurred:", "\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 192.168.90.60/kube-proxy:v1.13.5: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 192.168.90.60/pause:3.1: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 192.168.90.60/coredns:1.2.6: output: Error response from daemon: Get https://192.168.90.60/v2/: x509: cannot validate certificate for 192.168.90.60 because it doesn't contain any IP SANs", ", error: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`"], "stdout": "[init] Using Kubernetes version: v1.13.5\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or 
two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] Using Kubernetes version: v1.13.5", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"]} PLAY RECAP *********************************************************************
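The `kubeadm init` failure above is a TLS problem rather than a Kubernetes one: the private registry at 192.168.90.60 presents a certificate without an IP Subject Alternative Name, so Docker rejects every image pull. A minimal remediation sketch, assuming the registry uses a self-signed certificate that can simply be regenerated (the file names are illustrative, and the `-addext` option requires OpenSSL 1.1.1 or newer):

```shell
# Hypothetical fix sketch, not part of the playbook output above.
# Regenerate a self-signed certificate that carries the registry's IP
# (192.168.90.60, taken from the log) as a Subject Alternative Name,
# which is what the x509 validation in the error messages demands.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=192.168.90.60" \
  -addext "subjectAltName = IP:192.168.90.60"

# Confirm the SAN is present before installing the pair on the registry
# host and restarting its serving process.
openssl x509 -in registry.crt -noout -text | grep "IP Address"
```

Alternatively, the registry can be declared insecure so Docker skips certificate validation entirely; the skipped "Create daemon.json file for insecure registry" task above suggests the bootstrap overrides may support this path, though presenting a certificate with the correct IP SAN is the cleaner fix.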