Activity log for bug #1584656

Date Who What changed Old value New value Message
2016-05-23 08:53:34 Andrey Volochay bug added bug
2016-05-23 09:54:43 Andrey Volochay description changed

Old value:

Detailed bug description: During an upgrade from MOS 6.0 to MOS 8.0 via Octane (stable/8.0), I encountered the following bug. While upgrading the second and third controllers via Octane (octane upgrade-node <target_node_id> <seed_env_id>), deployment failed at the "installing openstack" step with:
Could not evaluate: Primitive 'p_dns' was not found in CIB!
Steps to reproduce:
1) Start with a MOS 6.0 environment
2) Upgrade the master node (Fuel 6.0) to 6.1, and then to 7.0
3) Back up the environment settings, following the documented process
4) Prepare a new master node (Fuel 8.0)
5) Restore the backup from step 3
6) Upgrade the restored environment via Octane
7) Upgrade the primary controller via Octane
8) Upgrade the DB via Octane
9) Upgrade the control plane via Octane
10) Upgrade the second and third controllers
Expected result: The upgrade of the second and third controllers completes successfully.
Actual result: Puppet apply returns an error, which in turn breaks the node upgrade.
Workaround: Add records for the second and third controllers to corosync.conf on the primary controller before upgrading them.
Description of the environment:
Versions of components: MOS 8.0, Octane (stable/8.0)
Network model: the environment uses network bonding
Related projects installed: Octane

New value: the same description, with two sections added:

Reproducibility: Occurs when the controllers are upgraded one by one, but not when both are upgraded at once.
Additional information: It happens in octane/commands/upgrade_node.py:

    if isolated or len(nodes) == 1:
        env_util.deploy_nodes(env, nodes)
    else:
        env_util.deploy_changes(env, nodes)

It is unsafe to drive the controller upgrade by the count of nodes being upgraded; the node's role should be the criterion.
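The workaround in the entry above can be sketched as nodelist entries in corosync.conf on the primary controller. This assumes corosync 2.x configuration syntax; the addresses and node IDs below are illustrative, not taken from the report:

```
# Illustrative only: register the controllers that are about to be upgraded
# in the nodelist on the primary controller before starting their upgrade.
nodelist {
  node {
    ring0_addr: 192.168.0.4   # second controller (example address)
    nodeid: 2
  }
  node {
    ring0_addr: 192.168.0.5   # third controller (example address)
    nodeid: 3
  }
}
```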
2016-05-23 10:07:21 Andrey Volochay description changed

Old value: the description as submitted at 2016-05-23 09:54:43 (above), including the Reproducibility and Additional information sections.

New value: the same description, with two presentational changes: the final reproduction step, "Upgrade second and third controllers", is split into two separate steps ("Upgrade the second controller", then "Upgrade the third controller"), and the code snippet in the Additional information section is indented as in the source:

    if isolated or len(nodes) == 1:
        env_util.deploy_nodes(env, nodes)
    else:
        env_util.deploy_changes(env, nodes)
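The fix the reporter suggests, deciding between deploy_nodes and deploy_changes by node role rather than by node count, can be sketched as follows. This is a minimal illustration, not the actual fuel-octane code: the node dictionaries and the helper name pick_deploy_strategy are hypothetical stand-ins.

```python
# Hypothetical sketch of the role-based criterion suggested in the report:
# decide how to deploy by whether a controller is in the batch, not by how
# many nodes are in it. Node dicts here stand in for real fuel-octane nodes.

def pick_deploy_strategy(nodes, isolated=False):
    """Return 'deploy_nodes' when nodes must be deployed individually.

    Controllers are deployed individually so that Pacemaker primitives
    such as p_dns are registered in the CIB before the next controller
    joins the cluster.
    """
    has_controller = any("controller" in n.get("roles", []) for n in nodes)
    if isolated or has_controller:
        return "deploy_nodes"
    return "deploy_changes"


# A controller in the batch forces individual deployment, regardless of
# batch size; a compute-only batch can go through deploy_changes.
batch = [
    {"id": 2, "roles": ["controller"]},
    {"id": 5, "roles": ["compute"]},
]
print(pick_deploy_strategy(batch))                               # deploy_nodes
print(pick_deploy_strategy([{"id": 5, "roles": ["compute"]}]))   # deploy_changes
```

Under this criterion, upgrading the second and third controllers one by one would take the same code path as upgrading them together, which is what the len(nodes) == 1 check in upgrade_node.py fails to guarantee.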
2016-05-23 11:49:05 Oleksiy Molchanov fuel: milestone 8.0-updates
2016-05-23 11:49:22 Oleksiy Molchanov fuel: assignee Fuel Octane (fuel-octane-team)
2016-05-23 11:49:24 Oleksiy Molchanov fuel: importance Undecided High
2016-05-23 11:49:28 Oleksiy Molchanov fuel: status New Confirmed
2016-05-23 11:49:37 Oleksiy Molchanov tags team-upgrades
2016-05-25 15:39:22 Oleg S. Gelbukh fuel: importance High Medium
2016-05-26 14:59:49 OpenStack Infra fuel: status Confirmed In Progress
2016-05-26 14:59:49 OpenStack Infra fuel: assignee Fuel Octane (fuel-octane-team) Oleg S. Gelbukh (gelbuhos)
2016-05-31 11:12:26 Oleg S. Gelbukh summary [octane] Upgrade second and third controllers fails at "installing openstack" part [octane] Upgrade second and third controllers separately fails at "installing openstack" part
2016-06-22 08:25:09 OpenStack Infra fuel: status In Progress Fix Committed
2016-06-22 18:49:48 OpenStack Infra tags team-upgrades in-stable-mitaka team-upgrades