Error status of cluster after deployment fails and reinstall

Bug #1486998 reported by Veronica Krayneva
This bug affects 2 people
Affects: Fuel for OpenStack
Status: Invalid
Importance: High
Assigned to: Fuel Python (Deprecated)
Milestone: 7.0

Bug Description

Scenario:
1. Create a cluster
2. Add 3 nodes with controller roles
3. Add a node with compute and cinder roles
4. Add a node with mongo role
5. Provision the primary controller node
6. For the primary controller node, specify an inappropriate task to be executed so that the deployment fails (for example, fuel node --node <id> --tasks hiera), then start deployment of the node (fuel node --node-id <id> --deploy); see the command sketch after this list
7. Wait until deployment fails
8. Reinstall the cluster
9. Run network verification
10. Run OSTF
11. Verify that Ceilometer API service is up and running
12. Check status of cluster
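
For reference, a minimal sketch of the commands behind steps 5, 6 and 12, assuming <id> is the primary controller's node ID (the reinstallation in step 8 uses the new 7.0 reinstallation feature and is not shown here):

  # step 5: provision the primary controller only
  $ fuel node --node-id <id> --provision

  # step 6: point the node at an inappropriate task, then start its deployment
  # (the deployment is expected to fail)
  $ fuel node --node <id> --tasks hiera
  $ fuel node --node-id <id> --deploy

  # step 12: check the cluster status
  $ fuel env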

Expected result:
The cluster status should not be "error" after the reinstall

Actual result:
At step 12:
[root@nailgun ~]# fuel env
id | status | name | mode | release_id | pending_release_id
---|--------|-----------------------|------------|------------|-------------------
1 | error | NodeReinstallationEnv | ha_compact | 2 | None

VERSION:
  feature_groups:
    - mirantis
  production: "docker"
  release: "7.0"
  openstack_version: "2015.1.0-7.0"
  api: "1.0"
  build_number: "187"
  build_id: "2015-08-18_03-05-20"
  nailgun_sha: "4710801a2f4a6d61d652f8f1e64215d9dde37d2e"
  python-fuelclient_sha: "4c74a60aa60c06c136d9197c7d09fa4f8c8e2863"
  fuel-agent_sha: "57145b1d8804389304cd04322ba0fb3dc9d30327"
  fuel-nailgun-agent_sha: "e01693992d7a0304d926b922b43f3b747c35964c"
  astute_sha: "e24ca066bf6160bc1e419aaa5d486cad1aaa937d"
  fuel-library_sha: "0062e69db17f8a63f85996039bdefa87aea498e1"
  fuel-ostf_sha: "17786b86b78e5b66d2b1c15500186648df10c63d"
  fuelmain_sha: "c9dad194e82a60bf33060eae635fff867116a9ce"

Revision history for this message
Veronica Krayneva (vkrayneva) wrote :
Changed in fuel:
importance: Undecided → High
milestone: none → 7.0
summary: - Error status of cluster after deployment fails and reistall
+ Error status of cluster after deployment fails and reinstall
Changed in fuel:
assignee: nobody → Fuel Python Team (fuel-python)
Revision history for this message
Ihor Kalnytskyi (ikalnytskyi) wrote :

@Veronica,

Could you please clarify what you mean by "Reinstall the cluster"? Do you mean our new reinstallation feature that was introduced in 7.0?

Revision history for this message
Veronica Krayneva (vkrayneva) wrote :

Yes, it is the new feature.

Revision history for this message
Ihor Kalnytskyi (ikalnytskyi) wrote :

AFAIU, the feature relies on two consecutive commands:

  $ fuel node --node-id 2 --provision
  $ fuel node --node-id 2 --deploy

These commands are intended for manual hacking, and they have nothing to do with the cluster status. They just run either provisioning or deployment; the cluster status stays the same. And that makes sense, since you can exclude some tasks from the deployment run by passing additional arguments to --deploy, so assuming that the cluster is operational after that would be incorrect.
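
As an illustration of the point above, a rough sketch of a manual per-node run followed by a status check (the node ID is a placeholder); the status column reflects whatever state the environment was already in, not the outcome of the per-node command:

  # deploy a single node manually; the environment status is not recalculated
  $ fuel node --node-id 2 --deploy

  # the status shown here is unchanged by the per-node command above
  $ fuel env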

Changed in fuel:
status: New → Confirmed
Revision history for this message
Ihor Kalnytskyi (ikalnytskyi) wrote :

Moving it to Invalid, since this is our usual workflow for the manual `--provision` & `--deploy` commands: we don't manage the cluster status when they are used. There could be a lot of discussion about how we should manage it if we want to, but that must be done as a blueprint with a spec, not as a bug.

Changed in fuel:
status: Confirmed → Invalid