[octane] Control-plane upgrading fails when fuel doesn't have cached service tenant
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Committed | Medium | Anastasia Balobashina | 8.0-updates
Bug Description
Detailed bug description:
During an upgrade from MOS 6.0 to MOS 8.0 via Octane (stable/8.0), I ran into the following bug.
While performing the control-plane upgrade via Octane (octane upgrade-control <orig_env_id> <seed_env_id>), the command failed with a 503 error received from the original environment.
Steps to reproduce:
1) Start with a MOS 6.0 environment
2) Upgrade the master node (Fuel 6.0) to 6.1, and then to 7.0
3) Back up the environment settings, following the documented process
4) Prepare a new master node (Fuel 8.0)
5) Restore the backup from step 3
6) Upgrade the restored environment via Octane
7) Upgrade the primary controller via Octane
8) Upgrade the DB via Octane
9) Upgrade the control plane via Octane (see the command sketch after this list)
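For reference, a minimal sketch of the Octane commands behind steps 6–9, assuming the stable/8.0 CLI; the environment and node IDs are placeholders, and the MOS 8.0 upgrade guide remains the authoritative source for the exact sequence:

```
# Run on the Fuel 8.0 master node; all IDs are placeholders.
octane upgrade-env <orig_env_id>                        # step 6: create the seed environment
octane upgrade-node --isolated <seed_env_id> <node_id>  # step 7: upgrade the primary controller
octane upgrade-db <orig_env_id> <seed_env_id>           # step 8: upgrade the databases
octane upgrade-control <orig_env_id> <seed_env_id>      # step 9: upgrade the control plane (fails with 503 here)
```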
Expected results:
The control-plane upgrade completes successfully.
Actual result:
Octane tries to fetch data from the original environment and fails with the 503 error.
Reproducibility:
Clean the /tmp directory on the Fuel master node before the control-plane upgrade.
Workaround:
Turn off maintenance mode for MySQL in HAProxy on the original environment (see the sketch below).
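A minimal sketch of that workaround through the HAProxy admin socket on a controller of the original environment; the backend name `mysqld`, the server name `node-1`, and the socket path are assumptions based on a typical Fuel deployment and should be checked against the local haproxy.cfg:

```
# On a controller of the original environment (names and paths are assumptions):
echo "show stat" | socat stdio /var/lib/haproxy/stats                     # list backends/servers and their states
echo "enable server mysqld/node-1" | socat stdio /var/lib/haproxy/stats   # take the MySQL server out of maintenance mode
```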
Description of the environment:
Versions of components: MOS 8.0, Octane (stable/8.0)
Network model: Environment uses network bonding
Related projects installed: Octane
Additional information:
It happens in octane/
In function "update_
tenant_id = env_util.
Function cache_service_
Changed in fuel:
milestone: none → 8.0-updates
assignee: nobody → Fuel Octane (fuel-octane-team)
importance: Undecided → High
status: New → Confirmed
tags: added: team-upgrades
Changed in fuel:
importance: High → Medium
summary:
- [octane] Control-plane upgradiung fails when fuel has not cached service tenant
+ [octane] Control-plane upgrading fails when fuel doesn't have cached service tenant
Changed in fuel:
assignee: Fuel Octane (fuel-octane-team) → Anastasiya (atolochkova)
I believe that should not be done at that point. We could devise an alternative method to get the service tenant ID, for example, querying the cluster's database directly (a sketch follows).
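A minimal sketch of such a direct query, run on a controller of the original environment; the database name `keystone`, the table name `project`, and the tenant name `services` are assumptions about a typical MOS deployment and should be verified against the actual Keystone schema, as should local root access to MySQL:

```
# On a controller of the original environment (names are assumptions):
mysql keystone -NBe "SELECT id FROM project WHERE name = 'services';"
```

Because this reads the database locally on the controller, it would not depend on the Keystone API endpoint behind HAProxy that currently returns the 503.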