Minor update workflow doesn't work for custom roles

Bug #1674767 reported by Steven Hardy
Affects: tripleo
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/tasks/yum_update.sh#L73-L89

The minor update workflow assumes that any role not running pacemaker is not updated, which makes it impossible to roll out minor updates to custom roles. We should revisit this so that minor updates can be applied to customized environments.
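The skip logic described above can be sketched roughly as follows. This is a minimal illustration, not the real script: the function name and the "active"/"inactive" strings are placeholders, not identifiers from yum_update.sh.

```shell
# Hedged sketch of the gate in yum_update.sh: only nodes where pacemaker
# is active receive the in-place yum update; everything else is skipped.
# should_yum_update and its argument are illustrative names only.
should_yum_update() {
    local pacemaker_state="$1"
    [ "$pacemaker_state" = "active" ]
}

if should_yum_update "inactive"; then
    echo "running in-place package update"
else
    # This is the branch every custom (non-pacemaker) role hits,
    # so such roles never receive the minor update.
    echo "Upgrading other packages is handled by config management tooling"
fi
```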

It's unclear what "Upgrading other packages is handled by config management tooling" means, but perhaps the intention is to handle the upgrades via puppet?

Either way we should clarify the best approach and either adjust the script or document how this should work with custom roles.

Tags: upgrade
Steven Hardy (shardy)
Changed in tripleo:
milestone: none → pike-1
status: New → Triaged
importance: Undecided → High
Revision history for this message
Marios Andreou (marios-b) wrote :

@shardy: wrt "Upgrading other packages is handled by config management tooling" — yes, it means puppet, via the tripleo-packages class: https://github.com/openstack/puppet-tripleo/blob/master/manifests/packages.pp#L54 and https://github.com/openstack/puppet-tripleo/blob/master/manifests/packages/upgrades.pp#L38

Revision history for this message
Steven Hardy (shardy) wrote :

@marios: ack. So do we expect folks to set EnablePackageInstall: true for all deployments that enable custom roles, or should we perhaps add a boolean to roles_data that enables the normal yum update flow?

I suspect we should at least optionally enable the latter, because it is closer to the old behavior for deployments that simply move services off the Controller onto other roles.

Things may not be that simple when considering composable HA, though: I'm not sure how we handle a rolling update of a cluster of nodes where pacemaker remote is running (e.g. do we need to somehow ensure the Controller role gets updated first, to put the cluster into maintenance mode?).
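One possible shape for the roles_data boolean suggested above is sketched below. The update_packages key is purely hypothetical — it is not an existing tripleo parameter, and only illustrates how a role could opt in to the normal yum update flow.

```yaml
# Hypothetical roles_data fragment; update_packages is NOT a real
# tripleo key, it just illustrates the proposed opt-in boolean.
- name: CustomCompute
  update_packages: true
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
```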

Changed in tripleo:
milestone: pike-1 → pike-2 → pike-3 → pike-rc1 → pike-rc2 → queens-1 → queens-2 → queens-3 → queens-rc1 → rocky-1 → rocky-2 → rocky-3 → rocky-rc1 → stein-1 → stein-2
tags: added: upgrade
Revision history for this message
Emilien Macchi (emilienm) wrote : Cleanup EOL bug report

This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix it.
After this much time it is unlikely that the circumstances which led to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: <RELEASE_NAME>"
  Only still supported release names are valid (FUTURE, PIKE, QUEENS, ROCKY, STEIN).
  Valid example: CONFIRMED FOR: FUTURE

Changed in tripleo:
importance: High → Undecided
status: Triaged → Expired