add-disk action results in failure to import clear_unit_upgrading
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | Invalid | Low | Unassigned |
Bug Description
Charm cs:ceph-osd-275, xenial.
I tried to add some disks after needing to zap-disk them.
Running the action failed immediately, with the following traceback in the unit log:
2019-02-07 04:54:37 DEBUG juju.worker.uniter agent.go:17 [AGENT-STATUS] executing: running action add-disk
2019-02-07 04:54:37 DEBUG add-disk Traceback (most recent call last):
2019-02-07 04:54:37 DEBUG add-disk File "/var/lib/
2019-02-07 04:54:37 DEBUG add-disk import ceph_hooks
2019-02-07 04:54:37 DEBUG add-disk File "hooks/
2019-02-07 04:54:37 DEBUG add-disk from charmhelpers.
2019-02-07 04:54:37 DEBUG add-disk ImportError: cannot import name 'clear_unit_upgrading'
2019-02-07 04:54:37 DEBUG juju.worker.
2019-02-07 04:54:37 DEBUG juju.machinelock machinelock.go:180 machine lock released for uniter (run action 3df6a36c-
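The failure mode in the traceback can be reproduced in isolation: importing a name that an (older) module does not define raises exactly this kind of `ImportError`. A self-contained sketch, with a purely illustrative module name:

```python
# Simulate importing a helper from a module revision that predates it;
# the "from ... import ..." raises ImportError: cannot import name ...
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # An old module revision without the expected helper function.
    Path(tmp, "old_utils.py").write_text("VERSION = 'old'\n")
    sys.path.insert(0, tmp)
    try:
        from old_utils import clear_unit_upgrading  # noqa: F401
        outcome = "imported"
    except ImportError:
        outcome = "ImportError"
    finally:
        sys.path.remove(tmp)

print(outcome)  # ImportError: the old revision lacks the name
```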
I can see from the commit for this charm revision:
$ git status
HEAD detached at e369a09
nothing to commit, working tree clean
$ fgrep -r clear_unit_upgrading *
Binary file hooks/__pycache__/ceph_hooks.cpython-36.pyc matches
hooks/ceph_hooks.py: clear_unit_upgrading,
hooks/ceph_hooks.py: clear_unit_upgrading()
hooks/charmhelpers/contrib/openstack/utils.py:def clear_unit_upgrading():
hooks/charmhelpers/contrib/openstack/utils.py: clear_unit_upgrading()
that the in-tree version of charm-helpers does have the required function.
Is it possible that you have a reactive charm with an older charmhelpers revision that isn't using a virtualenv (which is the default in later reactive/layered build versions)?
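That hypothesis can be sketched in miniature (the module and directory names below are hypothetical): if an older copy of a package sits earlier on `sys.path` than the charm's in-tree copy, as can happen on a non-virtualenv reactive build, the stale copy shadows the current one and the new helper is missing even though it exists in the charm tree.

```python
# Demonstrate sys.path shadowing: a stale copy of a module earlier on
# the path wins over a current copy that does define the helper.
import importlib
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    stale = Path(tmp, "stale"); stale.mkdir()
    fresh = Path(tmp, "fresh"); fresh.mkdir()
    # Older revision: helper absent.
    (stale / "utils_demo.py").write_text("VERSION = 'old'\n")
    # Current in-tree revision: helper present.
    (fresh / "utils_demo.py").write_text(
        "def clear_unit_upgrading():\n    pass\n")

    # The stale directory comes first, as it would on a mis-built unit.
    sys.path[:0] = [str(stale), str(fresh)]
    try:
        sys.modules.pop("utils_demo", None)
        mod = importlib.import_module("utils_demo")
        has_helper = hasattr(mod, "clear_unit_upgrading")
    finally:
        del sys.path[:2]

print(has_helper)  # False: the stale copy shadows the current one
```

Printing `mod.__file__` on the failing unit would show which copy of charmhelpers actually won the import.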