I was able to reproduce this issue in my lab by luck, and my workaround is as follows:
1) If the message is, for example, "Waiting on juju-4c2163-3-lxd-3 to finish upgrading", reboot that particular LXD container, i.e. the machine hosting ceph-mon/<node-number>:
# in my case, this particular node is not the current leader, but it was the leader before the upgrade
$ juju ssh ceph-mon/<node-number>
$ reboot
2) After the reboot, `juju status` will show that node in a failed state, but the juju logs showed it had already recovered by itself, so mark the affected ceph-mon unit as resolved.
$ juju resolve ceph-mon/<node-number>
After that, all ceph-mon nodes are ready and clustered.