FYI, I'm attempting, as suggested, to use Nautilus via Eoan. I've learned that if you have IPv6 enabled in Disco's ceph.conf, none of the OSDs will start on Eoan/Nautilus until you add ms_bind_ipv4 = false to ceph.conf. The dashboard also remains broken on Eoan with Ceph Nautilus, at least as far as a simple 'do-release-upgrade --devel' provides; I wonder whether the dashboard was really tested before the announced 'Fix Released' was posted for Eoan.

I don't know all of the causes of the dashboard breakage, but one of them is that systemd appears to create manager services both for the hostname and for hostname.domainname.com (or whatever), so even "ceph mgr module enable dashboard --force" fails to create a manager with a working dashboard instance.

Here we see a small example of why our Linux world faces problems in acceptance. It's one thing for a release to offer a new feature that's somewhat broken. It's another thing entirely for a major user-facing feature (the dashboard) of an enterprise/core system (fault-tolerant storage) to ship in the next release obviously broken and never tested beforehand. You want to trust that this doesn't happen, and not be nervous when doing release upgrades. You can understand how that could happen in an entirely community-supported distro, but I've seen it in both RHEL (viz: FreeIPA) and Ubuntu/Ceph.

I appreciate the suggested 'solution' of moving to the next development release, due out in four months. But that not only fails to restore the desired module; it also brings the whole cluster offline until an undocumented flag gets set (ms_bind_ipv4 isn't documented anywhere I could find; ms_bind_ipv6 is).

I'm sharing this experience not to complain as such, but for information. Ubuntu ships so many notifications about available security and other upgrades at every login that one feels they must be ready for prime time or Canonical wouldn't have pushed them out. Then a big stopper like this happens.
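For reference, the OSD workaround amounts to one added line in ceph.conf. A minimal sketch, assuming an IPv6-only cluster; the surrounding settings shown here are illustrative, only the ms_bind_ipv4 line is the actual fix described above:

```ini
# /etc/ceph/ceph.conf (fragment)
[global]
    # IPv6 messenger binding carried over from the Disco-era config:
    ms_bind_ipv6 = true
    # Undocumented flag required on Eoan/Nautilus before the OSDs will start:
    ms_bind_ipv4 = false
```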
On 7/12/19 8:33 AM, James Page wrote:
> Sorry wrong PPA:
>
> https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/3534/+packages
>
> ** Description changed:
>
> - If Ubuntu is really committed to ceph as I think I've been reading:
> - Notice the ceph dashboard went entirely broken in a major regression of
> - the disco upgrade. It won't load at all in 13.2.4+dfsg1-0ubuntu2.
> + [Impact]
> + The ceph-mgr daemon is unable to load additional modules due to a new check in cython >= 0.29. This limits the function of the manager.
> +
> + [Test Case]
> + Deploy ceph.
> + Check /var/log/ceph/ceph-mgr.`hostname`.log
> + Errors about loading the rados module in subprocesses will be seen.
> +
> + [Regression Potential]
> + The fix from upstream actually just works around this issue by overriding the check that cython does; the code works in a subprocess when loaded multiple times. Regression potential is low; cython may produce a longer-term fix, which would mean we can drop this patch.
> +
> + [Original Bug Report]
> + If Ubuntu is really committed to ceph as I think I've been reading: Notice the ceph dashboard went entirely broken in a major regression of the disco upgrade. It won't load at all in 13.2.4+dfsg1-0ubuntu2.
>
> The detail is that ceph-mgr (and lots of ceph) relied on a non-feature in
> cython that went away in cython 0.29, to do with sub-interpreters. The
> ceph folks responded with a hack/workaround to avoid the bug being
> noticed, and a requirement of the package for an earlier version of
> cython. This was done some weeks and months ago. Actually fixing the
> problem is a major project the ceph maintainers are struggling to
> engage, perhaps waiting for later versions of cython to provide a
> different way forward.
>
> However, as of today, on disco this error message remains:
>
> Module 'dashboard' has failed dependency: Interpreter change detected -
> this module can only be loaded into one interpreter per process.
>
> The Ceph primary development platform is Debian, on which the workaround
> has been available for some time.
>
> However, in our Ubuntu case, a major feature of a core package (the web
> health/monitoring/config interface of a distributed file system) was
> allowed to both ship broken and remain so for a long time, even through
> today.
>
> I urge quick attention to the necessary backports.
> https://github.com/ceph/ceph/pull/25585
> http://tracker.ceph.com/issues/38788
> http://tracker.ceph.com/issues/37472
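The [Test Case] in the quoted description boils down to grepping the manager log for the interpreter error. A minimal sketch: the log path follows the bug's test case, and here the grep is demonstrated against an inline copy of the quoted error string rather than a live cluster:

```shell
# The error message quoted in the bug, verbatim:
err="Module 'dashboard' has failed dependency: Interpreter change detected - this module can only be loaded into one interpreter per process."

# Demonstrate the check against the sample line; on a real node you would
# run the same grep over /var/log/ceph/ceph-mgr.$(hostname).log instead.
printf '%s\n' "$err" | grep -c "Interpreter change detected"
```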