Ceph dashboard isn't working

Bug #1765808 reported by Sabbir Sakib
This bug affects 1 person
Affects: kolla-ansible
Status: Invalid
Importance: Undecided
Assigned to: Ravinder Kumar

Bug Description

Hello,
      I've deployed OpenStack with Ceph as the backend storage successfully, but for some reason I can't access the Ceph dashboard.

I tried enabling the module, but no luck.

[root@oscontroller02 ~]# docker exec -it --user root ceph_mon /bin/bash
(ceph-mon)[root@oscontroller02 /]# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

Do I need to make any changes to the iptables rules?

Changed in kolla-ansible:
assignee: nobody → Ravinder Kumar (rhcayadav)
Revision history for this message
Ravinder Kumar (rhcayadav) wrote :

@Sabbir Sakib (sakibsys) It is now working fine with the Queens image (verified with centos-binary). You just need to enable the dashboard module, and it serves on port 7000. Run the commands via docker exec in the ceph_mon container.

[root@controller ~]# docker exec -it ceph_mon bash
(ceph-mon)[root@controller /]# ceph mgr module enable dashboard
(ceph-mon)[root@controller /]# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}

(ceph-mon)[root@controller /]# ceph mgr services
{
    "dashboard": "http://controller.example.com:7000/"
}

Please check the services. I have checked the Ceph portal on port 7000, and it works well.

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

@rhcayadav I have enabled the dashboard, but I can't telnet to port 7000, and I'm also getting "This site can't be reached" in the browser.

Do I need to enable/install the dashboard in the globals.yml file?

(ceph-mon)[root@controller01 /]# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}
(ceph-mon)[root@oscontroller01 /]# ceph mgr services
{
    "dashboard": "http://controller01.example.pvt:7000/"
}

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

Something like the line below, added to globals.yml:

enable_ceph_dashboard: "{{ enable_ceph | bool }}"

Revision history for this message
Ravinder Kumar (rhcayadav) wrote : Re: [Bug 1765808] Re: Ceph dashboard isn't working

Please try the latest Queens image; I have verified that image, but you need to enable the dashboard. I am looking to submit a patch to enable the Ceph dashboard; for now it is only in globals.yml, nothing in the code yet. Please tell me your setup specifications, version, etc.

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

I'm using kolla-ansible v6 with the latest Queens docker containers. Can you please share the changes you made, or the PR link? I'll double-check my settings manually.

[root@osdeploy01 kolla]# pip list | grep -i kolla-ansible
kolla-ansible 6.0.0

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

Here is the globals.yml file.

config_strategy: "COPY_ALWAYS"
kolla_base_distro: "centos"
kolla_install_type: "binary"
openstack_release: "queens"
node_custom_config: "/etc/kolla/config"
openstack_region_name: "RegionOne"
multiple_regions_names:
    - "{{ openstack_region_name }}"
    - "RegionTwo"
kolla_internal_vip_address: "192.168.1.100"
kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
kolla_external_fqdn: "{{ kolla_external_vip_address }}"
network_interface: "bond0"
api_interface: "{{ network_interface }}"
storage_interface: "{{ network_interface }}"
cluster_interface: "{{ network_interface }}"
tunnel_interface: "{{ network_interface }}"
dns_interface: "{{ network_interface }}"
neutron_external_interface: "bond1"
neutron_plugin_agent: "openvswitch"
keepalived_virtual_router_id: "51"
openstack_logging_debug: "False"
nova_console: "novnc"
enable_keystone: "yes"
enable_aodh: "yes"
enable_central_logging: "yes"
enable_ceph: "yes"
enable_ceph_mds: "yes"
enable_ceph_rgw: "yes"
enable_cinder: "yes"
enable_collectd: "yes"
enable_fluentd: "yes"
enable_haproxy: "yes"
enable_heat: "yes"
enable_horizon: "yes"
enable_horizon_magnum: "{{ enable_magnum | bool }}"
enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
enable_horizon_trove: "{{ enable_trove | bool }}"
enable_magnum: "yes"
enable_neutron_provider_networks: "yes"
enable_neutron_lbaas: "yes"
enable_neutron_agent_ha: "yes"
enable_trove: "yes"
ceph_pool_type: "replicated"
ceph_pool_pg_num: 512
ceph_pool_pgp_num: 512
keystone_token_provider: 'fernet'
glance_backend_file: "no"
glance_backend_ceph: "yes"
cinder_backend_ceph: "{{ enable_ceph }}"
cinder_backup_driver: "ceph"
nova_backend_ceph: "{{ enable_ceph }}"
nova_compute_virt_type: "kvm"
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to kolla-ansible (master)

Fix proposed to branch: master
Review: https://review.openstack.org/573083

Changed in kolla-ansible:
status: New → In Progress
Revision history for this message
Sabbir Sakib (sakibsys) wrote :

Still not working. I guess for multinode with HA, you may need to add/allow the dashboard in HAProxy.
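For reference, if the dashboard did need to be fronted by HAProxy, a listener along these lines would match the usual kolla-ansible pattern. This is only a hypothetical sketch: the `ceph_dashboard` listener name, VIP, and backend addresses are placeholders (and, as it turns out later in the thread, fronting the mgr with HAProxy is not actually required).

```
# Hypothetical haproxy.cfg fragment; VIP and backend addresses are placeholders.
listen ceph_dashboard
    bind 192.168.1.100:7000
    mode http
    # Only the active ceph-mgr listens on 7000; the health checks mark the
    # standby controllers down and route all traffic to the active one.
    server controller01 192.168.1.101:7000 check inter 2000 rise 2 fall 5
    server controller02 192.168.1.102:7000 check inter 2000 rise 2 fall 5
    server controller03 192.168.1.103:7000 check inter 2000 rise 2 fall 5
```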

Revision history for this message
Ravinder Kumar (rhcayadav) wrote : Re: [Bug 1765808] Re: Ceph dashboard isn't working

OK, I will test it in HA and will update the patch if needed.

Revision history for this message
Ravinder Kumar (rhcayadav) wrote :

@sabbir Hey, I have checked this patch in an HA setup, and it works well. We don't need HAProxy for ceph_mgr because its HA is already handled within Ceph. Suggestions welcome.

So you can access the Ceph dashboard: ceph-mgr will be running (active) on one of the controllers, and the remaining ones will be in standby mode.

(ceph-mon)[root@orioncn1 /]# ceph mgr services
{
    "dashboard": "http://orioncn2:7000/"
}
(ceph-mon)[root@orioncn1 /]# curl http://orioncn2:7000 | wc -l
8755
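Since the dashboard URL moves with whichever mgr is active, anything scripted against it should read the URL from `ceph mgr services` rather than hard-coding a controller. A minimal sketch (the `dashboard_url` helper is hypothetical; it just parses JSON output like the above):

```python
import json

def dashboard_url(mgr_services_json):
    """Extract the dashboard URL from `ceph mgr services` JSON output.

    Returns None if the dashboard module is not currently serving.
    """
    return json.loads(mgr_services_json).get("dashboard")

# e.g. feed it the output of: docker exec ceph_mon ceph mgr services
print(dashboard_url('{"dashboard": "http://orioncn2:7000/"}'))
# → http://orioncn2:7000/
```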

My testing environment has 3 controllers (orioncn1, orioncn2, orioncn3), 2 compute nodes (orioncm1, orioncm2), and 5 OSDs spread across both the controllers and the compute nodes.

Here is the output:

root@orioncn1:~# docker exec -it ceph_mgr bash
(ceph-mgr)[root@orioncn1 /]# ceph mgr module ls
[errno 2] error connecting to the cluster
(ceph-mgr)[root@orioncn1 /]# exit
exit
root@orioncn1:~# docker exec -it ceph_mon bash
(ceph-mon)[root@orioncn1 /]# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}

root@orioncn1:~# docker exec -it ceph_mon bash
(ceph-mon)[root@orioncn1 /]# ceph -s
  cluster:
    id: 774f4f6b-8be3-4955-a9c5-c339d5d8c007
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum 192.168.0.8,192.168.0.12,192.168.0.13
    mgr: orioncn2(active), standbys: orioncn3, orioncn1
    osd: 5 osds: 5 up, 5 in

  data:
    pools: 5 pools, 320 pgs
    objects: 0 objects, 0 bytes
    usage: 537 MB used, 134 GB / 134 GB avail
    pgs: 320 active+clean

(ceph-mon)[root@orioncn1 /]# ceph mgr dump
{
    "epoch": 42,
    "active_gid": 4127,
    "active_name": "orioncn2",
    "active_addr": "192.168.0.12:6804/7",
    "available": true,
    "standbys": [
        {
            "gid": 22067,
            "name": "orioncn3",
            "available_modules": [
                "balancer",
                "dashboard",
                "influx",
                "localpool",
                "prometheus",
                "restful",
                "selftest",
                "status",
                "zabbix"
            ]
        },
        {
            "gid": 22359,
            "name": "orioncn1",
            "available_modules": [
                "balancer",
                "dashboard",
                "influx",
                "localpool",
                "prometheus",
                "restful",
                "selftest",
                "status",
                "zabbix"
            ]
        }
    ],
    "modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "available_modules": [
        "balancer",
        "dashboard",
        "influx",
        "localpool",
        "prometheus",
        "restful",
        "selftest",
        "status",
        "zabbix"
    ],
    "services": {
        "dashboard": "http://orioncn2:7000/"
    }
}
(ceph-mon)[root@orioncn1 /]# ceph mgr services
{
    "dashboard": "http://orioncn2:7000/"
}
(ceph-mon)[root@orioncn1 /]# curl http://orioncn2:7000 | wc -l
8755

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

Am I doing something wrong?

(ceph-mon)[root@controller01 /]# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}
(ceph-mon)[root@controller01 /]# ceph mgr services
{
    "dashboard": "http://controller01.example.pvt:7000/"
}

[root@controller01 ~]# curl http://controller01.example.pvt:7000
curl: (7) Failed connect to controller01.example.pvt:7000; Connection refused

[root@oscontroller01 ~]# telnet controller01.example.pvt
Trying 192.168.1.102..
telnet: connect to address 192.168.1.102: Connection refused
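"Connection refused" here means nothing is listening on port 7000 on that host, even though the module shows as enabled (on Luminous-era releases, the mgr not having been restarted after enabling, or the dashboard's bind address, are common culprits). For repeated checks, the telnet/curl probes above can be scripted; this is a hypothetical stdlib-only helper, not part of any Ceph or kolla tooling:

```python
import socket

def dashboard_reachable(host, port=7000, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    Equivalent to the telnet/curl probes above, but scriptable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return False
```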

Revision history for this message
Ravinder Kumar (rhcayadav) wrote :

As per my understanding, you should check the creation date of your Ceph-related images on the controller and compute nodes (in my case it is recent: 43 hours ago); if they are older, you should update them and redeploy your Ceph cluster with the new images.

root@orioncn3:~# docker images |grep ceph
REPOSITORY TAG IMAGE ID CREATED SIZE
kolla/ubuntu-binary-ceph-mgr queens c6eafd20e106 43 hours ago 493.9 MB
kolla/ubuntu-binary-ceph-mon queens 163e02834cea 43 hours ago 493.9 MB
kolla/ubuntu-binary-ceph-rgw queens b4d65646c245 43 hours ago 493.9 MB

Revision history for this message
Ravinder Kumar (rhcayadav) wrote :

Or please check with <IP>:7000, because maybe your DNS is not working properly.

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

Nothing is wrong with the DNS, and I've checked with IP:7000 as well, with the same result.

Looks like you're using the Ubuntu binary images; in my case I'm using CentOS binary.

Also, how do you upgrade docker images using kolla-ansible?

I basically do "kolla-ansible pull" to pull the latest images and "kolla-ansible deploy" to deploy and update the existing images.

Revision history for this message
Ravinder Kumar (rhcayadav) wrote :

I also checked this on CentOS; it works well on the latest Queens.

For the image upgrade:

In my opinion, if you are on a test environment, just remove the Ceph containers and volumes, then the Ceph images, and then run parted again on the OSD disks so they are reformatted.
Remove the Ceph state from all nodes with `rm -rvf /etc/kolla/ceph*` and `rm -rvf /var/lib/ceph/osd/*`.

Then docker pull the new images and run deploy again.

After successfully upgrading Ceph, if your OpenStack services do not work, restart all OpenStack services; that will fix it. I have verified this method.

Note: I never did this with kolla-ansible upgrade or in a production environment, so if you are in production, do it at your own risk, because the above method guarantees 100% data loss.

Revision history for this message
Sabbir Sakib (sakibsys) wrote :

Thanks, Ravinder. The environment is semi-prod and I can't remove any container at this moment. I will create another ticket with the Kolla community asking how to upgrade containers without removing existing ones. I guess you can close this ticket, since the dashboard is working for you.

Revision history for this message
Surya Prakash Singh (confisurya) wrote :
Changed in kolla-ansible:
status: In Progress → Invalid
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on kolla-ansible (master)

Change abandoned by Surya Prakash (spsurya) (<email address hidden>) on branch: master
Review: https://review.openstack.org/573083
Reason: https://review.openstack.org/#/c/554405/ pushed for fix earlier
