Ceph placement group numbers are not set after deployment

Bug #1504489 reported by Pawel Stefanski
This bug affects 1 person
Affects: Fuel for OpenStack
Status: In Progress
Importance: High
Assigned to: Kyrylo Galanov
Milestone: 8.0

Bug Description

On Ceph deployment, pools are created with an incorrect number of placement groups.

Fuel version:
api: '1.0'
astute_sha: 6c5b73f93e24cc781c809db9159927655ced5012
auth_required: true
build_id: '301'
build_number: '301'
feature_groups:
- experimental
- mirantis
fuel-agent_sha: 50e90af6e3d560e9085ff71d2950cfbcca91af67
fuel-library_sha: 5d50055aeca1dd0dc53b43825dc4c8f7780be9dd
fuel-nailgun-agent_sha: d7027952870a35db8dc52f185bb1158cdd3d1ebd
fuel-ostf_sha: 2cd967dccd66cfc3a0abd6af9f31e5b4d150a11c
fuelmain_sha: a65d453215edb0284a2e4761be7a156bb5627677
nailgun_sha: 4162b0c15adb425b37608c787944d1983f543aa8
openstack_version: 2015.1.0-7.0
production: docker
python-fuelclient_sha: 486bde57cda1badb68f915f66c61b544108606f3
release: '7.0'
release_versions:
  2015.1.0-7.0:
    VERSION:
      api: '1.0'
      astute_sha: 6c5b73f93e24cc781c809db9159927655ced5012
      build_id: '301'
      build_number: '301'
      feature_groups:
      - experimental
      - mirantis
      fuel-agent_sha: 50e90af6e3d560e9085ff71d2950cfbcca91af67
      fuel-library_sha: 5d50055aeca1dd0dc53b43825dc4c8f7780be9dd
      fuel-nailgun-agent_sha: d7027952870a35db8dc52f185bb1158cdd3d1ebd
      fuel-ostf_sha: 2cd967dccd66cfc3a0abd6af9f31e5b4d150a11c
      fuelmain_sha: a65d453215edb0284a2e4761be7a156bb5627677
      nailgun_sha: 4162b0c15adb425b37608c787944d1983f543aa8
      openstack_version: 2015.1.0-7.0
      production: docker
      python-fuelclient_sha: 486bde57cda1badb68f915f66c61b544108606f3
      release: '7.0'

Default Ceph configuration with 12 OSDs.

Information from astute:
storage:
  iser: false
  volumes_ceph: true
  per_pool_pg_nums:
    compute: 256
    default_pg_num: 64
    volumes: 512
    images: 64
    backups: 128
    ".rgw": 128
  objects_ceph: true
  ephemeral_ceph: true
  volumes_lvm: false
  images_vcenter: false
  osd_pool_size: '3'
  pg_num: 64
  images_ceph: true
  metadata:
    weight: 60
    label: Storage

But the deployed configuration is:
ceph osd dump | awk '/pool/ { print $3 " " $14 }'
'data' 64
'metadata' 64
'rbd' 64
'images' 64
'volumes' 64
'backups' 64
'.rgw.root' 64
'.rgw.control' 64
'.rgw' 64
'.rgw.gc' 64
'.users.uid' 64
'compute' 64
'.users' 64
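
A quick way to compare the deployed values against the intended ones (a minimal check; pool names taken from the per_pool_pg_nums snippet above):

# print the actual pg_num of each pool that has a non-default target
for pool in volumes compute backups images; do
    echo -n "$pool "
    ceph osd pool get $pool pg_num
done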

Tags: area-mos ceph
Revision history for this message
Dmitry Klenov (dklenov) wrote :

Pawel, can you please also attach a diagnostic snapshot?

Changed in fuel:
assignee: nobody → MOS Ceph (mos-ceph)
milestone: none → 8.0
importance: Undecided → High
status: New → Incomplete
Revision history for this message
Pawel Stefanski (pejotes) wrote :

I will attach it after the next deployment.

This bug should be patched in the next 7.0 update.

As a fix procedure for an already deployed environment, pg_num can be changed by issuing:
ceph osd pool set {pool-name} pg_num {pg_num}
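
For example, to bring the volumes pool up to its intended value (a sketch; note that pg_num can only be increased, never decreased, and pgp_num has to be raised as well so that the data is actually rebalanced):

ceph osd pool set volumes pg_num 512
ceph osd pool set volumes pgp_num 512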

Dmitry Pyzhov (dpyzhov)
tags: added: area-mos
Revision history for this message
Roman Podoliaka (rpodolyaka) wrote :

Pawel, can we close this bug now?

Changed in fuel:
status: Incomplete → Confirmed
assignee: MOS Ceph (mos-ceph) → Pawel Stefanski (pejotes)
Revision history for this message
Kyrylo Galanov (kgalanov) wrote :

Hi,

The issue may be caused by the fact that patch https://review.openstack.org/#/c/204811/12 was not merged.

--
Kyrylo

Kyrylo Galanov (kgalanov)
Changed in fuel:
assignee: Pawel Stefanski (pejotes) → Kyrylo Galanov (kgalanov)
status: Confirmed → In Progress