Comment 15 for bug 1776888

wangwei (wangwei-david) wrote:

Hi Tone,
Since you don't understand why I said your implementation is not good, I will explain in detail:

1. in find_disks.py

 1) First of all, this script is not used only by ceph, it is also used by swift, and other components will use it later. So we want to make sure every component can use it and keep each component's logic as isolated as possible.
But you added a lot of bluestore-specific logic to the main function. It works, but it easily confuses others: a user who does not know about bluestore will not know what "_BS" means.

 2) I think your handling of the bluestore disks is too redundant: "extract_disk_info_bs" is executed four times for every partition.
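For example, a minimal sketch of what I mean (not the kolla code; the label suffixes and data shapes are only assumptions for illustration), classifying all the bluestore partitions of one OSD in a single pass:

```
# Hypothetical sketch: walk the partition list once and map each bluestore
# role to its partition by label suffix, instead of running an extraction
# helper four times per partition. The suffixes are illustrative only.
BS_ROLE_SUFFIXES = {'': 'osd', '_B': 'block', '_W': 'wal', '_D': 'db'}

def classify_bs_partitions(partitions, base_label):
    roles = {}
    for part in partitions:                        # e.g. parsed blkid output
        label = part.get('partition_label', '')
        for suffix, role in BS_ROLE_SUFFIXES.items():
            if label == base_label + suffix:
                roles[role] = part
    return roles
```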

 3) And if several disks all carry the label "KOLLA_CEPH_OSD_BOOTSTRAP_BS", your code only recognizes the last one, not every one. This is the first deployment method I mentioned above, and your code does not support it.
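A small sketch of what I expect, assuming a simple list of parsed partitions (the field names are only illustrative, this is not a patch):

```
# Hypothetical sketch: collect *every* partition carrying the plain bootstrap
# label instead of overwriting one variable, so that all same-label disks on a
# node become OSDs, not just the last one found.
def find_bs_osds(all_partitions, label='KOLLA_CEPH_OSD_BOOTSTRAP_BS'):
    osds = []
    for part in all_partitions:
        if part.get('partition_label', '') == label:
            osds.append(part)      # append, do not keep only the last match
    return osds
```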

 4) In the final result you return all of the disk information, even the empty fields. I don't think this can be ignored: why should the bluestore disk information also include the filestore fields? Ansible supports passing empty variables, and since we can handle the defaults in kolla-ansible, why should kolla pass these useless variables?
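For instance, something along these lines could drop the empty fields before returning (just a sketch for discussion, not the actual code):

```
# Hypothetical sketch: strip empty values from a per-OSD dict so bluestore
# results do not carry unused filestore keys (and vice versa); kolla-ansible
# can apply its own defaults for any missing keys.
def prune_empty(osd_info):
    return {key: value for key, value in osd_info.items()
            if value not in ('', None)}
```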

2. in extend_start.sh

1) About the ceph partition type codes
Bluestore has four type codes, but you only listed three, and you set the osd type code on the block partition.
https://github.com/ceph/ceph/blob/luminous/udev/95-ceph-osd.rules
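For reference, the four type GUIDs that bluestore uses, as I read them from the linked udev rules (please verify against that file before relying on them; I list them here only for discussion):

```
# Bluestore-related ceph partition type GUIDs (check 95-ceph-osd.rules for
# the authoritative values).
CEPH_BS_TYPE_CODES = {
    'osd data':  '4fbd7e29-9d25-41b8-afd0-062c0ceff05d',
    'block':     'cafecafe-9b03-4f30-b4c6-b4b80ceff106',
    'block.db':  '30cd0809-c2b2-499c-8879-2d6b78529876',
    'block.wal': '5ce17fce-4087-4169-b7ff-056cc58473f9',
}
```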

2) We all know that the journal is a filestore partition, so why use USE_EXTERNAL_JOURNAL to determine the partitions of a bluestore OSD?
In your code this variable is always false for bluestore, so what is the point of it?

3) You added a lot of logic to determine whether the device is a loop device; why not use the disk partition variables directly?

4) Finally, about calculating the OSD weight: for filestore the weight is calculated from the size of the osd partition, but for bluestore it should be calculated from the size of the block partition, and you have not noticed this.
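A minimal sketch of what I mean, assuming the usual "weight = size in TiB" convention (the device path and helper name are illustrative only, not the extend_start.sh code):

```
import os

# Hypothetical sketch: for bluestore, compute the CRUSH weight from the size
# of the block partition (e.g. /dev/sdb2), not from the small metadata
# partition that the OSD directory is mounted from.
def osd_weight_tib(block_partition):
    fd = os.open(block_partition, os.O_RDONLY)
    try:
        size_bytes = os.lseek(fd, 0, os.SEEK_END)  # device size in bytes
    finally:
        os.close(fd)
    return round(size_bytes / (1024 ** 4), 4)      # bytes -> TiB
```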

3. in bootstrap_osds.yml

I don't understand why you are passing these parameters. Are they actually used anywhere?

```
OSD_BS_LABEL: "{{ item.1.partition_label | default('') }}"
OSD_BS_BLK_LABEL: "{{ item.1.bs_blk_label | default('') }}"
OSD_BS_WAL_LABEL: "{{ item.1.bs_wal_label | default('') }}"
OSD_BS_DB_LABEL: "{{ item.1.bs_db_label | default('') }}"
```

4. in start_osds.yml

Same problem as above:

```
OSD_BS_FSUUID: "{{ item.1['fs_uuid'] }}"
```

5. in ceph-guide.rst

Your description says that if there are multiple osds on the same node, the user should add a suffix to the label. I think we should also support the case where all the disks on the same node carry the plain label "KOLLA_CEPH_OSD_BOOTSTRAP_BS", which saves the trouble of creating a lot of different labels.

Our company has been using kolla to deploy ceph since the Mitaka version. In our use of ceph we sometimes do custom development, and because of work requirements we were ahead of the community on the version upgrade and the bluestore implementation. In my understanding, we should treat kolla as a tool, not a product. Our users are developers or operators with basic knowledge, not people who know nothing.
So what kolla should provide is a correct deployment method, and on that basis give users more freedom. The third deployment method I mentioned is that users can customize how the disks are partitioned for deployment. How users divide their disks is not something we should decide for them; I don't think a normal ceph user will create a meaningless partition layout in a production environment.

Thanks!