The following is part of the log (/var/log/puppet/latest/puppet.log):
2020-01-16T09:35:07.689 Debug: 2020-01-16 09:35:07 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/unless: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5707: UserWarning:
2020-01-16T09:35:07.732 Debug: 2020-01-16 09:35:07 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/unless: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5739: UserWarning:
2020-01-16T09:36:15.967 Notice: 2020-01-16 09:36:15 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5707: UserWarning:
2020-01-16T09:36:16.507 Notice: 2020-01-16 09:36:16 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5739: UserWarning:
2020-01-16T09:36:20.485 Notice: 2020-01-16 09:36:20 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5707: UserWarning:
2020-01-16T09:36:20.568 Notice: 2020-01-16 09:36:20 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5739: UserWarning:
2020-01-16T09:36:34.615 Error: 2020-01-16 09:36:34 +0000 yes yes | drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k returned 40 instead of one of [0]
2020-01-16T09:36:34.843 Error: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Drbd::Pgsql/Platform::Drbd::Filesystem[drbd-pgsql]/Drbd::Resource[drbd-pgsql]/Drbd::Resource::Enable[drbd-pgsql]/Drbd::Resource::Up[drbd-pgsql]/Exec[initialize DRBD metadata for drbd-pgsql]/returns: change from notrun to 0 failed: yes yes | drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k returned 40 instead of one of [0]
2020-01-16T09:36:34.852 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Drbd::Pgsql/Platform::Drbd::Filesystem[drbd-pgsql]/Drbd::Resource[drbd-pgsql]/Drbd::Resource::Enable[drbd-pgsql]/Drbd::Resource::Up[drbd-pgsql]/Exec[enable DRBD resource drbd-pgsql]: Skipping because of failed dependencies
2020-01-16T09:36:35.052 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Drbd::Service/Service[drbd]: Skipping because of failed dependencies
2020-01-16T09:36:35.094 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Anchors/Anchor[platform::services]: Skipping because of failed dependencies
2020-01-16T09:36:35.122 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Helm/File[/opt/platform/helm_charts]: Skipping because of failed dependencies
2020-01-16T09:36:35.135 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Helm/Exec[restart lighttpd for helm]: Skipping because of failed dependencies
2020-01-16T09:36:35.164 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Helm::Repositories/Platform::Helm::Repository[starlingx]/File[/www/pages/helm_charts/starlingx]: Skipping because of failed dependencies
It seems that the DRBD config fails. When I run "drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k", I get the following result:
Move internal meta data from last-known position?
[need to type 'yes' to confirm] yes
md_offset 21474832384
al_offset 0
bm_offset 0
Found ext3 filesystem
20971520 kB data area apparently used
0 kB left usable by current configuration
Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
* use external meta data (recommended)
* shrink that filesystem first
* zero out the device (destroy the filesystem)
Operation refused.
Command 'drbdmeta 0 v08 /dev/cgts-vg/pgsql-lv internal create-md --peer-max-bio-size=128k' terminated with exit code 40
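For reference, my understanding is that the "zero out the device" option the tool suggests would map to roughly the following commands (a sketch only; the device path /dev/cgts-vg/pgsql-lv is taken from the error message above, and wiping it destroys whatever filesystem and data are on that volume):

# remove the stale ext3 signature so internal metadata fits (DESTROYS data on the LV)
wipefs -a /dev/cgts-vg/pgsql-lv
# then retry metadata creation with the same options puppet used
drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k

I haven't run this yet because I'm not sure whether the existing ext3 filesystem on the volume is expected to be there.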
I'd like to know how to configure DRBD correctly here, or whether I set some config wrong.