Just to clarify for whoever is going to read this:
"Fix Released" here does not mean that you will get a consistent /dev/bcache<n> matching what was configured in MAAS.
E.g. the /dev/bcache0 seen in MAAS and in the generated curtin file may well end up being /dev/bcache3 on the booted system, since enumeration is not guaranteed.
https://wiki.ubuntu.com/ServerTeam/Bcache
"Warning! If you have multiple bcache devices, you NEED to use UUID, enumeration of bcache devices is not guaranteed."
The /dev/disk/by-dname/bcache<i> symlinks would supposedly provide a solution, but in practice they do not.
View 1 (curtin) - bcache0 is as we configured:
http://paste.ubuntu.com/25774666/
- backing_device: sda-part3
cache_device: nvme0n1
cache_mode: writeback
id: bcache0
name: bcache0
type: bcache
View 2 (by-dname) - bcache0 is bcache4:
$ tree /dev/disk/by-dname/
/dev/disk/by-dname/
├── bcache0 -> ../../bcache4 # should point to bcache3 to be consistent
├── bcache1 -> ../../bcache0
├── bcache2 -> ../../bcache1
├── bcache3 -> ../../bcache2
├── bcache4 -> ../../bcache3
├── sda -> ../../sda
├── sda-part1 -> ../../sda1
├── sda-part2 -> ../../sda2
└── sda-part3 -> ../../sda3
0 directories, 9 files
View 3 - bcache0 is bcache3 (actual enumeration result):
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.7T 0 disk
├─sda1 8:1 0 476M 0 part /boot/efi
├─sda2 8:2 0 1.9G 0 part /boot
└─sda3 8:3 0 3.7T 0 part
└─bcache3 250:3 0 3.7T 0 disk /
sdb 8:16 0 3.7T 0 disk
└─bcache2 250:2 0 3.7T 0 disk
sdc 8:32 0 3.7T 0 disk
└─bcache1 250:1 0 3.7T 0 disk
sdd 8:48 0 3.7T 0 disk
└─bcache0 250:0 0 3.7T 0 disk
sde 8:64 0 3.7T 0 disk
└─bcache4 250:4 0 3.7T 0 disk
sr0 11:0 1 1024M 0 rom
loop0 7:0 0 83M 1 loop /snap/core/3017
loop1 7:1 0 4.4M 1 loop /snap/canonical-livepatch/26
nvme0n1 259:0 0 1.8T 0 disk
├─bcache0 250:0 0 3.7T 0 disk
├─bcache1 250:1 0 3.7T 0 disk
├─bcache2 250:2 0 3.7T 0 disk
├─bcache3 250:3 0 3.7T 0 disk /
└─bcache4 250:4 0 3.7T 0 disk
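Given the above, the only robust way for a script to find "the bcache on sda3" today is to resolve through sysfs instead of trusting enumeration order. A minimal sketch (the helper takes a sysfs root so it can be exercised against a fake tree; on a real system you would pass /sys):

```shell
#!/bin/sh
# find_bcache SYSFS_ROOT BACKING: print the /dev/bcacheN whose
# backing partition is BACKING (e.g. "sda3"), by scanning sysfs.
# Assumes the standard slaves/ directory that stacked block devices
# expose, which for bcache lists both the backing and cache members.
find_bcache() {
    root=$1; backing=$2
    for dev in "$root"/block/bcache*; do
        if [ -e "$dev/slaves/$backing" ]; then
            echo "/dev/${dev##*/}"
            return 0
        fi
    done
    return 1
}

# Real-system usage: find_bcache /sys sda3   (would print /dev/bcache3 here)
```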
The only thing you can address reliably is the UUID of a file system on a given device, because then no device name is used in /etc/fstab.
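For a mounted file system this is workable: fstab references the UUID and /dev/bcache<n> never appears (the UUID below is illustrative, not from this machine):

```
# /etc/fstab -- mount root by file-system UUID, not by /dev/bcache<n>
UUID=2f6b7a10-1c4e-4d2c-9a5e-0123456789ab  /  ext4  defaults  0  1
```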
This limits what can be done for applications that need reliable raw block device names (e.g. ceph): a file system has to be pre-created on the device, which pushes ceph into "directory mode", where both the "data device" and, in the filestore case, the "journal device" become files on that pre-created file system, hurting performance.
We need to do something about this if we are to use bcache for bluestore, which requires raw block devices to get certain performance characteristics: the whole idea is to remove the file system from the equation, and with this enumeration randomness there is no reliable device name to specify beforehand.
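One possible mitigation (a sketch, not a tested rule): newer bcache-tools ship udev rules that create /dev/bcache/by-uuid/<uuid> symlinks keyed on the backing superblock UUID, and where those are not available something similar could be attempted with a custom rule. The CACHED_UUID property name is an assumption based on the bcache-tools rules and must be verified on the target release:

```
# /etc/udev/rules.d/99-bcache-by-uuid.rules (hypothetical sketch)
# Create one stable symlink per bcache device, keyed on the backing
# superblock UUID, so ceph/bluestore can be pointed at a fixed path.
SUBSYSTEM=="block", KERNEL=="bcache*", ENV{CACHED_UUID}=="?*", \
    SYMLINK+="bcache/by-uuid/$env{CACHED_UUID}"
```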