I think your suggested YAML scheme is good. size_kb: should be optional, so that omitting it fills the whole array with a single volume. The number of devices can be looked up by pairing 'container' with 'id'.
The EFI firmware can handle an EFI partition on this kind of RAID, so it is also no problem for GRUB, and Linux can of course use a VFAT filesystem on the array. That is one of the big pluses of this kind of soft RAID: it gets some integration into the whole system.
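For illustration, here is a rough sketch of how such a stanza could look. The field names (type:, metadata:, devices:, volumes:) are my assumptions, not a settled schema; only the container/id pairing and the optional size_kb: are from the discussion above.

```yaml
# Hypothetical sketch only -- field names other than 'container', 'id'
# and 'size_kb' are assumptions, not an agreed schema.
- type: raid-container
  id: imsm0
  metadata: imsm
  devices: [/dev/nvme0n1, /dev/nvme1n1, /dev/nvme2n1, /dev/nvme3n1]

- type: raid-volume
  container: imsm0   # pairs with the container's 'id' above; the device
                     # count is looked up from that container's 'devices'
  level: 5
  # size_kb: omitted -> the volume fills the whole array
```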
Here are mdadm --query --detail outputs for the container and a level 5 array:
/dev/md127:
Version : imsm
Raid Level : container
Total Devices : 4
Working Devices : 4
UUID : ba5ad77a:7618efd1:b178a313:c060a2e7
Member Arrays : /dev/md/126
Number Major Minor RaidDevice
- 259 2 - /dev/nvme2n1
- 259 0 - /dev/nvme1n1
- 259 1 - /dev/nvme0n1
- 259 3 - /dev/nvme3n1
---------------
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2930270208 (2794.52 GiB 3000.60 GB)
Used Dev Size : 976756736 (931.51 GiB 1000.20 GB)
Raid Devices : 4
Total Devices : 4
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 5fa06b36:53e67142:37ff9ad6:44ef0e89
Number Major Minor RaidDevice State
3 259 0 0 active sync /dev/nvme1n1
2 259 1 1 active sync /dev/nvme0n1
1 259 3 2 active sync /dev/nvme3n1
0 259 2 3 active sync /dev/nvme2n1