I'm torn between considering this a wishlist bug or a feature request.
I think this is perhaps related to the resource provider mappings.
With this configuration:

[devices]
enabled_vgpu_types = nvidia-474,nvidia-475,nvidia-476

[vgpu_nvidia-474]
device_addresses = 0000:61:00.4,0000:61:01.0

[vgpu_nvidia-475]
device_addresses = 0000:61:01.7

[vgpu_nvidia-476]
device_addresses = 0000:61:00.6
I would expect there to be 4 resource providers created, each with an inventory of 1 VGPU.
From the logs below:

3cd4dbc7-2c2a-448d-a041-27c8fd685950
7d5abf99-3c42-4c62-ba33-15682c6cfc5b
5e26d9e8-b59a-47b3-879c-c2c50ab7f1f0
58fbbedb-9845-4397-bd20-f559ba68daee
Can you do an inventory show on each and confirm that?
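Something like the following osc-placement invocation (this assumes you have the osc-placement CLI plugin installed) would show the inventory of each provider; repeat for each of the four UUIDs above:

```shell
# List the inventory of one of the vGPU resource providers reported above.
openstack resource provider inventory list 3cd4dbc7-2c2a-448d-a041-27c8fd685950
```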
Looking at the flavor, you appear to have added the correct trait request to have them target the appropriate RPs.
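For reference, the trait-based targeting you are using looks roughly like this; the CUSTOM_NVIDIA_474 trait and the flavor name are placeholders, not necessarily what you have configured:

```shell
# Tag the resource provider with a custom trait (placeholder names).
openstack --os-placement-api-version 1.6 resource provider trait set \
    --trait CUSTOM_NVIDIA_474 <rp-uuid>

# Request a vGPU and require that trait in the flavor.
openstack flavor set my-vgpu-flavor \
    --property "resources:VGPU=1" \
    --property "trait:CUSTOM_NVIDIA_474=required"
```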
The approach you are taking was replaced by the generic mdev feature in Xena: https://specs.openstack.org/openstack/nova-specs/specs/xena/implemented/generic-mdevs.html
There, instead of tagging the RP manually with a trait, you would use a different resource class per mdev type.
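A minimal sketch of the generic mdev approach, assuming the same mdev types and device addresses as above (the CUSTOM_* resource class names are illustrative):

```ini
[devices]
enabled_mdev_types = nvidia-474,nvidia-475,nvidia-476

[mdev_nvidia-474]
device_addresses = 0000:61:00.4,0000:61:01.0
mdev_class = CUSTOM_NVIDIA_474

[mdev_nvidia-475]
device_addresses = 0000:61:01.7
mdev_class = CUSTOM_NVIDIA_475

[mdev_nvidia-476]
device_addresses = 0000:61:00.6
mdev_class = CUSTOM_NVIDIA_476
```

The flavor then requests the resource class directly, e.g. resources:CUSTOM_NVIDIA_474=1, instead of requiring a trait.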
You are essentially trying to use this feature:

https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/vgpu-stein.html

But instead of having multiple physical GPUs, you are trying to use MIG to partition the GPU first into VFs.
That was intended to be enabled by https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/vgpu-multiple-types.html

However, when that feature was implemented, no released GPU supported MIG or multiple mdev types on the same card. As such, it was only ever tested with multiple mdev types on the same host, but with one pGPU per mdev_type.