Activity log for bug #1934849

Date Who What changed Old value New value Message
2021-07-07 04:53:31 Nobuto Murata bug added bug
2021-07-07 04:57:38 Nobuto Murata attachment added glance-api.log https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1934849/+attachment/5509534/+files/glance-api.log
2021-07-07 04:57:55 Nobuto Murata bug task added glance-store
2021-07-07 13:46:37 Launchpad Janitor python-glance-store (Ubuntu): status New Confirmed
2021-07-07 13:46:53 Alexander Litvinov bug added subscriber Alexander Litvinov
2021-07-07 16:56:26 OpenStack Infra glance-store: status New In Progress
2021-07-07 19:58:30 Vladimir Grevtsev bug added subscriber Vladimir Grevtsev
2021-07-09 18:10:16 OpenStack Infra glance-store: status In Progress Fix Released
2021-07-12 03:22:37 Nobuto Murata bug added subscriber Canonical Field High
2021-07-12 08:22:45 David Negreira bug added subscriber David Negreira
2021-07-13 08:48:14 Dominique Poulain bug added subscriber Dominique Poulain
2021-07-19 16:02:09 Corey Bryant description

Old value:

I have a test Ceph cluster as object storage with both the Swift and S3 protocols enabled for Glance (Ussuri). When I use the Swift backend with Glance, an image upload completes quickly enough. But with the S3 backend, it takes much more time to upload an image and the time seems to rise exponentially. It's worth noting that when uploading an image with the S3 backend, a single core is consumed 100% by the glance-api process.

for backend in swift s3; do
    for i in {8,16,32,64,128,512}; do
        dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
        time glance image-create \
            --store $backend \
            --file my-image.img --name my-image \
            --disk-format raw --container-format bare \
            --progress
    done
done

[swift]
8MB - 2.4s
16MB - 2.8s
32MB - 2.6s
64MB - 2.7s
128MB - 3.1s
...
512MB - 5.9s

[s3]
8MB - 2.2s
16MB - 2.9s
32MB - 5.5s
64MB - 16.3s
128MB - 54.9s
...
512MB - 14m26s

Btw, downloading a 512MB image with the S3 backend completes in less than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real 0m5.673s
$ du -h downloaded.img
512M downloaded.img

[/etc/glance/glance-api.conf]
enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag nft_masq nft_chain_nat bridge stp llc vhost_vsock vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul cirrus ghash_clmulni_intel drm_kms_helper virtio_net syscopyarea aesni_intel sysfillrect sysimgblt fb_sys_fops crypto_simd cryptd drm virtio_blk glue_helper net_failover psmouse failover floppy i2c_piix4 pata_acpi
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul 7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)

New value:

[Impact]

[Test Case]

(remainder identical to the old value above)
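The superlinear growth in the S3 timings quoted in the description above can be made explicit by computing the time ratio across successive size steps. This is a small illustrative sketch, not part of the bug report; the timing values are the ones reported by the reporter:

```python
# Upload times (seconds) as reported in the bug description.
swift = {8: 2.4, 16: 2.8, 32: 2.6, 64: 2.7, 128: 3.1, 512: 5.9}
s3 = {8: 2.2, 16: 2.9, 32: 5.5, 64: 16.3, 128: 54.9, 512: 14 * 60 + 26}

sizes = sorted(swift)
for prev, cur in zip(sizes, sizes[1:]):
    factor = cur // prev  # size grew by this factor (2x, or 4x for 128->512)
    print(f"{prev}MB -> {cur}MB (size x{factor}): "
          f"swift time x{swift[cur] / swift[prev]:.2f}, "
          f"s3 time x{s3[cur] / s3[prev]:.2f}")

# Swift stays near x1 per doubling; S3 roughly triples-to-quadruples per
# doubling beyond ~32MB, i.e. upload time grows ~quadratically with size.
```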
2021-07-19 18:36:20 Corey Bryant bug task added cloud-archive
2021-07-19 18:36:34 Corey Bryant nominated for series cloud-archive/wallaby
2021-07-19 18:36:34 Corey Bryant bug task added cloud-archive/wallaby
2021-07-19 18:36:34 Corey Bryant nominated for series cloud-archive/victoria
2021-07-19 18:36:34 Corey Bryant bug task added cloud-archive/victoria
2021-07-19 18:36:34 Corey Bryant nominated for series cloud-archive/ussuri
2021-07-19 18:36:34 Corey Bryant bug task added cloud-archive/ussuri
2021-07-19 18:36:42 Corey Bryant cloud-archive: status New Fix Released
2021-07-19 18:36:45 Corey Bryant cloud-archive/ussuri: status New Triaged
2021-07-19 18:36:47 Corey Bryant cloud-archive/victoria: status New Triaged
2021-07-19 18:36:50 Corey Bryant cloud-archive/wallaby: status New Triaged
2021-07-19 18:36:51 Corey Bryant cloud-archive/wallaby: importance Undecided High
2021-07-19 18:36:54 Corey Bryant cloud-archive/victoria: importance Undecided High
2021-07-19 18:36:55 Corey Bryant cloud-archive/ussuri: importance Undecided High
2021-07-19 18:37:52 Corey Bryant nominated for series Ubuntu Hirsute
2021-07-19 18:37:52 Corey Bryant bug task added python-glance-store (Ubuntu Hirsute)
2021-07-19 18:37:52 Corey Bryant nominated for series Ubuntu Focal
2021-07-19 18:37:52 Corey Bryant bug task added python-glance-store (Ubuntu Focal)
2021-07-19 18:38:00 Corey Bryant python-glance-store (Ubuntu): status Confirmed Fix Released
2021-07-19 18:38:03 Corey Bryant python-glance-store (Ubuntu Focal): status New Triaged
2021-07-19 18:38:05 Corey Bryant python-glance-store (Ubuntu Hirsute): status New Triaged
2021-07-19 18:38:08 Corey Bryant python-glance-store (Ubuntu Hirsute): importance Undecided High
2021-07-19 18:38:11 Corey Bryant python-glance-store (Ubuntu Focal): importance Undecided High
2021-07-19 18:42:06 Corey Bryant bug added subscriber Ubuntu Stable Release Updates Team
2021-07-20 03:59:20 Nobuto Murata description

Old value:

[Impact]

[Test Case]

I have a test Ceph cluster as object storage with both the Swift and S3 protocols enabled for Glance (Ussuri). When I use the Swift backend with Glance, an image upload completes quickly enough. But with the S3 backend, it takes much more time to upload an image and the time seems to rise exponentially. It's worth noting that when uploading an image with the S3 backend, a single core is consumed 100% by the glance-api process.

for backend in swift s3; do
    for i in {8,16,32,64,128,512}; do
        dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
        time glance image-create \
            --store $backend \
            --file my-image.img --name my-image \
            --disk-format raw --container-format bare \
            --progress
    done
done

[swift]
8MB - 2.4s
16MB - 2.8s
32MB - 2.6s
64MB - 2.7s
128MB - 3.1s
...
512MB - 5.9s

[s3]
8MB - 2.2s
16MB - 2.9s
32MB - 5.5s
64MB - 16.3s
128MB - 54.9s
...
512MB - 14m26s

Btw, downloading a 512MB image with the S3 backend completes in less than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real 0m5.673s
$ du -h downloaded.img
512M downloaded.img

[/etc/glance/glance-api.conf]
enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag nft_masq nft_chain_nat bridge stp llc vhost_vsock vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul cirrus ghash_clmulni_intel drm_kms_helper virtio_net syscopyarea aesni_intel sysfillrect sysimgblt fb_sys_fops crypto_simd cryptd drm virtio_blk glue_helper net_failover psmouse failover floppy i2c_piix4 pata_acpi
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul 7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)

New value:

[Impact]
Glance with the S3 backend cannot accept image uploads in a realistic time frame. For example, a 1GB image upload takes ~60 minutes, although other backends such as Swift can complete it within 10 seconds.

[Test Plan]
1. Deploy a partial OpenStack with multiple Glance backends including S3 (the zaza test bundles can be used with "ceph", which sets up the "rbd", "swift", and "s3" backends - https://opendev.org/openstack/charm-glance/src/branch/master/tests/tests.yaml)
2. Upload multiple images with a variety of sizes
3. Confirm that image upload durations are shorter in general after applying the updated package (the expected duration for 1GB drops from ~60 minutes to 1-3 minutes)

for backend in ceph swift s3; do
    echo "[$backend]"
    for i in {0,3,5,9,10,128,512,1024}; do
        dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
        echo "${i}MiB"
        time glance image-create \
            --store $backend \
            --file my-image.img --name "my-image-${backend}-${i}MiB" \
            --disk-format raw --container-format bare \
            --progress
    done
done

[Where problems could occur]
Since we bump WRITE_CHUNKSIZE from 64KiB to 5MiB, there might be a case where image uploads fail if the size of the image is less than WRITE_CHUNKSIZE, or there might be unexpected latency in the worst-case scenario. We will try to address these concerns by testing multiple image uploads at multiple sizes, including the following corner cases:
- 0 - zero
- 3MiB - less than the new WRITE_CHUNKSIZE (5MiB)
- 5MiB - exactly the same as the new WRITE_CHUNKSIZE (5MiB)
- 9MiB - bigger than the new WRITE_CHUNKSIZE (5MiB) but less than twice it
- 10MiB - exactly twice the new WRITE_CHUNKSIZE (5MiB)
- 128MiB, 512MiB, 1024MiB - some large images

====

I have a test Ceph cluster as object storage with both the Swift and S3 protocols enabled for Glance (Ussuri). When I use the Swift backend with Glance, an image upload completes quickly enough. But with the S3 backend, it takes much more time to upload an image and the time seems to rise exponentially. It's worth noting that when uploading an image with the S3 backend, a single core is consumed 100% by the glance-api process.

[swift]
8MB - 2.4s
16MB - 2.8s
32MB - 2.6s
64MB - 2.7s
128MB - 3.1s
...
512MB - 5.9s

[s3]
8MB - 2.2s
16MB - 2.9s
32MB - 5.5s
64MB - 16.3s
128MB - 54.9s
...
512MB - 14m26s

Btw, downloading a 512MB image with the S3 backend completes in less than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real 0m5.673s
$ du -h downloaded.img
512M downloaded.img

[/etc/glance/glance-api.conf]
enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul 7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)
2021-07-20 04:12:20 Nobuto Murata description

Old value:

[Impact]

[Test Case]

I have a test Ceph cluster as object storage with both the Swift and S3 protocols enabled for Glance (Ussuri). When I use the Swift backend with Glance, an image upload completes quickly enough. But with the S3 backend, it takes much more time to upload an image and the time seems to rise exponentially. It's worth noting that when uploading an image with the S3 backend, a single core is consumed 100% by the glance-api process.

for backend in swift s3; do
    for i in {8,16,32,64,128,512}; do
        dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
        /usr/bin/time --format=%E glance image-create \
            --store $backend \
            --file my-image.img --name my-image \
            --disk-format raw --container-format bare \
            --progress
    done
done

[swift]
8MB - 2.4s
16MB - 2.8s
32MB - 2.6s
64MB - 2.7s
128MB - 3.1s
...
512MB - 5.9s

[s3]
8MB - 2.2s
16MB - 2.9s
32MB - 5.5s
64MB - 16.3s
128MB - 54.9s
...
512MB - 14m26s

Btw, downloading a 512MB image with the S3 backend completes in less than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real 0m5.673s
$ du -h downloaded.img
512M downloaded.img

[/etc/glance/glance-api.conf]
enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul 7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)

New value:

(identical to the old value above, with the following line added to the Apport metadata)

NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag nft_masq nft_chain_nat bridge stp llc vhost_vsock vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul cirrus ghash_clmulni_intel drm_kms_helper virtio_net syscopyarea aesni_intel sysfillrect sysimgblt fb_sys_fops crypto_simd cryptd drm virtio_blk glue_helper net_failover psmouse failover floppy i2c_piix4 pata_acpi
2021-07-20 17:26:58 Nobuto Murata description

Old value:

[Impact]

[Test Case]

I have a test Ceph cluster as object storage with both the Swift and S3 protocols enabled for Glance (Ussuri). When I use the Swift backend with Glance, an image upload completes quickly enough. But with the S3 backend, it takes much more time to upload an image and the time seems to rise exponentially. It's worth noting that when uploading an image with the S3 backend, a single core is consumed 100% by the glance-api process.

for backend in swift s3; do
    for i in {8,16,32,64,128,512}; do
        dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
        /usr/bin/time --format=%E glance image-create \
            --store $backend \
            --file my-image.img --name my-image \
            --disk-format raw --container-format bare \
            --progress
    done
done

[swift]
8MB - 2.4s
16MB - 2.8s
32MB - 2.6s
64MB - 2.7s
128MB - 3.1s
...
512MB - 5.9s

[s3]
8MB - 2.2s
16MB - 2.9s
32MB - 5.5s
64MB - 16.3s
128MB - 54.9s
...
512MB - 14m26s

Btw, downloading a 512MB image with the S3 backend completes in less than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real 0m5.673s
$ du -h downloaded.img
512M downloaded.img

[/etc/glance/glance-api.conf]
enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag nft_masq nft_chain_nat bridge stp llc vhost_vsock vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul cirrus ghash_clmulni_intel drm_kms_helper virtio_net syscopyarea aesni_intel sysfillrect sysimgblt fb_sys_fops crypto_simd cryptd drm virtio_blk glue_helper net_failover psmouse failover floppy i2c_piix4 pata_acpi
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul 7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)

New value:

[Impact]
Glance with the S3 backend cannot accept image uploads in a realistic time frame. For example, a 1GB image upload takes ~60 minutes, although other backends such as Swift can complete it within 10 seconds.

[Test Plan]
1. Deploy a partial OpenStack with multiple Glance backends including S3 (the zaza test bundles can be used with "ceph", which sets up the "rbd", "swift", and "s3" backends - https://opendev.org/openstack/charm-glance/src/branch/master/tests/tests.yaml)
2. Upload multiple images with a variety of sizes
3. Confirm that image upload durations are shorter in general after applying the updated package (the expected duration for 1GB drops from ~60 minutes to 1-3 minutes)

for backend in ceph swift s3; do
    echo "[$backend]"
    for i in {0,3,5,9,10,128,512,1024}; do
        dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
        echo "${i}MiB"
        time glance image-create \
            --store $backend \
            --file my-image.img --name "my-image-${backend}-${i}MiB" \
            --disk-format raw --container-format bare \
            --progress
    done
done

[Where problems could occur]
Since we bump WRITE_CHUNKSIZE from 64KiB to 5MiB, there might be a case where image uploads fail if the size of the image is less than WRITE_CHUNKSIZE, or there might be unexpected latency in the worst-case scenario. We will try to address these concerns by testing multiple image uploads at multiple sizes, including the following corner cases:
- 0 - zero
- 3MiB - less than the new WRITE_CHUNKSIZE (5MiB)
- 5MiB - exactly the same as the new WRITE_CHUNKSIZE (5MiB)
- 9MiB - bigger than the new WRITE_CHUNKSIZE (5MiB) but less than twice it
- 10MiB - exactly twice the new WRITE_CHUNKSIZE (5MiB)
- 128MiB, 512MiB, 1024MiB - some large images

====

I have a test Ceph cluster as object storage with both the Swift and S3 protocols enabled for Glance (Ussuri). When I use the Swift backend with Glance, an image upload completes quickly enough. But with the S3 backend, it takes much more time to upload an image and the time seems to rise exponentially. It's worth noting that when uploading an image with the S3 backend, a single core is consumed 100% by the glance-api process.

[swift]
8MB - 2.4s
16MB - 2.8s
32MB - 2.6s
64MB - 2.7s
128MB - 3.1s
...
512MB - 5.9s

[s3]
8MB - 2.2s
16MB - 2.9s
32MB - 5.5s
64MB - 16.3s
128MB - 54.9s
...
512MB - 14m26s

Btw, downloading a 512MB image with the S3 backend completes in less than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real 0m5.673s
$ du -h downloaded.img
512M downloaded.img

[/etc/glance/glance-api.conf]
enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul 7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)
2021-07-21 16:06:33 Robie Basak python-glance-store (Ubuntu Hirsute): status Triaged Fix Committed
2021-07-21 16:06:36 Robie Basak bug added subscriber SRU Verification
2021-07-21 16:06:41 Robie Basak tags amd64 apport-bug focal uec-images amd64 apport-bug focal uec-images verification-needed verification-needed-hirsute
2021-07-21 16:07:04 Robie Basak python-glance-store (Ubuntu Focal): status Triaged Fix Committed
2021-07-21 16:07:09 Robie Basak tags amd64 apport-bug focal uec-images verification-needed verification-needed-hirsute amd64 apport-bug focal uec-images verification-needed verification-needed-focal verification-needed-hirsute
2021-07-21 17:44:43 Corey Bryant cloud-archive/victoria: status Triaged Fix Committed
2021-07-21 17:44:45 Corey Bryant tags amd64 apport-bug focal uec-images verification-needed verification-needed-focal verification-needed-hirsute amd64 apport-bug focal uec-images verification-needed verification-needed-focal verification-needed-hirsute verification-victoria-needed
2021-07-21 18:26:32 Corey Bryant cloud-archive/wallaby: status Triaged Fix Committed
2021-07-21 18:26:34 Corey Bryant tags amd64 apport-bug focal uec-images verification-needed verification-needed-focal verification-needed-hirsute verification-victoria-needed amd64 apport-bug focal uec-images verification-needed verification-needed-focal verification-needed-hirsute verification-victoria-needed verification-wallaby-needed
2021-07-21 18:26:36 Corey Bryant cloud-archive/ussuri: status Triaged Fix Committed
2021-07-26 17:46:09 Nobuto Murata tags amd64 apport-bug focal uec-images verification-needed verification-needed-focal verification-needed-hirsute verification-victoria-needed verification-wallaby-needed amd64 apport-bug focal uec-images verification-done-hirsute verification-needed verification-needed-focal verification-victoria-needed verification-wallaby-needed
2021-07-26 17:48:37 Nobuto Murata tags amd64 apport-bug focal uec-images verification-done-hirsute verification-needed verification-needed-focal verification-victoria-needed verification-wallaby-needed amd64 apport-bug focal uec-images verification-done-focal verification-done-hirsute verification-needed verification-victoria-needed verification-wallaby-needed
2021-07-27 00:38:41 Nobuto Murata tags amd64 apport-bug focal uec-images verification-done-focal verification-done-hirsute verification-needed verification-victoria-needed verification-wallaby-needed amd64 apport-bug focal uec-images verification-done-focal verification-done-hirsute verification-needed verification-victoria-needed verification-wallaby-done
2021-07-27 12:51:00 Nobuto Murata tags amd64 apport-bug focal uec-images verification-done-focal verification-done-hirsute verification-needed verification-victoria-needed verification-wallaby-done amd64 apport-bug focal uec-images verification-done-focal verification-done-hirsute verification-needed verification-ussuri-done verification-victoria-needed verification-wallaby-done
2021-07-27 16:34:01 Nobuto Murata tags amd64 apport-bug focal uec-images verification-done-focal verification-done-hirsute verification-needed verification-ussuri-done verification-victoria-needed verification-wallaby-done amd64 apport-bug focal uec-images verification-done verification-done-focal verification-done-hirsute verification-ussuri-done verification-victoria-done verification-wallaby-done
2021-07-29 19:14:14 Launchpad Janitor python-glance-store (Ubuntu Hirsute): status Fix Committed Fix Released
2021-07-29 19:14:19 Brian Murray removed subscriber Ubuntu Stable Release Updates Team
2021-07-29 19:15:02 Launchpad Janitor python-glance-store (Ubuntu Focal): status Fix Committed Fix Released
2021-07-30 12:28:18 Corey Bryant cloud-archive/wallaby: status Fix Committed Fix Released
2021-07-30 12:28:23 Corey Bryant cloud-archive/victoria: status Fix Committed Fix Released
2021-07-30 12:28:34 Corey Bryant cloud-archive/ussuri: status Fix Committed Fix Released