Upgrades to Ussuri fail sporadically on Debian (they used to fail on Ubuntu too) due to Neutron database migrations failing.
I have found the issue: it is a regression in MariaDB. The affected version is 10.3.29, which entered Debian on May 09: https://metadata.ftp-master.debian.org/changelogs//main/m/mariadb-10.3/mariadb-10.3_10.3.29-0+deb10u1_changelog (the upstream release was May 07). We use the upstream packages on Ubuntu, so it started failing there too. It then fixed itself in late June, when 10.3.30 was emergency-released on Jun 23 and pushed to the upstream repos. CentOS 8 uses 10.3.28, which sits just before the regression. 😂
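To check whether a given host runs the broken release, one can compare the server's reported version against 10.3.29. A small sketch — the helper name and the mysql invocation are illustrative, not part of any existing tooling:

```shell
#!/bin/sh
# Returns success (0) iff the given MariaDB version string is the affected 10.3.29.
is_affected() {
    case "$1" in
        10.3.29*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Usage against a live server (illustrative; adjust credentials as needed):
#   is_affected "$(mysql -N -e 'SELECT VERSION();')" && echo "hit by the regression"
```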
Issue variants:
1) early, e.g., CI-805225-1-debian-ebffa0e, CI-805266-1-debian-95ec869
c613d0b82681 -> c3e9d13c4367
c3e9d13c4367 -> 86274d77933e
oslo_db.exception.DBError: (pymysql.err.InternalError) (1054, "Unknown column '`neutron`.`networks_1`.`admin_state_up`' in 'CHECK'")
[SQL: ALTER TABLE networks MODIFY mtu INTEGER NOT NULL DEFAULT '1500']
2) late, e.g., CI-804337-1-debian-aa0c35c, CI-805266-1-debian-dc33623, -274f3f9, and yoctozepto's local
c613d0b82681 -> c3e9d13c4367
...
e4e236b0e1ff -> e88badaa9591
oslo_db.exception.DBError: (pymysql.err.InternalError) (1054, "Unknown column '`neutron`.`subnetpools_1`.`shared`' in 'CHECK'")
[SQL: ALTER TABLE subnetpools ALTER COLUMN shared SET DEFAULT false]
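For reference, the failure can presumably be reproduced outside of Neutron. A minimal sketch, assuming (per the MariaDB bug report) that the trigger is an ALTER on a table that carries a CHECK constraint, as the Neutron tables do — the table and column names below are made up:

```sql
-- Hypothetical minimal reproduction on MariaDB 10.3.29 (names are illustrative).
-- SQLAlchemy emits CHECK constraints for boolean columns on MySQL/MariaDB,
-- which is presumably why the Neutron tables are affected.
CREATE TABLE nets (
    admin_state_up TINYINT(1),
    mtu INT,
    CONSTRAINT ck_admin_state_up CHECK (admin_state_up IN (0, 1))
);

-- On 10.3.29, an ALTER like this may fail with an error of the form:
--   ERROR 1054 (42S22): Unknown column '`test`.`nets_1`.`admin_state_up`' in 'CHECK'
-- while 10.3.28 and 10.3.30 succeed.
ALTER TABLE nets MODIFY mtu INT NOT NULL DEFAULT 1500;
```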
the MariaDB bug report: https://jira.mariadb.org/browse/MDEV-25672
10.3.30 changelog: https://mariadb.com/kb/en/mariadb-10330-release-notes/
The Debian maintainers have been aware of the fixed release since Jun 25 (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=990306), but they have not released it even now (Sep 21).

PS: after the investigation, I still wonder how this issue can be random in CI yet seemingly permanent in my case (and also why I am getting the later variant 2 rather than the early one).