Activity log for bug #1882527

Date Who What changed Old value New value Message
2020-06-08 12:19:38 Warwick Bruce Chapman bug added bug
2020-06-09 03:01:04 Daniel van Vugt affects xorg-server (Ubuntu) mysql-8.0 (Ubuntu)
2020-06-09 03:01:11 Daniel van Vugt tags focal
2020-06-13 10:55:24 Balint Harmath mysql-8.0 (Ubuntu): status New Confirmed
2020-06-15 06:21:26 Rafael David Tinoco bug added subscriber Ubuntu Server
2020-06-15 06:22:18 Rafael David Tinoco nominated for series Ubuntu Eoan
2020-06-15 06:22:18 Rafael David Tinoco bug task added mysql-8.0 (Ubuntu Eoan)
2020-06-15 06:22:18 Rafael David Tinoco nominated for series Ubuntu Bionic
2020-06-15 06:22:18 Rafael David Tinoco bug task added mysql-8.0 (Ubuntu Bionic)
2020-06-15 06:22:18 Rafael David Tinoco nominated for series Ubuntu Groovy
2020-06-15 06:22:18 Rafael David Tinoco bug task added mysql-8.0 (Ubuntu Groovy)
2020-06-15 06:22:18 Rafael David Tinoco nominated for series Ubuntu Focal
2020-06-15 06:22:18 Rafael David Tinoco bug task added mysql-8.0 (Ubuntu Focal)
2020-06-15 06:22:26 Rafael David Tinoco bug task deleted mysql-8.0 (Ubuntu Groovy)
2020-06-15 06:22:31 Rafael David Tinoco mysql-8.0 (Ubuntu Focal): status New Triaged
2020-06-15 06:22:33 Rafael David Tinoco mysql-8.0 (Ubuntu Eoan): status New Triaged
2020-06-15 06:22:35 Rafael David Tinoco mysql-8.0 (Ubuntu Bionic): status New Triaged
2020-06-15 06:22:39 Rafael David Tinoco mysql-8.0 (Ubuntu Bionic): importance Undecided Medium
2020-06-15 06:22:41 Rafael David Tinoco mysql-8.0 (Ubuntu Eoan): importance Undecided Medium
2020-06-15 06:22:43 Rafael David Tinoco mysql-8.0 (Ubuntu Focal): importance Undecided Medium
2020-06-15 06:22:48 Rafael David Tinoco mysql-8.0 (Ubuntu): importance Undecided High
2020-06-15 06:22:56 Rafael David Tinoco tags focal focal server-next
2020-06-15 15:23:00 Robie Basak tags focal server-next bitesize focal server-next
2020-08-18 16:59:19 Brian Murray mysql-8.0 (Ubuntu Eoan): status Triaged Won't Fix
2020-12-17 10:57:55 Koen bug watch added http://bugs.mysql.com/bug.php?id=91423
2020-12-17 10:58:48 Koen mysql-8.0 (Ubuntu): assignee Koen (koen-beek)
2020-12-17 10:59:03 Koen bug added subscriber Koen
2020-12-17 18:47:45 Koen mysql-8.0 (Ubuntu): assignee Koen (koen-beek)
2020-12-21 00:58:42 Koen description changed; new value:

MySQL on 20.04 has TimeoutSec set to 600 (IIRC) in the systemd script. This has the effect of killing the MySQL process if this timeout is reached. IMHO this is a Very Bad Idea. A database server process should only be force killed by a user action. I would prefer that the server had unlimited time to cleanly shut down and start up (e.g. if recovering).

Our DB is about 500GB with some very large tables (for us at least), e.g. 250GB, and we've had more than a few unfortunate delays as a result of delayed startup caused by recoveries because MySQL was killed prematurely. Because MySQL 8.0 has reduced the default logging level, it was not clear to me that the process was being force killed.

I believe the MySQL team are of the same view as me, per https://bugs.mysql.com/bug.php?id=91423:

```
[12 Jul 2019 15:57] Paul Dubois
Posted by developer:
Fixed in 8.0.18. On Debian, long InnoDB recovery times at startup could cause systemd service startup failure. The default systemd service timeout is now disabled (consistent with RHEL) to prevent this from happening.
```
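The report above notes that, with MySQL 8.0's reduced default logging, it was not obvious that systemd was the one killing the process. A hedged way to check this on an affected system is to look at the unit's journal rather than the MySQL error log; the exact message wording varies between systemd versions, so treat the grep pattern as an assumption:

```
# Did systemd kill mysqld because of a start timeout?
journalctl -u mysql -b --no-pager | grep -Ei 'timed out|killing|failed'
systemctl show mysql -p TimeoutStartUSec   # the limit currently in effect
```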
2021-02-13 18:15:56 Launchpad Janitor mysql-8.0 (Ubuntu): status Confirmed Fix Released
2021-08-17 13:12:14 Paride Legovini mysql-8.0 (Ubuntu Bionic): assignee Paride Legovini (paride)
2021-08-17 13:12:16 Paride Legovini mysql-8.0 (Ubuntu Focal): assignee Paride Legovini (paride)
2021-08-17 14:58:48 Paride Legovini description changed; new value:

[Impact]

mysql-server-5.7.mysql.service (bionic) and mysql-server-8.0.mysql.service (focal) have a TimeoutSec=600. This has the effect of killing the MySQL process if this timeout is reached. Very large databases can exceed the 600s timeout, and a safe tradeoff between timing out at some point and waiting long enough to accommodate large/huge databases does not seem to exist.

This issue has been fixed in Debian and in Ubuntu >= Hirsute by disabling the timeout (TimeoutSec=infinity).

[Test Case]

[Where problems could occur]

[Development Fix]

[Stable Fix]

[Original Description]

(unchanged; the original report is quoted in full above)
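The [Impact] section describes the packaging fix as simply disabling the unit timeout. As a rough illustration of what that amounts to on an affected Bionic or Focal system before the SRU lands, here is a minimal sketch of applying the same setting locally through a systemd drop-in; the unit name mysql.service is the default one shipped by the packages, and the drop-in file name is illustrative, not part of the actual packaging change:

```
# Sketch only: override the packaged TimeoutSec=600 with a local drop-in.
sudo mkdir -p /etc/systemd/system/mysql.service.d
printf '[Service]\nTimeoutSec=infinity\n' | \
    sudo tee /etc/systemd/system/mysql.service.d/timeout.conf
sudo systemctl daemon-reload
# TimeoutSec= sets both the start and stop timeouts; confirm it took effect:
systemctl show mysql -p TimeoutStartUSec -p TimeoutStopUSec
```

On Hirsute and later (and in Debian) the packaged unit already ships with the timeout disabled, so such an override is only relevant on the stable releases covered by this SRU.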
2021-08-18 17:21:17 Paride Legovini description changed; new value:

[Impact]

mysql-server-5.7.mysql.service (bionic) and mysql-server-8.0.mysql.service (focal) have a TimeoutSec=600. This has the effect of killing the MySQL process if this timeout is reached. Very large databases can exceed the 600s timeout, and a safe tradeoff between timing out at some point and waiting long enough to accommodate large/huge databases does not seem to exist.

This issue has been fixed in Debian and in Ubuntu >= Hirsute by disabling the timeout (TimeoutSec=infinity).

[Test Plan]

[Where problems could occur]

The TimeoutSec=infinity syntax is supported by the systemd versions in all the supported releases of Ubuntu, so this won't be a problem. The only change in behavior will happen on systems where the timeout is reached and mysql is thus killed. In those cases the database server wouldn't be running anyway, but there could be cases of bad or overgrown databases (e.g. because of a runaway script adding infinite data) where the timeout is doing the right thing, preventing mysql from consuming system resources. In these already broken systems TimeoutSec=infinity may increase the breakage. This won't affect working production systems.

[Development Fix]

[Stable Fix]

The same fix has already landed in Hirsute, Impish and Debian unstable.

[Original Description]

(unchanged; the original report is quoted in full above)
2021-08-18 17:59:19 Paride Legovini description changed; new value:

[Impact]

mysql-server-5.7.mysql.service (bionic) and mysql-server-8.0.mysql.service (focal) have a TimeoutSec=600. This has the effect of killing the MySQL process if this timeout is reached. Very large databases can exceed the 600s timeout, and a safe tradeoff between timing out at some point and waiting long enough to accommodate large/huge databases does not seem to exist.

This issue has been fixed in Debian and in Ubuntu >= Hirsute by disabling the timeout (TimeoutSec=infinity).

[Test Plan]

This is probably the most interesting bit of the SRU. In order to test this for real, as opposed to testing systemd's TimeoutSec, we need to make mysql very slow when it loads its tables. One way is to actually have huge tables, but I have no idea how big they'd need to be. The other way is to slow down access to /var/lib/mysql at the I/O level. Something along these lines:

  apt install mysql-server-8.0
  systemctl stop mysql
  cd /var/lib
  mv mysql mysql.bak
  truncate -s 300M mysql.blk
  losetup --show --find mysql.blk
  dmsetup create slowdev --table \
    "0 $(blockdev --getsz /dev/loopX) delay /dev/loopX 0 100"
  # With /dev/loopX as printed by losetup, 100 = 100ms r/w delay
  # See: https://www.kernel.org/doc/Documentation/device-mapper/delay.txt
  mkfs.ext4 /dev/mapper/slowdev
  mkdir mysql
  mount /dev/mapper/slowdev mysql
  chown mysql:mysql mysql
  cp -av mysql.bak/* mysql
  systemctl start mysql  # slow!

By tuning the delay parameter it is possible to trigger the timeout with the pre-SRU package, and verify that post-SRU it can load in more than 10 minutes.

Note: this can be tested in an LXD VM, but it requires booting the linux-image-generic kernel, as linux-image-kvm doesn't ship the dm delay target. The lxd-agent won't work, just ssh-import-id and ssh in. (I think this is overkill for this SRU, but it's a quite general way to make stuff slow. I'm sure it will come in useful in other cases!)

[Where problems could occur]

The TimeoutSec=infinity syntax is supported by the systemd versions in all the supported releases of Ubuntu, so this won't be a problem. The only change in behavior will happen on systems where the timeout is reached and mysql is thus killed. In those cases the database server wouldn't be running anyway, but there could be cases of bad or overgrown databases (e.g. because of a runaway script adding infinite data) where the timeout is doing the right thing, preventing mysql from consuming system resources. In these already broken systems TimeoutSec=infinity may increase the breakage. This won't affect working production systems.

[Development Fix]

[Stable Fix]

The same fix has already landed in Hirsute, Impish and Debian unstable.

[Original Description]

(unchanged; the original report is quoted in full above)
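The recipe embedded in the Test Plan above uses /dev/loopX as a placeholder for whichever loop device losetup picks. A slightly more self-contained sketch of the same steps, capturing the loop device in a variable, might look like the following; the package name, image size and the 100 ms delay come from the plan, everything else (the -y flag, set -e, the expected timings in comments) is an assumption:

```
#!/bin/sh
# Sketch of the Test Plan: put /var/lib/mysql on a device-mapper "delay"
# device so mysqld takes longer than the unit's TimeoutSec to start.
set -e
apt install -y mysql-server-8.0
systemctl stop mysql
cd /var/lib
mv mysql mysql.bak
truncate -s 300M mysql.blk
LOOPDEV=$(losetup --show --find mysql.blk)   # e.g. /dev/loop3
# device offset 0, 100 ms delay applied to reads and writes; see
# https://www.kernel.org/doc/Documentation/device-mapper/delay.txt
dmsetup create slowdev --table \
    "0 $(blockdev --getsz "$LOOPDEV") delay $LOOPDEV 0 100"
mkfs.ext4 /dev/mapper/slowdev
mkdir mysql
mount /dev/mapper/slowdev mysql
chown mysql:mysql mysql
cp -av mysql.bak/* mysql
time systemctl start mysql   # pre-SRU: expected to hit the 600 s timeout
systemctl status mysql
```

With the pre-SRU package the start should be killed when the timeout is reached; with the post-SRU package the same command should eventually succeed, only slower.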
2021-08-20 10:06:12 Paride Legovini description changed; new value:

[Impact]

mysql-server-5.7.mysql.service (bionic) and mysql-server-8.0.mysql.service (focal) have a TimeoutSec=600. This has the effect of killing the MySQL process if this timeout is reached. Very large databases can exceed the 600s timeout, and a safe tradeoff between timing out at some point and waiting long enough to accommodate large/huge databases does not seem to exist.

This issue has been fixed in Debian and in Ubuntu >= Hirsute by disabling the timeout (TimeoutSec=infinity).

[Test Plan]

This is probably the most interesting bit of the SRU. In order to test this for real, as opposed to testing systemd's TimeoutSec, we need to make mysql very slow when it loads its tables. One way is to actually have huge tables, but I have no idea how big they'd need to be. The other way is to slow down access to /var/lib/mysql at the I/O level. Something along these lines:

  apt install mysql-server-8.0
  systemctl stop mysql
  cd /var/lib
  mv mysql mysql.bak
  truncate -s 300M mysql.blk
  losetup --show --find mysql.blk
  dmsetup create slowdev --table \
    "0 $(blockdev --getsz /dev/loopX) delay /dev/loopX 0 100"
  # With /dev/loopX as printed by losetup, 100 = 100ms r/w delay
  # See: https://www.kernel.org/doc/Documentation/device-mapper/delay.txt
  mkfs.ext4 /dev/mapper/slowdev
  mkdir mysql
  mount /dev/mapper/slowdev mysql
  chown mysql:mysql mysql
  cp -av mysql.bak/* mysql
  time systemctl start mysql  # slow!

By tuning the delay parameter it is possible to trigger the timeout with the pre-SRU package, and verify that post-SRU it can load in more than 10 minutes.

Note: this can be tested in an LXD VM, but it requires booting the linux-image-generic kernel, as linux-image-kvm doesn't ship the dm delay target. The lxd-agent won't work, just ssh-import-id and ssh in. (I think this is overkill for this SRU, but it's a quite general way to make stuff slow. I'm sure it will come in useful in other cases!)

[Where problems could occur]

The TimeoutSec=infinity syntax is supported by the systemd versions in all the supported releases of Ubuntu, so this won't be a problem. The only change in behavior will happen on systems where the timeout is reached and mysql is thus killed. In those cases the database server wouldn't be running anyway, but there could be cases of bad or overgrown databases (e.g. because of a runaway script adding infinite data) where the timeout is doing the right thing, preventing mysql from consuming system resources. In these already broken systems TimeoutSec=infinity may increase the breakage. This won't affect working production systems.

[Development Fix]

[Stable Fix]

The same fix has already landed in Hirsute, Impish and Debian unstable.

[Original Description]

(unchanged; the original report is quoted in full above)
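The test plan leaves the data directory on the artificial slow device. A cleanup sketch to restore the machine afterwards is given below; it is not part of the original plan, so treat the exact steps as an assumption (LOOPDEV refers to whatever loop device losetup created for mysql.blk):

```
# Undo the slow-device setup and put the original datadir back.
systemctl stop mysql
cd /var/lib
umount mysql
dmsetup remove slowdev
losetup -d "$LOOPDEV"
rm mysql.blk
rmdir mysql
mv mysql.bak mysql
systemctl start mysql
```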
2021-08-20 10:07:57 Paride Legovini bug task added mysql-5.7 (Ubuntu)
2021-08-20 10:08:10 Paride Legovini bug task deleted mysql-5.7 (Ubuntu Eoan)
2021-08-20 10:08:18 Paride Legovini bug task deleted mysql-5.7 (Ubuntu Focal)
2021-08-20 10:08:41 Paride Legovini bug task deleted mysql-5.7 (Ubuntu Bionic)
2021-08-20 10:08:47 Paride Legovini mysql-5.7 (Ubuntu): status New Triaged
2021-08-20 10:08:54 Paride Legovini bug task deleted mysql-8.0 (Ubuntu Bionic)
2021-08-20 10:08:59 Paride Legovini mysql-5.7 (Ubuntu): assignee Paride Legovini (paride)
2021-08-20 10:30:03 Paride Legovini description changed; new value:

[Impact]

mysql-server-5.7.mysql.service (bionic) and mysql-server-8.0.mysql.service (focal) have a TimeoutSec=600. This has the effect of killing the MySQL process if this timeout is reached. Very large databases can exceed the 600s timeout, and a safe tradeoff between timing out at some point and waiting long enough to accommodate large/huge databases does not seem to exist.

This issue has been fixed in Debian and in Ubuntu >= Hirsute by disabling the timeout (TimeoutSec=infinity).

[Test Plan]

This is probably the most interesting bit of the SRU :-) In order to test this for real, as opposed to testing systemd's TimeoutSec, we need to make mysql very slow when it loads its tables. One way is to actually have huge tables, but I have no idea how big they'd need to be. The other way is to slow down access to /var/lib/mysql at the I/O level. Something along these lines:

  apt install mysql-server-8.0
  systemctl stop mysql
  cd /var/lib
  mv mysql mysql.bak
  truncate -s 300M mysql.blk
  losetup --show --find mysql.blk
  dmsetup create slowdev --table \
    "0 $(blockdev --getsz /dev/loopX) delay /dev/loopX 0 100"
  # With /dev/loopX as printed by losetup, 100 = 100ms r/w delay
  # See: https://www.kernel.org/doc/Documentation/device-mapper/delay.txt
  mkfs.ext4 /dev/mapper/slowdev
  mkdir mysql
  mount /dev/mapper/slowdev mysql
  chown mysql:mysql mysql
  cp -av mysql.bak/* mysql
  time systemctl start mysql  # slow!
  systemctl status mysql

By tuning the delay parameter it is possible to trigger the timeout with the pre-SRU package, and verify that post-SRU it can load in more than 10 minutes.

Note: this can be tested in an LXD VM, but it requires booting the linux-image-generic kernel, as linux-image-kvm doesn't ship the dm delay target. The lxd-agent won't work, just ssh-import-id and ssh in. (I think this is overkill for this SRU, but it's a quite general way to make stuff slow. I'm sure it will come in useful in other cases!)

[Where problems could occur]

The TimeoutSec=infinity syntax is supported by the systemd versions in all the supported releases of Ubuntu, so this won't be a problem. The only change in behavior will happen on systems where the timeout is reached and mysql is thus killed. In those cases the database server wouldn't be running anyway, but there could be cases of bad or overgrown databases (e.g. because of a runaway script adding infinite data) where the timeout is doing the right thing, preventing mysql from consuming system resources forever. In these already broken systems TimeoutSec=infinity may increase the breakage. This won't affect working production systems.

[Development Fix]

[Stable Fix]

The same fix has already landed in Hirsute, Impish and Debian unstable.

[Original Description]

(unchanged; the original report is quoted in full above)
2021-08-20 10:36:02 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paride/ubuntu/+source/mysql-5.7/+git/mysql-5.7/+merge/407446
2021-08-20 10:36:48 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paride/ubuntu/+source/mysql-8.0/+git/mysql-8.0/+merge/407447
2021-08-24 14:54:39 Robie Basak mysql-5.7 (Ubuntu Bionic): status New Fix Committed
2021-08-24 14:54:40 Robie Basak bug added subscriber Ubuntu Stable Release Updates Team
2021-08-24 14:54:42 Robie Basak bug added subscriber SRU Verification
2021-08-24 14:54:44 Robie Basak tags bitesize focal server-next bitesize focal server-next verification-needed verification-needed-bionic
2021-08-31 09:51:09 Paride Legovini mysql-5.7 (Ubuntu Bionic): assignee Paride Legovini (paride)
2021-08-31 10:15:19 Paride Legovini tags bitesize focal server-next verification-needed verification-needed-bionic bitesize focal server-next verification-done-bionic verification-needed
2021-08-31 21:09:09 Brian Murray mysql-8.0 (Ubuntu Focal): status Triaged Fix Committed
2021-08-31 21:09:15 Brian Murray tags bitesize focal server-next verification-done-bionic verification-needed bitesize focal server-next verification-done-bionic verification-needed verification-needed-focal
2021-09-01 09:02:12 Paride Legovini tags bitesize focal server-next verification-done-bionic verification-needed verification-needed-focal bitesize focal server-next verification-done verification-done-bionic verification-done-focal
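The verification-done-bionic and verification-done-focal tags above correspond to re-checking the timeout behaviour with the -proposed packages. A hedged sketch of what such a check could look like on an upgraded system follows; the unit file path is the one normally used by the Ubuntu packaging, but verify it locally before relying on it:

```
# Confirm which timeout the installed packaging ships and what systemd
# actually applies to the running unit.
apt policy mysql-server-8.0          # mysql-server-5.7 on bionic
grep -n 'TimeoutSec' /lib/systemd/system/mysql.service
systemctl show mysql -p TimeoutStartUSec -p TimeoutStopUSec
# Pre-SRU the unit carries TimeoutSec=600; post-SRU it should report infinity.
```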
2021-09-13 07:32:12 Łukasz Zemczak removed subscriber Ubuntu Stable Release Updates Team
2021-09-13 07:32:44 Launchpad Janitor mysql-5.7 (Ubuntu Bionic): status Fix Committed Fix Released
2021-09-28 20:03:56 Launchpad Janitor mysql-8.0 (Ubuntu Focal): status Fix Committed Fix Released
2021-09-29 15:07:12 Paride Legovini mysql-5.7 (Ubuntu): status Triaged Fix Released