process start times offset by host uptime

Bug #1699010 reported by Paul Collins
This bug affects 1 person
Affects       Status        Importance  Assigned to  Milestone
lxd (Ubuntu)  Fix Released  Undecided   Unassigned

Bug Description

I noticed that process start times in my LXD containers appear to be offset by the host's uptime at the time the container was booted. For example:

[agnew(~)] uptime
 16:18:12 up 5 days, 6:15, 15 users, load average: 0.29, 0.36, 0.51
[agnew(mojo-vault)] lxc launch ubuntu:16.04
Creating prime-shiner
Starting prime-shiner
[agnew(~)] lxc exec prime-shiner /bin/su -
mesg: ttyname failed: Success
root@prime-shiner:~# ps -o lstart 1
                 STARTED
Sun Jun 25 10:34:52 2017
root@prime-shiner:~# date
Tue Jun 20 04:18:56 UTC 2017
root@prime-shiner:~# date -d 'now + 5 days 6 hours 15 minutes'
Sun Jun 25 10:34:27 UTC 2017
root@prime-shiner:~# _

And in a container that was rebooted yesterday:

[agnew(~)] lxc exec openstack /bin/su -
mesg: ttyname failed: Success
root@openstack:~# date
Tue Jun 20 04:22:42 UTC 2017
root@openstack:~# ps -o lstart 1
                 STARTED
Fri Jun 23 06:09:34 2017
root@openstack:~# ps -o lstart $$
                 STARTED
Sat Jun 24 08:26:11 2017
root@openstack:~# _

Version information:

[agnew(~)] lsb_release -rc
Release: 17.04
Codename: zesty
[agnew(~)] uname -a
Linux agnew 4.10.0-22-generic #24-Ubuntu SMP Mon May 22 17:43:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[agnew(~)] dpkg-query -W linux-image-$(uname -r) lxd
linux-image-4.10.0-22-generic 4.10.0-22.24
lxd 2.12-0ubuntu3

Revision history for this message
Paul Collins (pjdc) wrote :

My kernel above is downrev, but this also happens with the current zesty kernel:

[agnew(~)] uname -a
Linux agnew 4.10.0-24-generic #28-Ubuntu SMP Wed Jun 14 08:14:34 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[agnew(~)] uptime
 16:47:57 up 5 min, 3 users, load average: 0.46, 1.13, 0.63
[agnew(~)] lxc exec openstack /bin/su -
mesg: ttyname failed: Success
root@openstack:~# date ; ps -o lstart $$
Tue Jun 20 04:48:00 UTC 2017
                 STARTED
Tue Jun 20 04:48:45 2017
root@openstack:~# logout

The container was started on boot, so 45 seconds sounds about right.

And then if I restart the container, the offset becomes larger, tracking the host's uptime at the time of the restart:

[agnew(~)] lxc restart openstack
[agnew(~)] lxc exec openstack /bin/su -
mesg: ttyname failed: Success
root@openstack:~# date ; ps -o lstart $$
Tue Jun 20 04:48:43 UTC 2017
                 STARTED
Tue Jun 20 04:54:46 2017
root@openstack:~# logout

Revision history for this message
Christian Brauner (cbrauner) wrote : Re: [Bug 1699010] Re: process start times offset by host uptime

Yeah, we are aware of the issue. This is caused by virtualizing the "btime" field in /proc/stat: ps adds a process's start time (recorded in /proc/<pid>/stat as clock ticks since the host booted) to btime to compute an absolute start date, and the virtualized btime reflects the container's start rather than the host's boot, so the result is shifted by the host's uptime at the time the container started. Since we don't virtualize each process's start time, as that would effectively mean virtualizing proc, we have decided to revert the "btime" virtualization patch for now. We'll likely include this in the next round of SRUs.
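
As an illustration of what ps is doing here, the following sketch (commands assumed for illustration, not taken from this report) reproduces the lstart calculation by hand: read "btime" from /proc/stat (boot time in seconds since the epoch) and add the shell's start time from field 22 of /proc/$$/stat (clock ticks since boot).

btime=$(awk '/^btime/ {print $2}' /proc/stat)          # boot time, seconds since the epoch
ticks=$(awk '{print $22}' /proc/$$/stat)               # this shell's start, in ticks since boot (assumes the comm field has no spaces)
date -d "@$(( btime + ticks / $(getconf CLK_TCK) ))"   # should match: ps -o lstart $$

Run inside an affected container, the result shows the same future date as ps, because the virtualized btime reflects the container's start while the per-process tick counts are still relative to the host's boot.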

Revision history for this message
Stéphane Graber (stgraber) wrote :

An updated lxcfs was pushed to artful, and the fix will also be included in the next LXCFS stable release.
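
For anyone trying to line this up on their own systems, a quick check (commands assumed for illustration, not taken from this report) is to look at the installed lxcfs version on the host and confirm that /proc/stat inside the container is being served by lxcfs at all; the version is what matters for the fix, the mount check only confirms lxcfs is in the path.

dpkg-query -W lxcfs              # on the host: installed lxcfs version
grep fuse.lxcfs /proc/mounts     # inside the container: /proc entries (including /proc/stat) provided by lxcfs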

Changed in lxd (Ubuntu):
status: New → Fix Released
Revision history for this message
Adrien Cunin (adri2000) wrote :

Isn't this more of an LXC/LXCFS bug?

I think I've hit this in 16.04; what is the plan regarding an SRU?

Revision history for this message
Adrien Cunin (adri2000) wrote :