uprecords reports >100% uptime

Bug #498439 reported by Robert C. Sheets
This bug affects 10 people
Affects            Status        Importance  Assigned to  Milestone
uptimed (Debian)   Fix Released  Unknown
uptimed (Ubuntu)   Confirmed     Undecided   Unassigned

Bug Description

Binary package hint: uptimed

On one of my servers, I get the following uprecords output:

     # Uptime | System Boot up
----------------------------+---------------------------------------------------
     1 26 days, 20:32:54 | Linux 2.6.31-14-server Fri Nov 13 04:51:43 2009
     2 20 days, 07:49:13 | Linux 2.6.31-14-server Fri Oct 23 22:00:10 2009
     3 6 days, 01:06:15 | Linux 2.6.31-16-server Thu Dec 10 01:20:55 2009
-> 4 2 days, 21:07:21 | Linux 2.6.31-16-server Wed Dec 16 02:26:44 2009
----------------------------+---------------------------------------------------
1up in 3 days, 03:58:55 | at Tue Dec 22 03:32:58 2009
no1 in 23 days, 23:25:34 | at Mon Jan 11 22:59:37 2010
    up 56 days, 02:35:43 | since Fri Oct 23 22:00:10 2009
  down 0 days, 00:-01:-48 | since Fri Oct 23 22:00:10 2009
   %up 100.002 | since Fri Oct 23 22:00:10 2009

The last two lines are the problem: uptimed thinks it has been down for a negative amount of time and reports an uptime percentage over 100%, which is clearly wrong. I have no idea why. I will happily provide whatever other information may be helpful.
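
For what it's worth, if uptimed computes %up as up / (up + down) * 100 (an assumption on my part, but it matches the numbers above), the negative downtime fully explains the figure over 100%:

up   = 56 days, 02:35:43 = 4,847,743 s
down =         -0:01:48  =      -108 s
%up  = 4,847,743 / (4,847,743 - 108) * 100 ≈ 100.002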

'lsb_release -rd' outputs this:
Description: Ubuntu 9.10
Release: 9.10

'apt-cache policy uptimed' outputs this:
uptimed:
  Installed: 1:0.3.16-3
  Candidate: 1:0.3.16-3
  Version table:
 *** 1:0.3.16-3 0
        500 http://us.archive.ubuntu.com karmic/universe Packages
        100 /var/lib/dpkg/status

shawnlandden (shawnlandden) wrote :

What's going on here is a difference between the monotonic time reported by /proc/uptime and the hardware clock, which is behind some of the other numbers (not sure of the specifics). I think this just means your monotonic (CPU-based) clock is running a little fast compared to the RTC (battery-backed clock). What you are experiencing is minor compared to Debian bug ?, which is that if ntpdate sets the clock back, you end up with duplicated entries in uprecords, and uptime can go way up because of these double records.
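
A minimal sketch of the mismatch being described (not uptimed's actual code; the 60-second offset is a made-up example of monotonic drift):

#include <stdio.h>
#include <time.h>

int main(void)
{
    double uptime = 0.0;                        /* monotonic seconds up */
    FILE *f = fopen("/proc/uptime", "r");
    if (!f || fscanf(f, "%lf", &uptime) != 1) {
        perror("/proc/uptime");
        return 1;
    }
    fclose(f);

    /* In uptimed this would be the boot timestamp stored in its records
       (wall clock, RTC/ntp-adjusted); here it is shifted by a hypothetical
       60 s to mimic a monotonic clock that has gained a minute on the RTC. */
    time_t recorded_boot = time(NULL) - (time_t)uptime + 60;

    double elapsed  = difftime(time(NULL), recorded_boot);   /* wall clock */
    double downtime = elapsed - uptime;   /* negative when the clocks drift */

    printf("elapsed %.0f s, uptime %.0f s, downtime %.0f s\n",
           elapsed, uptime, downtime);
    return 0;
}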

shawnlandden (shawnlandden) wrote :

The way towards fixing this bug would involve

struct timespec monotonictime;
clock_gettime(CLOCK_MONOTONIC_RAW, &monotonictime);

which is Linux 2.6.28+ specific and is not subject to change by ntpdate and friends.

See man 2 clock_gettime.

I'm not sure we should just replace sysinfo->uptime, however, because of the inaccuracy of hardware clocks...
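
For reference, a small sketch of reading the two values side by side on a Linux 2.6.28+ system (a comparison only, not a proposed patch):

#include <stdio.h>
#include <time.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    struct timespec raw;

    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }
    if (clock_gettime(CLOCK_MONOTONIC_RAW, &raw) != 0) {
        perror("clock_gettime");
        return 1;
    }

    /* sysinfo()->uptime is what the comment above refers to;
       CLOCK_MONOTONIC_RAW is not slewed by ntpd/adjtime, so comparing
       the two gives a rough idea of how much adjustment has accumulated. */
    printf("sysinfo uptime:      %ld s\n", (long)si.uptime);
    printf("CLOCK_MONOTONIC_RAW: %ld.%09ld s\n",
           (long)raw.tv_sec, raw.tv_nsec);
    return 0;
}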

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in uptimed (Ubuntu):
status: New → Confirmed
Pieter (diepes) wrote :

I have the same problem, running on a KVM VM.
Is there no easy way to just zero negative values?

Do I understand the problem correctly: by the time the server reboots, the clock has moved forward; during the reboot it is reset (moving back in time); and when the server comes back up, it logs a negative downtime?
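
A sketch of the "zero negative values" idea, using the numbers from the original report (hypothetical helper, not code from uptimed itself):

#include <stdio.h>

/* Hypothetical helper, not code from uptimed itself. */
static double clamp_downtime(double downtime_seconds)
{
    return downtime_seconds < 0.0 ? 0.0 : downtime_seconds;
}

int main(void)
{
    double up   = 4847743.0;   /* 56 days, 02:35:43 from the original report */
    double down = -108.0;      /* the negative "down" value from the report  */

    down = clamp_downtime(down);                       /* clamp before %up   */
    printf("%%up = %.3f\n", 100.0 * up / (up + down)); /* 100.000, not more  */
    return 0;
}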

root:~# uprecords
     # Uptime | System Boot up
----------------------------+---------------------------------------------------
-> 1 12 days, 21:29:37 | Linux 3.13.0-24-generic Tue May 6 10:04:07 2014
     2 12 days, 21:21:05 | Linux 3.13.0-24-generic Tue May 6 10:03:42 2014
     3 9 days, 12:39:58 | Linux 3.13.0-23-generic Wed Apr 9 12:44:30 2014
     4 8 days, 21:50:07 | Linux 3.13.0-23-generic Sat Apr 19 01:25:08 2014
     5 6 days, 14:53:29 | Linux 3.13.0-20-generic Wed Apr 2 21:50:35 2014
     6 5 days, 07:00:40 | Linux 3.13.0-24-generic Sun Apr 27 23:16:57 2014
     7 3 days, 03:32:53 | Linux 3.13.0-24-generic Sat May 3 06:29:28 2014
----------------------------+---------------------------------------------------
NewRec 0 days, 00:08:31 | since Mon May 19 07:25:11 2014
    up 59 days, 06:47:49 | since Wed Apr 2 21:50:35 2014
  down -12 days, -21:-04: | since Wed Apr 2 21:50:35 2014
   %up 127.752 | since Wed Apr 2 21:50:35 2014
root:~#

root:~# apt-cache policy uptimed
uptimed:
  Installed: 1:0.3.17-4
  Candidate: 1:0.3.17-4
  Version table:
 *** 1:0.3.17-4 0

root:~# lsb_release -rd
Description: Ubuntu 14.04 LTS
Release: 14.04

RS (ro2ert) wrote :

I have the same issue on a Raspberry Pi:

pi@raspberrypi:~$ uprecords
     # Uptime | System Boot up
----------------------------+---------------------------------------------------
     1 403 days, 04:46:21 | Linux 3.12.31+ Thu Nov 6 09:15:54 2014
-> 2 0 days, 18:30:41 | Linux 4.1.7+ Mon Dec 14 13:16:53 2015
----------------------------+---------------------------------------------------
no1 in 402 days, 10:15:41 | at Fri Jan 20 18:03:13 2017
    up 403 days, 23:17:02 | since Thu Nov 6 09:15:54 2014
  down 0 days, 00:-45:-22 | since Thu Nov 6 09:15:54 2014
   %up 100.008 | since Thu Nov 6 09:15:54 2014

Changed in uptimed (Debian):
status: Unknown → Confirmed
no!chance (ralf-fehlau) wrote :

This bug is present in the current release of Ubuntu:

     # Uptime | System Boot up
----------------------------+---------------------------------------------------
     1 12 days, 18:05:05 | Linux 4.4.0-92-generic Wed Aug 16 15:41:17 2017
     2 2 days, 23:32:00 | Linux 4.4.0-91-generic Sun Aug 13 16:07:14 2017
-> 3 0 days, 00:49:53 | Linux 4.4.0-93-generic Tue Aug 29 09:47:24 2017
     4 0 days, 00:40:13 | Linux 4.4.0-93-generic Tue Aug 29 09:47:25 2017
----------------------------+---------------------------------------------------
1up in 2 days, 22:42:08 | at Fri Sep 1 09:19:24 2017
no1 in 12 days, 17:15:13 | at Mon Sep 11 03:52:29 2017
    up 15 days, 19:07:11 | since Sun Aug 13 16:07:14 2017
  down 0 days, 00:-46:-47 | since Sun Aug 13 16:07:14 2017
   %up 100.206 | since Sun Aug 13 16:07:14 2017

Johan Andersson (meffe) wrote :

I've also encountered this bug in the current release of CentOS 7.4.1708 (3.10.0-693.11.1.el7.x86_64).
I'm running uptimed on a few other machines as well (Ubuntu 17.10 4.13.0-16-generic & Mac OS X 10.11.6 [15G17023] 15.6.0), and I haven't had any problems with negative uptime reports from those.
This problem must be something that doesn't always show up, since it hasn't been fixed yet. I believe something else is interacting when it shouldn't, something that isn't part of a default system. I hope we, or someone, can get to the bottom of this problem.

Thank you.

     # Uptime | System Boot up
----------------------------+---------------------------------------------------
     1 62 days, 04:57:20 | Linux 3.10.0-693.2.2.el7 Sat Sep 30 10:45:46 2017
     2 10 days, 21:01:42 | Linux 3.10.0-693.5.2.el7 Fri Dec 1 14:41:04 2017
-> 3 4 days, 00:17:23 | Linux 3.10.0-693.11.1.el Tue Dec 12 11:45:33 2017
     4 4 days, 00:17:11 | Linux 3.10.0-693.11.1.el Tue Dec 12 11:42:55 2017
----------------------------+---------------------------------------------------
1up in 6 days, 20:44:20 | at Sat Dec 23 08:47:15 2017
no1 in 58 days, 04:39:58 | at Mon Feb 12 16:42:53 2018
    up 81 days, 02:33:36 | since Sat Sep 30 10:45:46 2017
  down -4 days, 00:-16:-26 | since Sat Sep 30 10:45:46 2017
   %up 105.203 | since Sat Sep 30 10:45:46 2017

Changed in uptimed (Debian):
status: Confirmed → Fix Released