[1.x,2.x] memory leak in python-tx-tftp integration

Bug #1472338 reported by Larry Michel
This bug affects 2 people
Affects          Status    Importance  Assigned to  Milestone
MAAS             Invalid   High        Unassigned
python-tx-tftp   New       Undecided   Unassigned

Bug Description

We've hit a few failed-to-release errors while builds were releasing:

ubuntu@maas-trusty-back-may22:/var/log/maas$ grep -i "releasing to failed" /var/log/maas/maas.log /var/log/maas/maas.log.1
/var/log/maas/maas.log:Jul 7 13:04:58 maas-trusty-back-may22 maas.node: [INFO] hayward-11: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:04:58 maas-trusty-back-may22 maas.node: [INFO] hayward-7: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:04:58 maas-trusty-back-may22 maas.node: [INFO] hayward-36: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:04:58 maas-trusty-back-may22 maas.node: [INFO] hayward-16: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:04:59 maas-trusty-back-may22 maas.node: [INFO] hayward-04: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] noma.local: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] kobusch.local: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] hayward-19: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] hayward-13: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] everitt.local: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] hayward-15: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] hayward-18: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] emerson.local: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:32 maas-trusty-back-may22 maas.node: [INFO] everitt.local: Status transition from RELEASING to FAILED_RELEASING
/var/log/maas/maas.log:Jul 7 13:17:33 maas-trusty-back-may22 maas.node: [INFO] hayward-43: Status transition from RELEASING to FAILED_RELEASING
ubuntu@maas-trusty-back-may22:/var/log/maas$ head -n 1 /var/log/maas/maas.log.1
Jul 6 06:40:41 maas-trusty-back-may22 maas.thread: [INFO] 9193 queue=0, waiters=10, working=0
ubuntu@maas-trusty-back-may22:/var/log/maas$

In the clusterd.log, we see these exceptions:

2015-07-07 13:00:02+0000 [TFTP (UDP)] Datagram received from ('10.245.0.233', 2073): <RRQDatagram(filename=pxelinux.0, mode=octet, options={'tsize': '0'})>
2015-07-07 13:00:02+0000 [-] Unhandled error in Deferred:
2015-07-07 13:00:02+0000 [-] Unhandled Error
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/tftp/protocol.py", line 67, in _startSession
            context, self.backend.get_reader, datagram.filename)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
            return func(*args,**kw)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/twisted.py", line 68, in wrapper
            return maybeDeferred(func, *args, **kwargs)
        --- <exception caught here> ---
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 139, in maybeDeferred
            result = f(*args, **kw)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/pserv_services/tftp.py", line 254, in get_reader
            mac_address = get_remote_mac()
          File "/usr/lib/python2.7/dist-packages/provisioningserver/boot/__init__.py", line 108, in get_remote_mac
            return find_mac_via_arp(remote_host)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/network.py", line 101, in find_mac_via_arp
            output = call_and_check(['ip', 'neigh'], env={'LC_ALL': 'C'})
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/shell.py", line 142, in call_and_check
            process = Popen(command, *args, stdout=PIPE, stderr=PIPE, **kwargs)
          File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
            errread, errwrite)
          File "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child
            self.pid = os.fork()
        exceptions.OSError: [Errno 12] Cannot allocate memory

2015-07-07 13:00:10+0000 [TFTP (UDP)] Datagram received from ('10.245.0.233', 2074): <RRQDatagram(filename=pxelinux.0, mode=octet, options={'tsize': '0'})>
2015-07-07 13:00:10+0000 [-] Unhandled error in Deferred:
2015-07-07 13:00:10+0000 [-] Unhandled Error
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/tftp/protocol.py", line 67, in _startSession
            context, self.backend.get_reader, datagram.filename)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
            return func(*args,**kw)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/twisted.py", line 68, in wrapper
            return maybeDeferred(func, *args, **kwargs)
        --- <exception caught here> ---
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 139, in maybeDeferred
            result = f(*args, **kw)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/pserv_services/tftp.py", line 254, in get_reader
            mac_address = get_remote_mac()
          File "/usr/lib/python2.7/dist-packages/provisioningserver/boot/__init__.py", line 108, in get_remote_mac
            return find_mac_via_arp(remote_host)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/network.py", line 101, in find_mac_via_arp
            output = call_and_check(['ip', 'neigh'], env={'LC_ALL': 'C'})
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/shell.py", line 142, in call_and_check
            process = Popen(command, *args, stdout=PIPE, stderr=PIPE, **kwargs)
          File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
            errread, errwrite)
          File "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child
            self.pid = os.fork()
        exceptions.OSError: [Errno 12] Cannot allocate memory

and a lot more of these starting at 13:07:

2015-07-07 13:06:01+0000 [-] Stopping protocol <tftp.bootstrap.RemoteOriginReadSession instance at 0x7f9188626050>
2015-07-07 13:07:26+0000 [-] Failed to refresh power state.
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 423, in errback
            self._startRunCallbacks(fail)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1155, in gotResult
            _inlineCallbacks(r, g, deferred)
        --- <exception caught here> ---
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1097, in _inlineCallbacks
            result = result.throwExceptionIntoGenerator(g)
          File "/usr/lib/python2.7/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
            return g.throw(self.type, self.value, self.tb)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/rpc/power.py", line 340, in get_power_state
            perform_power_query, system_id, hostname, power_type, context)
          File "/usr/lib/python2.7/dist-packages/twisted/python/threadpool.py", line 191, in _worker
            result = context.call(ctx, function, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
            return func(*args,**kw)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/twisted.py", line 154, in wrapper
            return func(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/rpc/power.py", line 313, in perform_power_query
            return action.execute(power_change='query', **context)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/power/poweraction.py", line 139, in execute
            context = self.update_context(context)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/power/poweraction.py", line 95, in update_context
            ip_address = find_ip_via_arp(mac_address)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/network.py", line 77, in find_ip_via_arp
            output = call_and_check(['arp', '-n']).split('\n')
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/shell.py", line 142, in call_and_check
            process = Popen(command, *args, stdout=PIPE, stderr=PIPE, **kwargs)
          File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
            errread, errwrite)
          File "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child
            self.pid = os.fork()
        exceptions.OSError: [Errno 12] Cannot allocate memory

2015-07-07 13:07:26+0000 [-] Failed to refresh power state.
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 423, in errback
            self._startRunCallbacks(fail)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1155, in gotResult
            _inlineCallbacks(r, g, deferred)
        --- <exception caught here> ---
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1097, in _inlineCallbacks
            result = result.throwExceptionIntoGenerator(g)
          File "/usr/lib/python2.7/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
            return g.throw(self.type, self.value, self.tb)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/rpc/power.py", line 340, in get_power_state
            perform_power_query, system_id, hostname, power_type, context)
          File "/usr/lib/python2.7/dist-packages/twisted/python/threadpool.py", line 191, in _worker
            result = context.call(ctx, function, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
            return func(*args,**kw)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/twisted.py", line 154, in wrapper
            return func(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/rpc/power.py", line 313, in perform_power_query
            return action.execute(power_change='query', **context)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/power/poweraction.py", line 139, in execute
            context = self.update_context(context)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/power/poweraction.py", line 95, in update_context
            ip_address = find_ip_via_arp(mac_address)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/network.py", line 77, in find_ip_via_arp
            output = call_and_check(['arp', '-n']).split('\n')
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/shell.py", line 142, in call_and_check
            process = Popen(command, *args, stdout=PIPE, stderr=PIPE, **kwargs)
          File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
            errread, errwrite)
          File "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child
            self.pid = os.fork()
        exceptions.OSError: [Errno 12] Cannot allocate memory

This is the pmap -d output for maas processes:

https://pastebin.canonical.com/134758/

Tags: oil
Revision history for this message
Larry Michel (lmic) wrote :
Revision history for this message
Larry Michel (lmic) wrote :

We are recreating this on a weekly basis, it seems. We had another occurrence yesterday; the previous one was the week before.

Revision history for this message
Larry Michel (lmic) wrote :

Every time we've recreated this, memory was maxed out and we've had to restart clusterd and regiond.

Revision history for this message
Larry Michel (lmic) wrote :

Issue recreated today with 1.8.1. The failure to check power state persists after restarting the services, and even after restarting the server itself.

Revision history for this message
Larry Michel (lmic) wrote :

In the last couple of recreates I checked the wrong setting for memory usage: I should have been looking on the host at the container's accounting under /sys/fs/cgroup/memory/ rather than using memory stat commands like free. I now have a process running that checks the container's memory usage every minute, so the next time the refresh-power-state errors are recreated we can get a better idea of the memory usage pattern and what direct correlation, if any, exists.

This is what the data would look like:

Monitoring memory usage for maas container...
Recording memory usage in bytes at Sat Aug 22 00:42:30 UTC 2015
5802795008
Recording memory usage in bytes at Sat Aug 22 00:43:30 UTC 2015
5964120064
Recording memory usage in bytes at Sat Aug 22 00:44:30 UTC 2015
5921673216
Recording memory usage in bytes at Sat Aug 22 00:45:30 UTC 2015
5943955456
Recording memory usage in bytes at Sat Aug 22 00:46:30 UTC 2015
5792681984
Recording memory usage in bytes at Sat Aug 22 00:47:30 UTC 2015
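
For reference, a minimal sketch of that per-minute monitor, written against cgroup v1; the exact cgroup path (a hypothetical lxc/maas container here) and the 60-second interval are assumptions:

    import time
    from datetime import datetime

    # Path is an assumption; point it at the real container's cgroup.
    CGROUP_USAGE = "/sys/fs/cgroup/memory/lxc/maas/memory.usage_in_bytes"

    def read_usage(path=CGROUP_USAGE):
        # Return the container's current memory usage, in bytes.
        with open(path) as f:
            return int(f.read().strip())

    if __name__ == "__main__":
        print("Monitoring memory usage for maas container...")
        while True:
            print("Recording memory usage in bytes at %s"
                  % datetime.utcnow().strftime("%a %b %d %H:%M:%S UTC %Y"))
            print(read_usage())
            time.sleep(60)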

Changed in maas:
status: New → Confirmed
importance: Undecided → Critical
milestone: none → 1.9.0
Changed in maas:
assignee: nobody → Gavin Panella (allenap)
Gavin Panella (allenap)
Changed in maas:
status: Confirmed → Triaged
Gavin Panella (allenap)
Changed in maas:
assignee: Gavin Panella (allenap) → nobody
Revision history for this message
Jason Hobbs (jason-hobbs) wrote :

Next time this is hit we need to capture all the logs, and "sudo lsof" output as well.

Revision history for this message
Mike Pontillo (mpontillo) wrote :

We may want to consider adding support for memory profiling in MAAS in order to solve this. (For example, we could use something like python-objgraph, similar to what is described here: http://www.huyng.com/posts/python-performance-analysis/)
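
For illustration, a rough sketch of what that could look like with python-objgraph (show_most_common_types and show_growth are objgraph's own helpers; where such a hook would live inside the MAAS daemons is left open):

    import objgraph

    def dump_object_stats(limit=20):
        # Most common live object types in this process right now.
        objgraph.show_most_common_types(limit=limit)
        # Only the types whose counts grew since the previous call,
        # which is the interesting signal when hunting a leak.
        objgraph.show_growth(limit=limit)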

Revision history for this message
Larry Michel (lmic) wrote :

Here's "sudo lsof" output for today's recreate: https://pastebin.canonical.com/141135/

Logs will follow.

Revision history for this message
Larry Michel (lmic) wrote :

Logs including pmap and lsof output.

Revision history for this message
Blake Rouse (blake-rouse) wrote :

I've been looking into this; the pmap output shows the following for clusterd:

348: /usr/bin/python /usr/bin/twistd --nodaemon --uid=maas --gid=maas --pidfile= maas-clusterd --config-file=/etc/maas/pserv.yaml
Address Kbytes Mode Offset Device Mapping
0000000000400000 2800 r-x-- 0000000000000000 009:00000 python2.7
00000000008bb000 4 r---- 00000000002bb000 009:00000 python2.7
00000000008bc000 468 rw--- 00000000002bc000 009:00000 python2.7
0000000000931000 72 rw--- 0000000000000000 000:00000 [ anon ]
00000000020a3000 1676184 rw--- 0000000000000000 000:00000 [ anon ]

You can see that the last line in that output is an anonymous mapping using about 1.6 GB of memory. As for what that is, I don't know.

Revision history for this message
Larry Michel (lmic) wrote :

We had another recreate, but I forgot to capture sudo lsof output prior to restarting the container; here are the logs.

Revision history for this message
Larry Michel (lmic) wrote :

Logs for the latest recreate, including MAAS logs and lsof output:

all-threads.txt
clusterd.log
maas.log
oom-error.log
regiond.log

Revision history for this message
Mike Pontillo (mpontillo) wrote :

The next time this happens, can you send SIGUSR2 to each Python-based MAAS process? This should cause Python thread dumps to be printed to the logs.
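
For context, a generic sketch of the mechanism behind such a dump (this is not the MAAS handler itself, just an illustration of how a SIGUSR2 thread dump can be wired up; twistd sends stderr to the log):

    import signal
    import sys
    import traceback

    def dump_threads(signum, frame):
        # Write every thread's current stack to stderr.
        for thread_id, stack in sys._current_frames().items():
            sys.stderr.write("Thread %s:\n" % thread_id)
            traceback.print_stack(stack, file=sys.stderr)

    signal.signal(signal.SIGUSR2, dump_threads)

Triggering it is then just "kill -USR2 <pid>" against the twistd process.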

Revision history for this message
Mike Pontillo (mpontillo) wrote :

I talked to Jason earlier today and was able to triage the problem in this environment. So far the most interesting thing I've found is:

top - 00:18:16 up 209 days, 2:23, 6 users, load average: 1.79, 1.41, 1.49
Tasks: 65 total, 1 running, 64 sleeping, 0 stopped, 0 zombie
%Cpu(s): 5.0 us, 1.4 sy, 4.4 ni, 88.6 id, 0.5 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 8132784 total, 7536812 used, 595972 free, 44424 buffers
KiB Swap: 1998780 total, 1998780 used, 0 free. 1377860 cached Mem

  PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
  319 maas 32 12 2629420 1.347g 2452 S 16.6 17.4 850:58.16 twistd
  436 maas 32 12 3154524 205504 4352 S 13.0 2.5 364:39.89 twistd
  372 maas 32 12 4520152 1.445g 4200 S 1.0 18.6 1026:51 twistd
  418 maas 32 12 5226772 0.984g 4236 S 0.7 12.7 608:09.07 twistd
  398 maas 32 12 2782140 210808 3952 S 0.3 2.6 167:29.80 twistd

I sent a USR2 signal to this process, and found that it was stuck in the DHCP lease parser:

http://paste.ubuntu.com/13684628/

I'm continuing to triage.

Revision history for this message
Mike Pontillo (mpontillo) wrote :

That may be a red herring; whichever process runs the lease parser ends up taking 99% CPU for a while. (Your lease file is very large.)

Revision history for this message
Mike Pontillo (mpontillo) wrote :

This StackOverflow answer seems to match what we are seeing; as always, there is no simple answer:

    http://stackoverflow.com/a/13329386/77939

One proposed solution was to fork() subprocesses early, before the parent process starts using extreme amounts of memory. For example, a daemon forked early on can be made responsible for running subprocesses:

    https://github.com/greyside/errand-boy
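
As a sketch of the same idea under simple assumptions (a duplex multiprocessing pipe and a helper forked at start-up; errand-boy above is a production-grade version of this):

    import os
    import subprocess
    from multiprocessing import Pipe

    def start_runner():
        # Fork the helper while this process is still small, so the copied
        # address space stays cheap no matter how large the parent grows later.
        parent_conn, child_conn = Pipe()
        if os.fork() == 0:
            # Child: loop forever, running commands the parent sends over the pipe.
            parent_conn.close()
            while True:
                try:
                    command = child_conn.recv()   # e.g. ['ip', 'neigh']
                except EOFError:
                    os._exit(0)
                proc = subprocess.Popen(
                    command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                out, err = proc.communicate()
                child_conn.send((proc.returncode, out, err))
        child_conn.close()
        return parent_conn

    # Usage in the parent, instead of calling Popen directly:
    #     runner = start_runner()            # at daemon start-up
    #     runner.send(['ip', 'neigh'])
    #     returncode, out, err = runner.recv()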

Revision history for this message
Mike Pontillo (mpontillo) wrote :

I think I have narrowed this problem down to a bug in [the interaction with?] python-tx-tftp.

In the memory dump, there are over 20,000 occurrences of the string:

    "/var/lib/maas/boot-resources/current/pxelinux.0"

It's always present in a dictionary with an "alwaysCreate" parameter. The alwaysCreate parameter is only present in our stack in Twisted's FilePath object, which is used (in conjunction with pxelinux.0) in python-tx-tftp:

https://github.com/shylent/python-tx-tftp/blob/a4d1790a9a46231411fc25cde69f9057999cd115/tftp/backend.py

Looking at the object dump, there are 43,296 instances of a FilesystemReader object.

Here's a list of all the object types in the memory dump and their counts.

http://paste.ubuntu.com/13765380/

Revision history for this message
Jason Hobbs (jason-hobbs) wrote :

Cool. It would be interesting to see whether there is a leak that's reproducible by dumping those object counts after TFTP interactions: check object counts, deploy a node, check object counts again, and see if they increase. If so, it should be possible to debug this issue outside of OIL.
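
A minimal sketch of that experiment, using only the gc module (the class names and the surrounding harness are placeholders):

    import gc
    from collections import Counter

    def count_instances():
        # Count live objects by class name, e.g. 'FilesystemReader'.
        return Counter(type(obj).__name__ for obj in gc.get_objects())

    before = count_instances()
    # ... drive some TFTP transfers here, e.g. deploy a node ...
    after = count_instances()
    for name, grown in (after - before).most_common(10):
        print("%s grew by %d" % (name, grown))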

Revision history for this message
Larry Michel (lmic) wrote :

We had a recreate this morning. Logs attached.

Revision history for this message
Larry Michel (lmic) wrote :

Looks like another recreate:

ubuntu@maas-trusty-aug-2015:~$ grep memory /var/log/maas/maas.log
Jan 7 04:33:37 maas-trusty-aug-2015 maas.service_monitor: [ERROR] While monitoring service 'maas-dhcpd6' an error was encountered: [Errno 12] Cannot allocate memory
Jan 7 04:33:37 maas-trusty-aug-2015 maas.service_monitor: [ERROR] While monitoring service 'tgt' an error was encountered: [Errno 12] Cannot allocate memory

and in clusterd.log, a bunch of these:

2016-01-06 20:34:38+0000 [-] hayward-54: Power could not be turned off.
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 423, in errback
            self._startRunCallbacks(fail)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1155, in gotResult
            _inlineCallbacks(r, g, deferred)
        --- <exception caught here> ---
          File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1097, in _inlineCallbacks
            result = result.throwExceptionIntoGenerator(g)
          File "/usr/lib/python2.7/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
            return g.throw(self.type, self.value, self.tb)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/rpc/power.py", line 273, in change_power_state
            'query', context)
          File "/usr/lib/python2.7/dist-packages/twisted/python/threadpool.py", line 191, in _worker
            result = context.call(ctx, function, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
            return func(*args,**kw)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/utils/twisted.py", line 154, in wrapper
            return func(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/rpc/power.py", line 126, in perform_power_change
            return action.execute(power_change=power_change, **context)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/power/poweraction.py", line 142, in execute
            return self.run_shell(rendered)
          File "/usr/lib/python2.7/dist-packages/provisioningserver/power/poweraction.py", line 120, in run_shell
            stderr=subprocess.STDOUT, close_fds=True)
          File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
            errread, errwrite)
          File "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child
            self.pid = os.fork()
        exceptions.OSError: [Errno 12] Cannot allocate memory

Christian Reis (kiko)
Changed in maas:
importance: Critical → High
milestone: 1.9.0 → 1.9.1
Changed in maas:
milestone: 1.9.1 → 1.9.2
Changed in maas:
milestone: 1.9.2 → 1.9.3
Changed in maas:
milestone: 1.9.3 → 1.9.4
Changed in maas:
milestone: 1.9.4 → 1.9.5
summary: - Failed to refresh power state and unhandled errors -
- exceptions.OSError: [Errno 12] Cannot allocate memory
+ [1.x,2.x] memory leak in python-tx-tftp integration
Revision history for this message
Andres Rodriguez (andreserl) wrote :

We believe this is no longer an issue in the latest releases of MAAS. Please upgrade to the latest version of MAAS. If you believe this issue is still present, please re-open this bug report or file a new one.

Changed in maas:
status: Triaged → Invalid