~76 kB memory leak per client instance

Bug #1419620 reported by Michi Henning
Affects                   Status         Importance   Assigned to   Milestone
Canonical System Image    Fix Released   High         Unassigned
net-cpp                   Fix Released   Critical     Thomas Voß
net-cpp (Ubuntu)          Fix Released   Undecided    Unassigned

Bug Description

net-cpp leaks roughly 76 kB of memory for each HTTP client instance it creates. To reproduce:

#include <core/net/http/client.h>

int main()
{
    // Create a client and immediately discard it; the ~76 kB it
    // allocated is never freed (see the valgrind summary below).
    core::net::http::make_client();
}

==22658==
==22658== HEAP SUMMARY:
==22658== in use at exit: 78,572 bytes in 1,040 blocks
==22658== total heap usage: 13,600 allocs, 12,560 frees, 446,676 bytes allocated
==22658==
...
==22658== LEAK SUMMARY:
==22658== definitely lost: 512 bytes in 1 blocks
==22658== indirectly lost: 77,996 bytes in 1,037 blocks
==22658== possibly lost: 0 bytes in 0 blocks
==22658== still reachable: 64 bytes in 2 blocks

Dropping a bit of trace into the library shows that the Handle::Private struct is never destroyed:

multi::Handle::Private::Private()
    : handle(multi::native::init()),
      keep_alive(dispatcher),
      timeout(dispatcher)
{
    std::cerr << "initializing" << std::endl;
}

multi::Handle::Private::~Private()
{
    std::cerr << "cleaning up" << std::endl;
    multi::native::cleanup(handle);
}

This prints "initializing", but doesn't print "cleaning up".

Checking the use count of the multi::Handle member of http::Client in the Client destructor shows that it is still 2 when the Client goes out of scope; something other than the Client is holding a second reference.

The leak is caused here:

multi::Handle::Handle() : d(new Private())
{
    auto holder = new Holder<Private>{d};

    set_option(Option::socket_function, Private::socket_callback);
    set_option(Option::socket_data, holder);
    set_option(Option::timer_function, Private::timer_callback);
    set_option(Option::timer_data, holder);

    set_option(Option::pipelining, easy::enable);
}

The Holder allocated here is never deleted, and its shared_ptr member is a second reference to the Private. That accounts for the use count of 2 and explains why ~Private() never runs.
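
To make the mechanism concrete, here is a self-contained sketch (not net-cpp code; Private and Holder merely mirror the snippet above) of the same double-ownership pattern: a heap-allocated Holder keeps a second shared_ptr to the Private, so the use count never drops to zero and the destructor never runs.

#include <iostream>
#include <memory>

struct Private
{
    ~Private() { std::cerr << "cleaning up" << std::endl; }
};

template<typename T>
struct Holder
{
    std::shared_ptr<T> value;
};

int main()
{
    std::shared_ptr<Private> d{new Private()};
    auto holder = new Holder<Private>{d};    // second owner, never deleted

    std::cerr << d.use_count() << std::endl; // prints 2, matching the report
}   // d releases its reference, but holder's copy keeps Private alive: leaked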

Changed in canonical-devices-system-image:
importance: Undecided → High
milestone: none → ww13-ota
status: New → In Progress
Changed in net-cpp:
importance: Undecided → Critical
assignee: nobody → Thomas Voß (thomas-voss)
status: New → In Progress
Thomas Voß (thomas-voss) wrote :

With the merge proposal (MP) attached to this bug, the leaks are gone, except for a spurious one on shutdown that can be attributed to CURL. Quoting from the original MP discussion:

Running the load test in a loop, after maybe a hundred iterations or so, I got a valgrind complaint:

==29504==
==29504== HEAP SUMMARY:
==29504== in use at exit: 161,360 bytes in 1,769 blocks
==29504== total heap usage: 325,601 allocs, 323,832 frees, 54,917,164 bytes allocated
==29504==
==29504== 161,296 (73 direct, 161,223 indirect) bytes in 1 blocks are definitely lost in loss record 64 of 64
==29504== at 0x4C2B100: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==29504== by 0x4E8A699: boost::asio::detail::thread_info_base::allocate(boost::asio::detail::thread_info_base*, unsigned long) (thread_info_base.hpp:60)
==29504== by 0x4E8A7A8: boost::asio::asio_handler_allocate(unsigned long, ...) (handler_alloc_hook.ipp:50)
==29504== by 0x4E883DE: void* boost_asio_handler_alloc_helpers::allocate<curl::multi::Handle::Private::Timeout::Private::async_wait_for(std::weak_ptr<curl::multi::Handle::Private> const&, std::chrono::duration<long, std::ratio<1l, 1000l> > const&)::{lambda(boost::system::error_code const&)#1}>(unsigned long, curl::multi::Handle::Private::Timeout::Private::async_wait_for(std::weak_ptr<curl::multi::Handle::Private> const&, std::chrono::duration<long, std::ratio<1l, 1000l> > const&)::{lambda(boost::system::error_code const&)#1}&) (handler_alloc_helpers.hpp:37)
==29504== by 0x4E87F56: void boost::asio::detail::deadline_timer_service<boost::asio::time_traits<boost::posix_time::ptime> >::async_wait<curl::multi::Handle::Private::Timeout::Private::async_wait_for(std::weak_ptr<curl::multi::Handle::Private> const&, std::chrono::duration<long, std::ratio<1l, 1000l> > const&)::{lambda(boost::system::error_code const&)#1}>(boost::asio::detail::deadline_timer_service<boost::asio::time_traits<boost::posix_time::ptime> >::implementation_type&, curl::multi::Handle::Private::Timeout::Private::async_wait_for(std::weak_ptr<curl::multi::Handle::Private> const&, std::chrono::duration<long, std::ratio<1l, 1000l> > const&)::{lambda(boost::system::error_code const&)#1}&) (deadline_timer_service.hpp:185)
==29504== by 0x4E87CA1: boost::asio::async_result<boost::asio::handler_type<curl::multi::Handle::Private::Timeout::Private::async_wait_for(std::weak_ptr<curl::multi::Handle::Private> const&, std::chrono::duration<long, std::ratio<1l, 1000l> > const&)::{lambda(boost::system::error_code const&)#1}, void (boost::system::error_code)>::type>::type boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traits<boost::posix_time::ptime> >::async_wait<curl::multi::Handle::Private::Timeout::Private::async_wait_for(std::weak_ptr<curl::multi::Handle::Private> const&, std::chrono::duration<long, std::ratio<1l, 1000l> > const&)::{lambda(boost::system::error_code const&)#1}>(boost::asio::detail::deadline_timer_service<boost::asio::time_traits<boost::posix_time::ptime> >::implementation_type&, boost::asio::handler_type&&) (deadline_timer_service.hpp:149)
==29504== by 0x4E87B93: boost::asio::async_result<boost::...

Changed in canonical-devices-system-image:
status: In Progress → Fix Released
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package net-cpp - 1.1.0+15.04.20150305-0ubuntu1

---------------
net-cpp (1.1.0+15.04.20150305-0ubuntu1) vivid; urgency=medium

  [ thomas-voss ]
  * Make sure that Multi::Private instances are correctly cleaned up by
    only handing out weak_ptr's to it. (LP: #1419620, #1423765)
 -- CI Train Bot <email address hidden> Thu, 05 Mar 2015 12:08:09 +0000
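
For reference, here is a minimal sketch of the weak_ptr approach the changelog describes (hypothetical names; the actual net-cpp change may differ in detail): the Holder handed to the callbacks stores only a weak_ptr, so it no longer co-owns the Private, and each callback has to lock() before touching it.

#include <iostream>
#include <memory>

struct Private
{
    ~Private() { std::cerr << "cleaning up" << std::endl; }
};

// Hypothetical: the Holder now keeps a non-owning reference.
template<typename T>
struct Holder
{
    std::weak_ptr<T> value;
};

// Stand-in for a C-style callback that receives the Holder as user data.
int socket_callback(void* data)
{
    auto holder = static_cast<Holder<Private>*>(data);
    if (auto sp = holder->value.lock()) // owner still alive?
    {
        // ... operate on *sp ...
        return 0;
    }
    return -1; // owner already destroyed; nothing to do
}

int main()
{
    auto d = std::make_shared<Private>();
    auto holder = new Holder<Private>{d}; // weak_ptr: use_count() stays 1
    socket_callback(holder);
    delete holder;                        // the Holder itself still needs freeing
}   // d holds the only reference, so "cleaning up" prints here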

Changed in net-cpp (Ubuntu):
status: New → Fix Released
Changed in net-cpp:
status: In Progress → Fix Released