On 02/24/2012 03:04 PM, Daniel Stenberg wrote:
> On Thu, 23 Feb 2012, Steven Dake wrote:
>> If I run my application for 8 hours, the leak adds up to 300 KB of
>> memory consumed. I had a look into the code for several hours and
>> don't see an immediate way to fix the problem. I'm not even sure what
>> the problem is, as the channel should free all packets on
> I'm curious if the leak remains if your app exits? I mean, could it be
> packets that are received and are just appended in the incoming package
> queue and then are never discarded from there?
If my app exits, libssh2 is no longer in use by my application and the
leaked memory is reclaimed by the operating system.
My app uses libssh2 and connects to an sshd daemon on remote virtual
machines. Every 10 seconds it sends various commands to check service
health, e.g.:
systemctl status httpd
systemctl start httpd
systemctl stop httpd
The process's purpose is to maintain high availability of the VMs, so in
a best-case scenario it would run for hundreds of days or more.
The problem is compounded by the fact that I intend for thousands of
these processes to run on one machine (so in the case of 1000 VMs,
that's 300 KB * 1000, roughly 300 MB, leaked per 8 hours).
See http://www.pacemaker-cloud.org to understand the use case.
> 300K during 8 hours seems like a fairly small amount of memory per
> packet (assuming you have traffic semi-often on the connection) which
> would imply that the leak doesn't actually occur that often.
Yes, the data per packet seems small, but it adds up over time.
> Let me point out that the ->payload allocated pointer is normally passed
> in to _libssh2_packet_add() where the packet is added to the incoming
> queue and it is subsequently freed when that packet has been handled.
Perhaps a packet is coming back that is never handled (and thus never
freed from the incoming queue).
Received on 2012-02-25