Subject: Re: sftp_read returning 0 prematurely


From: Daniel Stenberg <>
Date: Sun, 4 Sep 2011 16:56:32 +0200 (CEST)

On Sun, 4 Sep 2011, Alexander Lamaison wrote:

Note that you managed to post to the old list, so I'm now replying to both the
old and the current one.

> I've been having a problem where sftp_read returns 0 (i.e. thinks it's reached
> the end of the file) even though it has only partly transferred it. This is
> particularly noticeable when the 'file' is something like /dev/random that
> delivers data very slowly. Earlier versions used to block in this case and
> wait for data. git bisect identifies this commit as the culprit:

Are you sure that really is the culprit? I mean, does it truly get back to
working if you revert that change?

I just don't see how that change is related to the problem you describe...

> Daniel, can you run me through the logic for this change? For instance,
> what happens when (buffer_size*4) is <= already? Are we supposed to modify
> count here too? And, I'm feeling a bit dense right now: why is it buffer*4?


First, buffer_size*4 is just picked more or less out of the air. The idea is
that when reading SFTP from a remote server, we send off multiple read
requests, guessing that the client will read more than only this 'buffer_size'
amount of memory. So we ask for at most buffer_size*4 amount of data so that
we can return it very fast in subsequent calls. (Feel free to experiment with
setting it to *2 or *6 or whatever.)

The particular change you're referring to was made because if you call
sftp_read() in a sequence like:

  1. sftp_read(VERY_BIG_BUFFER);
  2. sftp_read(VERY_SMALL_BUFFER);

... before this change, the 'count' variable could wrap around in the second
call (as 'already' was then larger than buffer_size*4) and since count is
unsigned, it would turn into an insanely high value and just be very very
wrong.

Received on 2011-09-04