Re: libssh2_sftp_read reads a number of bytes smaller than both the file size and the specified buffer size
On 4/3/12, Daniel Stenberg <daniel_at_haxx.se> wrote:
> On Tue, 3 Apr 2012, Adam Craig wrote:
>> That still does not explain why the file downloads in 2K chunks, but that
>> might have something to do with the server. I will try a different
>> server-side program as soon as I can set up one.
> Why is that an issue at all? That's just a detail of the libssh2
> implementation and nothing that your application should worry about, right?
> see src/sftp.h:
> #define MAX_SFTP_READ_SIZE 2000
> 2K is not a magic number in any way, but I did a series of SFTP transfer
> experiments with high-latency, high-bandwidth transfers and I found this to
> be a decent size. You're free to experiment with it and tell us if you get
> different results. I wouldn't mind changing it if we find a better value, or
> somehow making it changeable if we deem that to be a good idea.
> As for documentation, now that you've learned how the functions work, please
> feel free to improve them and send us patches of the improvements.
Thank you. That resolves my original question. I suppose most users of
the API would not need to worry about chunk size. In my case, speed is
a factor, and I wanted to see how downloading in larger chunks would
affect performance. My understanding is that using larger packets, and
thus completing a transfer in fewer calls, can be faster up to a point,
but that it also increases the cost of any lost packet and the chance
that some of the data in a given packet becomes corrupted in transit. I
will try some other sizes and see.
> / daniel.haxx.se
> libssh2-devel http://cool.haxx.se/cgi-bin/mailman/listinfo/libssh2-devel
Received on 2012-04-04