Re: libssh2_sftp_read reads a number of bytes smaller than both the file size and the specified buffer size
The attached spreadsheet summarizes a series of download tests made
using my Cocoa App that uses libssh2_sftp_read. I modified the program
to minimize the number of concurrent processes running during the
download and the number of application-specific lines of code running
during the download loop. My app ordinarily generates, uploads, and
downloads long sequences of files. For these tests, I had it download
the same set of files each time. I used two servers for the tests: one
remote server in Santa Clara and one server on my local network here
in Los Angeles. The remote server became unavailable at the beginning
of this week, so I was not able to run as many tests using it. To
change the maximum receivable packet size, I set #define
MAX_SFTP_READ_SIZE 325000 in sftp.h and passed each chunk size from
the table, in bytes, as the buffer size argument to libssh2_sftp_read.
I found that, with the remote server, while larger packet sizes
yielded some improvement in speed, it was only barely noticeable
compared to the fluctuations in speed due to outside factors. On the
local network, the improvement was pronounced from 2K to 20K but
leveled off at larger sizes. I also noticed that, when I tried
downloading in 70K chunks, the return value was 65536, indicating some
other limit on chunk size exists elsewhere in the code or on the
server side. Even so, tracking it down does not seem worthwhile given
that it would not lead to any performance improvements.
I noticed the following comment on the line defining the limit on
upload chunk size:
/* MAX_SFTP_OUTGOING_SIZE MUST not be larger than 32500 or so. This is the
 * amount of data sent in each FXP_WRITE packet
 */
#define MAX_SFTP_OUTGOING_SIZE 32500
My guess is the "MUST" is due to the following, found in libssh2_priv.h:
/* RFC4253 section 6.1 Maximum Packet Length says:
 *
 * "All implementations MUST be able to process packets with
 * uncompressed payload length of 32768 bytes or less and
 * total packet size of 35000 bytes or less (including length,
 * padding length, payload, padding, and MAC.)."
 */
#define MAX_SSH_PACKET_LEN 35000
That is, the spec only guarantees that the receiving end can process
packets of 35K or less, so sending packets larger than that risks the
receiver being unable to process them.
I also noticed the following in sftp.c:
/* This is the maximum packet length to accept, as larger than this indicate
some kind of server problem. */
#define LIBSSH2_SFTP_PACKET_MAXLEN 80000
I do not see why 80K is the cutoff here.
If the spec says the maximum packet size is 35K, it would make more
sense to set all the hard-coded packet-size limits to 35K and let
users limit packet size further through the buffer sizes they pass to
the read and write calls.
This largely resolves my question. My current plan for my app is to
set both upload and download packet sizes at around 32K. If the
program still does not download files fast enough for my purposes, I
will write a custom server-side program that caches frequently
requested files in memory, so that it can serve them without loading
them from the file system.
On 4/4/12, Peter Stuge <peter_at_stuge.se> wrote:
> Adam Craig wrote:
>> My understanding is that using larger packets and thus uploading or
>> downloading in fewer calls can be faster up to a point
> SFTP is fairly low-level. Consider that across the internet you
> rarely have larger MTUs than 1500 bytes.
>> I will try some other sizes and see.
> Looking forward to your results! Will you try both short and long
> [...]

libssh2-devel http://cool.haxx.se/cgi-bin/mailman/listinfo/libssh2-devel
Received on 2012-04-20