Does anybody have thoughts on this? I also faced this issue.
I think we should have a solution for this, as Joern suggested.
On Fri, Aug 29, 2014 at 2:15 PM, Joern Heissler <libssh2_at_wulf.eu.org> wrote:
> I'm trying to download a large text file using the sftp protocol.
> The remote server runs on "Maverick SSHD". I'm using libssh2-1.4.3 (debian
> I enabled compression and negotiated zlib because it's a text file.
> Next, I compared the speed to what OpenSSH's `sftp' utility achieves, and
> libssh2 was just terribly slow.
> Then I increased buffer size for libssh2_sftp_read to a big value. It
> helps a little, but the chunks returned by libssh2_sftp_read are exactly
> 2000 bytes, regardless of my setting.
> tcpdump shows that the packets sent by the server are mostly around
> 200-300 bytes which obviously is too small.
> I found that when I change MAX_SFTP_READ_SIZE from 2000 to a larger
> value, the packet size increases, as does the download speed.
> To me it looks like the server has strange TCP_NODELAY / TCP_CORK
> settings. For each request of 2000 bytes, the data is gzipped and gets
> sent in one TCP packet (or multiple if too large).
> I found that a chunk size of 13500 bytes gives me a good ratio of
> uncompressed_bytes / tcp_packets.
> The optimal value for MAX_SFTP_READ_SIZE heavily depends on the specific
> use case, so I ask that it's made a configurable option, please :)
> libssh2-devel http://cool.haxx.se/cgi-bin/mailman/listinfo/libssh2-devel
Received on 2014-09-13