Description
Distribution
Linux Mint 22.1
Package version
1.8.8
Frequency
Always
Bug description
I get around 183 MByte/s transfer speed, which is about 1.46 Gbit/s. However, on a 10 Gbit/s network I would expect to reach higher speeds :).
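For reference, the unit conversion behind the figures above (my own arithmetic, not Warpinator code), using decimal prefixes as network equipment does:

```python
def mbyte_s_to_gbit_s(mbyte_per_s: float) -> float:
    """1 MByte/s = 8 Mbit/s; 1000 Mbit/s = 1 Gbit/s."""
    return mbyte_per_s * 8 / 1000

print(mbyte_s_to_gbit_s(183))  # Warpinator: ~1.46 Gbit/s
print(mbyte_s_to_gbit_s(415))  # FileZilla:  ~3.32 Gbit/s
```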
Steps to reproduce
- Use a 10Gbit/s switch (or direct link between two computers, if you do not have such a switch)
- Use 10GbE network cards on both sides
- Use latest Warpinator (v1.8.8)
- Disable compression
- Transfer a large file (e.g. a 4 GByte file or bigger)
- Notice the network speed limits of Warpinator
Expected behavior
Reaching close to the iperf3 results, or at least matching the FileZilla results. FileZilla gives me 415 MByte/s = 3.32 Gbit/s.
Internal iperf3 test between the two computers:
iperf3 -c 192.168.1.204 -p 5201
Connecting to host 192.168.1.204, port 5201
[ 5] local 192.168.1.217 port 41108 connected to 192.168.1.204 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.08 GBytes 9.24 Gbits/sec 0 1.59 MBytes
[ 5] 1.00-2.00 sec 1.07 GBytes 9.21 Gbits/sec 0 1.66 MBytes
[ 5] 2.00-3.00 sec 1.07 GBytes 9.23 Gbits/sec 0 1.82 MBytes
[ 5] 3.00-4.00 sec 1.07 GBytes 9.22 Gbits/sec 0 1.82 MBytes
[ 5] 4.00-5.00 sec 1.07 GBytes 9.21 Gbits/sec 0 1.82 MBytes
[ 5] 5.00-6.00 sec 1.07 GBytes 9.23 Gbits/sec 0 1.82 MBytes
[ 5] 6.00-7.00 sec 1.07 GBytes 9.21 Gbits/sec 0 1.82 MBytes
[ 5] 7.00-8.00 sec 1.07 GBytes 9.20 Gbits/sec 0 1.82 MBytes
[ 5] 8.00-9.00 sec 1.07 GBytes 9.21 Gbits/sec 0 1.82 MBytes
[ 5] 9.00-10.00 sec 1.07 GBytes 9.22 Gbits/sec 0 1.82 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.7 GBytes 9.23 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 10.7 GBytes 9.22 Gbits/sec receiver
iperf Done.
Iperf3 confirms the 10 Gbit/s connection, and FileZilla allows me to transfer around 3.32 Gbit/s. SFTP might not be the best way of transferring files after all, since it has SSH overhead.
Additional information
Request for help: can somebody else also run some tests on their network? Ideally on 10GbE networks, to see where Warpinator's bottlenecks/limits are.
Some ideas:
- Python bottleneck?
- CPU bottleneck?
- Disk bottleneck?
- TCP/MTU limits (I doubt it)?
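To help separate the "Python bottleneck?" idea from network limits, here is a minimal sketch (my own test harness, not Warpinator code) that measures the raw throughput a plain Python socket send/recv loop can sustain over loopback. If this already caps out near Warpinator's observed speed, the interpreter overhead is a likely suspect; the chunk size and 512 MiB total are assumed values you may want to vary:

```python
import socket
import threading
import time

CHUNK = 1024 * 1024        # 1 MiB per send (assumed chunk size, tune as needed)
TOTAL = 512 * 1024 ** 2    # move 512 MiB in total

def server(listener: socket.socket) -> None:
    # Accept one connection and read until TOTAL bytes have arrived.
    conn, _ = listener.accept()
    received = 0
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # loopback: measures the Python/CPU ceiling, not the NIC
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
payload = b"\0" * CHUNK
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    client.sendall(payload)
    sent += CHUNK
client.close()
elapsed = time.perf_counter() - start
print(f"{sent / elapsed / 1e9 * 8:.2f} Gbit/s over loopback")
```

Running the same script between two real hosts (replacing the loopback address) would then show how much the actual network path subtracts from that ceiling.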
With compression it gets really slow (so I would definitely not recommend that option):