Bandwidth issue with Store-1-S (Dell R210-II) under ESXi

#1

Hi all,

We recently experienced a support problem with our Scaleway dedicated server.

We have a Dedibox running ESXi 6.0 with two VMs.

The bandwidth is currently limited to about 4 MB/s.

I’ve just set my server up as an iperf3 server to test the bandwidth ;-).

Could you please send me your results :
iperf3 -c epis-strasbourg.eu -R -p 5201

They advertise the bandwidth as 1 Gbit/s, with 150 Mbit/s guaranteed as a minimum.

In reality, I have tested this from about 4 different providers and I always get results like this:

root@debianserver:~# iperf3 -c epis-strasbourg.eu -R -p 5201
Connecting to host epis-strasbourg.eu, port 5201
Reverse mode, remote host epis-strasbourg.eu is sending
[ 4] local 192.168.151.205 port 51800 connected to 212.83.135.25 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 8.12 MBytes 68.1 Mbits/sec
[ 4] 1.00-2.00 sec 3.19 MBytes 26.8 Mbits/sec
[ 4] 2.00-3.00 sec 4.94 MBytes 41.4 Mbits/sec
[ 4] 3.00-4.00 sec 4.46 MBytes 37.4 Mbits/sec
[ 4] 4.00-5.00 sec 5.31 MBytes 44.5 Mbits/sec
[ 4] 5.00-6.00 sec 5.47 MBytes 45.8 Mbits/sec
[ 4] 6.00-7.00 sec 5.31 MBytes 44.5 Mbits/sec
[ 4] 7.00-8.00 sec 5.92 MBytes 49.7 Mbits/sec
[ 4] 8.00-9.00 sec 4.95 MBytes 41.6 Mbits/sec
[ 4] 9.00-10.00 sec 5.12 MBytes 42.9 Mbits/sec


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 54.4 MBytes 45.7 Mbits/sec 288 sender
[ 4] 0.00-10.00 sec 52.9 MBytes 44.4 Mbits/sec receiver

iperf Done.

Thanks a lot for your feedback,

Philippe

#2

Here are the results from another connection :slight_smile:
root@ubuntutest:~# iperf3 -c epis-strasbourg.eu -R -p 5201
Connecting to host epis-strasbourg.eu, port 5201
Reverse mode, remote host epis-strasbourg.eu is sending
[ 4] local 192.168.151.205 port 51818 connected to 212.83.135.25 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 7.05 MBytes 59.2 Mbits/sec
[ 4] 1.00-2.00 sec 3.62 MBytes 30.4 Mbits/sec
[ 4] 2.00-3.00 sec 1.98 MBytes 16.6 Mbits/sec
[ 4] 3.00-4.00 sec 2.97 MBytes 24.9 Mbits/sec
[ 4] 4.00-5.00 sec 3.45 MBytes 29.0 Mbits/sec
[ 4] 5.00-6.00 sec 3.29 MBytes 27.6 Mbits/sec
[ 4] 6.00-7.00 sec 3.20 MBytes 26.8 Mbits/sec
[ 4] 7.00-8.00 sec 2.24 MBytes 18.8 Mbits/sec
[ 4] 8.00-9.00 sec 2.51 MBytes 21.1 Mbits/sec
[ 4] 9.00-10.00 sec 2.63 MBytes 22.1 Mbits/sec


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 33.6 MBytes 28.2 Mbits/sec 292 sender
[ 4] 0.00-10.00 sec 33.1 MBytes 27.7 Mbits/sec receiver

iperf Done.

As bad as ever.

#3

From Windows …

C:\Users\philippelogel\Desktop\iperf-3.1.3-win64>iperf3.exe -c epis-strasbourg.eu -R -p 5201
Connecting to host epis-strasbourg.eu, port 5201
Reverse mode, remote host epis-strasbourg.eu is sending
[ 4] local 192.168.151.90 port 51072 connected to 212.83.135.25 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 6.54 MBytes 54.7 Mbits/sec
[ 4] 1.00-2.00 sec 3.49 MBytes 29.3 Mbits/sec
[ 4] 2.00-3.00 sec 3.96 MBytes 33.2 Mbits/sec
[ 4] 3.00-4.00 sec 3.59 MBytes 30.1 Mbits/sec
[ 4] 4.00-5.00 sec 3.80 MBytes 32.0 Mbits/sec
[ 4] 5.00-6.00 sec 2.90 MBytes 24.3 Mbits/sec
[ 4] 6.00-7.00 sec 2.12 MBytes 17.8 Mbits/sec
[ 4] 7.00-8.00 sec 3.13 MBytes 26.2 Mbits/sec
[ 4] 8.00-9.00 sec 2.74 MBytes 22.9 Mbits/sec
[ 4] 9.00-10.00 sec 4.21 MBytes 35.3 Mbits/sec


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 37.6 MBytes 31.5 Mbits/sec 368 sender
[ 4] 0.00-10.00 sec 36.6 MBytes 30.7 Mbits/sec receiver

iperf Done.

#4

Hello @episstras,

Indeed, there seems to be an issue with the available bandwidth for your Dedibox. Could you please open a ticket so our technical assistance can look into what is causing it? Please include the iperf tests you made in the ticket.

Benedikt

#5

I did that a while ago, but I haven’t received any feedback that helps me solve this issue.

#6

This is my server (a Start-2-L dedicated server, with 300 Mbit/s guaranteed bandwidth if I remember correctly)
trying to download a test file:
wget -4 -O /dev/null http://mirror.nl.leaseweb.net/speedtest/10000mb.bin
--2020-05-13 10:21:38-- http://…10000mb.bin
Resolving … … 5.79.108.33
Connecting to … |5.79.108.33|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 10000000000 (9.3G) [application/octet-stream]
Saving to: ‘/dev/null’

/dev/null 0%[ ] 2.07M 446KB/s eta 7h 8m

Only 446 KB/s.
One more test:
Connecting to host ping.online.net, port 5202
[ 4] local 195.154.104.76 port 48962 connected to 62.210.18.40 port 5202
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 9.85 MBytes 82.6 Mbits/sec 0 331 KBytes
[ 4] 1.00-2.00 sec 9.27 MBytes 77.7 Mbits/sec 0 331 KBytes
[ 4] 2.00-3.00 sec 6.84 MBytes 57.4 Mbits/sec 2 331 KBytes
[ 4] 3.00-4.00 sec 7.58 MBytes 63.6 Mbits/sec 3 331 KBytes
[ 4] 4.00-5.00 sec 7.67 MBytes 64.3 Mbits/sec 1 331 KBytes
[ 4] 5.00-6.00 sec 9.02 MBytes 75.6 Mbits/sec 0 331 KBytes
[ 4] 6.00-7.00 sec 8.82 MBytes 74.0 Mbits/sec 2 331 KBytes
[ 4] 7.00-8.00 sec 7.52 MBytes 63.1 Mbits/sec 0 331 KBytes
[ 4] 8.00-9.00 sec 7.46 MBytes 62.6 Mbits/sec 1 331 KBytes
[ 4] 9.00-10.00 sec 9.03 MBytes 75.7 Mbits/sec 1 331 KBytes


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 83.1 MBytes 69.7 Mbits/sec 10 sender
[ 4] 0.00-10.00 sec 83.0 MBytes 69.6 Mbits/sec receiver

I have complained about the network 3 times, and each time they told me to reboot into rescue mode and test from the command line like you did above; they don’t have any other solution. haha. They also said my network bandwidth is 1 Gbit ??? Maybe I have to restart my server every day to keep the network speed I expect???

#7

I’ve just sent this message to the Scaleway support team:

I have ESXi 6.0 on a dedicated server.

I’ve just tested the connection from the rescue environment… but there isn’t any ESXi rescue ISO in the popup menu, so you proposed that I compare different OSes in order to conclude that my OS is misconfigured.

  1. I’ve noticed that the server rented to me is certified for ESXi 5.5 at most, not for version 6.0:
    https://www.dell.com/support/home/fr-fr/product-support/product/poweredge-r210-2/drivers

The VMware website says the same:
https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=server&productid=20967&vcl=true

I looked on the internet and found this post:

It’s only a forum post, but: the PowerEdge R210 is unsupported with ESXi 6.0 and above ???

But strangely, this offer is still for rent on the Scaleway website:
https://www.scaleway.com/fr/dedibox/store/store-1-s/

And the supported OSes listed are:

  •   ESXi 5.5.0
  •   ESXi 5.5.0
  •   ESXi 6.0.0
  •   ESXi 6.0.0
  •   ESXi 6.0.0 U1
  •   ESXi 6.0.0
  •   ESXi 6.5.0d

But only 5.5 is really supported.

My technicians and I always check the Dell and VMware websites to make sure the drivers will work correctly with the hardware.

  2. Most of my friends told me that I could rely on Scaleway.

I made the mistake of relying on their solutions and…

  3. There are two active NICs on this server…

https://postimg.cc/wtGzB0Kx

And they’re bound together… there’s an aggregation of the two cards, but only one is connected to the switch ??? It was preconfigured by the Scaleway ISO.

Maybe the problem comes from this… I usually set up LACP on the switch side when aggregating two ports (I mostly manage ProCurve, Brocade and Cisco switches). I don’t understand what they have done.

Finally, the ESXi 6.0 ISO image came preconfigured from Scaleway… but it is not working properly.

  4. They told me the problem came from my VMs, but I tested the bandwidth from the hypervisor itself, with the software they propose on their website.

I had to use iperf (version 2) rather than iperf3, because of ESXi.

My download speed is 8 times slower than my upload, exactly as my users noticed on the VMs.

So the problem comes from the hypervisor and the Scaleway ISO files.

So, as I wrote, what you propose to us is:

  • an architecture that is not certified for the OS,
  • tests that can’t really be compared to reality (my OS is ESXi and the rescue environment is something else),
  • and, finally, a bandwidth in the rescue environment that is lower than the bandwidth advertised on the website.

Could you please provide an ESXi 6.0 rescue ISO so we can compare the results?

Thank you,

Philippe

#8

Hi!

I’ve just finished reading all your messages with our support team.
I will try to respond to all your issues, one at a time.

Question 1 - ESXi 6 for Dell R210-II

Yes, you are right: your server range (Dell R210-II) is not certified for ESXi 6.0 and above.
You can find the compatibility matrix here.
ESXi 6 has been added to most server ranges at the community’s request (even though they are not certified).

Also, we test all operating systems on an offer before making them available to the public, and it seems this was not done for your offer. That is why ESXi 5.5 is not available for you. It could be enabled if you want.

Nevertheless, the list of OSes available to install seems wrong, maybe a display issue on our side (we will dig into this).

Question 2

I do not understand what you are trying to say. Can you rephrase it so I can help you? :slight_smile:

Question 3 - Two NICs on the server but only one available

Yes, your server’s offer does not provide two NICs, only one.
The bonding you see in the ESXi interface is created by the ESXi installer itself.
No LACP or any other bonding technology is set up on the switch side.
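
If you want to double-check this yourself, the uplinks and teaming policy can be inspected directly from the host; this is a generic sketch, assuming the default vSwitch0 created by the installer:

# List standard vSwitches with their uplinks and teaming settings
esxcli network vswitch standard list
# Show the failover/teaming policy of the default vSwitch
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0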

Also, there is no such thing as a “Scaleway ISO”: we use the official VMware ESXi ISO to install your server.
The configuration is done only through a kickstart file sent during the first boot.
For example, here is a part of the configuration taken from your last installation:

vmaccepteula
rootpw $CRYPTED
install --firstdisk --overwritevmfs
network --bootproto=static --device=SECRET --ip=SECRET --gateway=SECRET --nameserver=SECRET --netmask=255.255.255.0 --hostname=SECRET
keyboard French
reboot

As you can see, nothing is specified to create the bonding.
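
If you prefer to keep only the connected NIC as an uplink, it can be removed with a standard esxcli command; this is entirely optional and assumes the default vSwitch0:

# Remove the unused second uplink from the default vSwitch (reversible with "uplink add")
esxcli network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch0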

Question 4 - Network performance issue

To check the numbers you sent us, I took a random R210-II server running Ubuntu 18.04.
Here are the results against bouygues.iperf.fr:

root@62-210-180-247:~# iperf3 -c bouygues.iperf.fr -p 9200
Connecting to host bouygues.iperf.fr, port 9200
[  4] local 62.210.180.247 port 57920 connected to 89.84.1.222 port 9200
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  90.9 MBytes   762 Mbits/sec   65   33.9 KBytes
[  4]   1.00-2.00   sec  96.4 MBytes   808 Mbits/sec   53    178 KBytes
[  4]   2.00-3.00   sec  95.0 MBytes   797 Mbits/sec   60    107 KBytes
[  4]   3.00-4.00   sec  97.5 MBytes   818 Mbits/sec  126    110 KBytes
[  4]   4.00-5.00   sec   106 MBytes   887 Mbits/sec   55    141 KBytes
[  4]   5.00-6.00   sec  97.0 MBytes   814 Mbits/sec   60    115 KBytes
[  4]   6.00-7.00   sec   106 MBytes   886 Mbits/sec   34    184 KBytes
[  4]   7.00-8.00   sec   109 MBytes   911 Mbits/sec   37    150 KBytes
[  4]   8.00-9.00   sec   103 MBytes   865 Mbits/sec   56    165 KBytes
[  4]   9.00-10.00  sec   107 MBytes   899 Mbits/sec   19    356 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1007 MBytes   845 Mbits/sec  565             sender
[  4]   0.00-10.00  sec  1005 MBytes   843 Mbits/sec                  receiver

iperf Done.
root@62-210-180-247:~# iperf3 -c bouygues.iperf.fr -p 9201 -R
Connecting to host bouygues.iperf.fr, port 9201
Reverse mode, remote host bouygues.iperf.fr is sending
[  4] local 62.210.180.247 port 51920 connected to 89.84.1.222 port 9201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   105 MBytes   881 Mbits/sec
[  4]   1.00-2.00   sec   110 MBytes   923 Mbits/sec
[  4]   2.00-3.00   sec   110 MBytes   927 Mbits/sec
[  4]   3.00-4.00   sec   109 MBytes   916 Mbits/sec
[  4]   4.00-5.00   sec   110 MBytes   920 Mbits/sec
[  4]   5.00-6.00   sec   109 MBytes   919 Mbits/sec
[  4]   6.00-7.00   sec   109 MBytes   916 Mbits/sec
[  4]   7.00-8.00   sec   108 MBytes   904 Mbits/sec
[  4]   8.00-9.00   sec   109 MBytes   918 Mbits/sec
[  4]   9.00-10.00  sec   109 MBytes   918 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.07 GBytes   915 Mbits/sec  270             sender
[  4]   0.00-10.00  sec  1.06 GBytes   914 Mbits/sec                  receiver

iperf Done.

It reports no bandwidth issue.
I do understand that your server runs ESXi and you only have iperf2, but you should try booting into rescue mode to perform the same test as I did with iperf3; iperf2 can report unreliable results.
Also, if you send us iperf results, please use our iperf server iperf.online.net or the Bouygues one, bouygues.iperf.fr, so we can compare the same things :slight_smile:
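
If you ever want to re-test from the hypervisor itself, a commonly reported workaround is to run the bundled iperf binary from a copy and to open the ESXi firewall for the duration of the test. Treat this as a sketch only: the binary name and location vary between ESXi builds, and iperf2 and iperf3 do not interoperate, so the client on the other side must match the version found on the host.

# On the ESXi host (adjust the path/name to whatever ships with your build)
cp /usr/lib/vmware/vsan/bin/iperf /usr/lib/vmware/vsan/bin/iperf.copy
esxcli network firewall set --enabled false   # re-enable right after the test
/usr/lib/vmware/vsan/bin/iperf.copy -s

# From another machine, with a matching iperf version
iperf -c 62.210.136.52 -t 10

# Back on the ESXi host, once done
esxcli network firewall set --enabled true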

From what I see, you might have an issue either with your server (something like a defective network card) or with the configuration of your virtual machines.
Do not hesitate to add more details so we can find the real issue :slight_smile:

Question 5 - ESXi rescue

I do not think there is such a thing as an ESXi rescue image.
The image VMware provides is not a live image but an installer.
Did you find anything close to an ESXi rescue image that could help with debugging? (We could add it.)

Also

Your message was posted under the Scaleway topic but is in fact only related to Dedibox.
Please use that topic next time you want to share something about these bare-metal servers :slight_smile:

I hope I helped you (at least a little),

henyxia

#9

Thanks for your reply,

When I put the server in rescue mode and boot Ubuntu 18.04, I get these results:
sd-53315@62-210-136-52:~$ iperf3 -c ping.online.net -R -p 5209
Connecting to host ping.online.net, port 5209
Reverse mode, remote host ping.online.net is sending
[ 4] local 62.210.136.52 port 60478 connected to 62.210.18.40 port 5209
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 61.8 MBytes 518 Mbits/sec
[ 4] 1.00-2.00 sec 63.3 MBytes 531 Mbits/sec
[ 4] 2.00-3.00 sec 62.2 MBytes 522 Mbits/sec
[ 4] 3.00-4.00 sec 51.9 MBytes 435 Mbits/sec
[ 4] 4.00-5.00 sec 65.7 MBytes 552 Mbits/sec
[ 4] 5.00-6.00 sec 63.4 MBytes 532 Mbits/sec
[ 4] 6.00-7.00 sec 61.7 MBytes 518 Mbits/sec
[ 4] 7.00-8.00 sec 69.3 MBytes 581 Mbits/sec
[ 4] 8.00-9.00 sec 67.0 MBytes 562 Mbits/sec
[ 4] 9.00-10.00 sec 64.6 MBytes 542 Mbits/sec


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 641 MBytes 537 Mbits/sec 185 sender
[ 4] 0.00-10.00 sec 633 MBytes 531 Mbits/sec receiver

iperf Done.

sd-53315@62-210-136-52:~$ iperf3 -c ping.online.net
Connecting to host ping.online.net, port 5201
[ 4] local 62.210.136.52 port 51096 connected to 62.210.18.40 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 51.1 MBytes 429 Mbits/sec 700 38.2 KBytes
[ 4] 1.00-2.00 sec 54.2 MBytes 455 Mbits/sec 557 18.4 KBytes
[ 4] 2.00-3.00 sec 55.7 MBytes 467 Mbits/sec 586 43.8 KBytes
[ 4] 3.00-4.00 sec 54.4 MBytes 456 Mbits/sec 720 178 KBytes
[ 4] 4.00-5.00 sec 54.4 MBytes 456 Mbits/sec 682 43.8 KBytes
[ 4] 5.00-6.00 sec 59.4 MBytes 498 Mbits/sec 848 82.0 KBytes
[ 4] 6.00-7.00 sec 58.9 MBytes 494 Mbits/sec 620 77.8 KBytes
[ 4] 7.00-8.00 sec 50.6 MBytes 424 Mbits/sec 630 28.3 KBytes
[ 4] 8.00-9.00 sec 51.2 MBytes 430 Mbits/sec 451 31.1 KBytes
[ 4] 9.00-10.00 sec 53.5 MBytes 449 Mbits/sec 810 58.0 KBytes


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 543 MBytes 456 Mbits/sec 6604 sender
[ 4] 0.00-10.00 sec 542 MBytes 455 Mbits/sec receiver

iperf Done.

#10

So the results from the rescue environment are good; the network is working fine.

  1. When I run the ESXi host as an iperf -s server, I get these results from home:

root@ubuntutest:~# iperf -c 62.210.136.52

Client connecting to 62.210.136.52, TCP port 5001
TCP window size: 85.0 KByte (default)

[ 3] local 192.168.151.205 port 59536 connected with 62.210.136.52 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 142 MBytes 119 Mbits/sec

  2. And from inside ESXi, when I contact Bouygues, I get this:

[root@sd-53315:~] /usr/lib/vmware/vsan/bin/iperfcopy -c bouygues.iperf.fr

Client connecting to bouygues.iperf.fr, TCP port 5001
TCP window size: 32.5 KByte (default)

[ 3] local 62.210.136.52 port 46207 connected with 89.84.1.222 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 0.00 (+?
4s 52912987413596408 Bytes/sec

Sure, petabytes per second… obviously a bogus reading.

So the card seems to be OK, I think.

But when I try to download a file from the ESXi datastore (not from a VM) over SFTP, I get a poor 1.5 MB/s, and when I upload a file to the server I get 7 MB/s ???

Do you have any idea?

#11

Another test with scp, on the ESXi host only, not the VMs.

(base) iMac-Papa:Desktop philippelogel$ scp SURFS_UP.mp4 root@62.210.136.52:/vmfs/volumes/5930eb93-ce6a38a6-670b-d4ae52cc127c/ISOs/
Password:
SURFS_UP.mp4 100% 638MB 7.7MB/s 01:23

(base) iMac-Papa:Desktop philippelogel$ scp root@62.210.136.52:/vmfs/volumes/5930eb93-ce6a38a6-670b-d4ae52cc127c/ISOs/SURFS_UP.mp4 toto/
Password:
SURFS_UP.mp4 8% 55MB 1.4MB/s 06:49 ETA

You can see that the problem doesn’t come from the VMs…

So what can I do… I think it’s a problem with the network card or a defective driver.

It’s the first time I’m this lost… part of my job is setting up Hyper-V and ESXi servers.

Thanks a lot,

Philippe

#12

Some information about the network card and drivers:

[root@sd-53315:~] esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description


vmnic0 0000:01:00.0 bnx2 Up Up 1000 Full d4:ae:52:cc:12:7c 1500 QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T
vmnic1 0000:01:00.1 bnx2 Up Down 0 Half d4:ae:52:cc:12:7d 1500 QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T

[root@sd-53315:~] esxcli network nic get -n vmnic0
Advertised Auto Negotiation: true
Advertised Link Modes: 10baseT/Half, 10baseT/Full, 100baseT/Half, 100baseT/Full, 1000baseT/Full
Auto Negotiation: true
Cable Type: Twisted Pair
Current Message Level: 0
Driver Info:
Bus Info: 0000:01:00.0
Driver: bnx2
Firmware Version: 7.4.8 bc 7.4.0 NCSI 2.0.11
Version: 2.2.4f.v60.10
Link Detected: true
Link Status: Up
Name: vmnic0
PHYAddress: 1
Pause Autonegotiate: true
Pause RX: false
Pause TX: false
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: true
Transceiver: internal
Virtual Address: 00:50:56:dc:12:7c
Wakeon: MagicPacket™

[root@sd-53315:~] esxcli software vib list | grep gb
net-igb 5.0.5.1.1-5vmw.600.0.0.2494585 VMware VMwareCertified 2017-06-02
net-ixgbe 3.7.13.7.14iov-20vmw.600.3.57.5050593 VMware VMwareCertified 2017-06-05
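
If it helps, I can also pull the interface error counters and the exact bnx2 VIB version (the grep above only matches the Intel drivers); these are standard esxcli calls, which I assume behave the same on 6.0:

esxcli network nic stats get -n vmnic0   # RX/TX error and drop counters for the active NIC
esxcli software vib list | grep bnx2     # version of the driver actually in use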

#13

Hi!

Your speed results against iperf.online.net seem alright, which is good news :slight_smile:
Your iperf results against your home connection seem good too (but again, do not give too much credit to iperf2).

The throughput you are getting from these file transfers is indeed an issue.
To rule out all possibilities, can you run the following command to make sure you are not experiencing a disk issue?

[root@sd-56032:/vmfs/volumes/5ebc105c-46fd31e7-f5c9-d4ae52cb84d0] time wget http://iperf.online.net/1000Mo.dat
Connecting to iperf.online.net (62.210.18.40:80)
1000Mo.dat           100% |**********************************************************************|   953M  0:00:00 ETA
real	0m 13.12s
user	0m 2.67s
sys	0m 0.00s

On my test server, I managed to download 1 GB in 13.12 s, which gives about 76 MB/s.
The throughput seems to be throttled by the hard disk, which is acceptable here.

Also, the issue might come from your home connection directly.
When testing from another server located in another datacenter (also an R210-II), I managed to reach pretty decent speeds:

root@ea71cbeb6893:~# scp root@62.210.180.247:/vmfs/volumes/5ebc105c-46fd31e7-f5c9-d4ae52cb84d0/1000Mo.dat .
1000Mo.dat                                                                           100%  954MB 110.7MB/s   00:08
root@ea71cbeb6893:~# scp ./1000Mo.dat root@62.210.180.247:/vmfs/volumes/5ebc105c-46fd31e7-f5c9-d4ae52cb84d0/1000Mo
1000Mo.dat                                                                           100%  954MB 111.0MB/s   00:08

Can you try downloading your test file from somewhere other than your home computer? :slight_smile:

#14

Thanks a lot for your feedback.

[root@sd-53315:/vmfs/volumes/5930eb93-ce6a38a6-670b-d4ae52cc127c/ISOs] time wget http://iperf.online.net/1000Mo.dat
Connecting to iperf.online.net (62.210.18.40:80)
1000Mo.dat 100% |*****************************************************************************************************************| 953M 0:00:00 ETA
real 0m 19.69s
user 0m 4.00s
sys 0m 0.00s
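
That is roughly 953 MB in 19.69 s, i.e. about 48 MB/s to the datastore, so the disk itself does not look like the 1.5 MB/s bottleneck.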

From one of my VMs:
root@www:~/toto# time wget http://iperf.online.net/1000Mo.dat
--2020-05-14 14:17:21-- http://iperf.online.net/1000Mo.dat
Resolving iperf.online.net (iperf.online.net)… 62.210.18.40
Connecting to iperf.online.net (iperf.online.net)|62.210.18.40|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 1000000000 (954M) [application/octet-stream]
Saving to: ‘1000Mo.dat.1’

1000Mo.dat.1 100%[======================================================================================================>] 953,67M 65,9MB/s in 14s

2020-05-14 14:17:35 (68,4 MB/s) - ‘1000Mo.dat.1’ saved [1000000000/1000000000]

real 0m13.953s
user 0m0.384s
sys 0m3.216s

#15

I only have one server at Scaleway, so I tested from one VM to the other; here are the results:

root@www:~/toto# scp 1000Mo.dat root@212.83.134.168:/root
root@212.83.134.168’s password:
1000Mo.dat 100% 954MB 8.1MB/s 01:58
root@www:~/toto# cd ..
root@www:~# cd toto/
root@www:~/toto# ls
1000Mo.dat

root@www:~/toto# mv 1000Mo.dat toto1000.dat
root@www:~/toto# cd ..
root@www:~# scp root@212.83.134.168:/root/1000Mo.dat toto/
root@212.83.134.168’s password:
1000Mo.dat 100% 954MB 9.7MB/s 01:38

#16

Great!
So first, your server’s network card seems to work properly, and so do your disk drives.
The results you got from SCPing between your servers show slowness in scp itself, not in the network.
This seems to be a layer-7 issue with scp that many users encounter (judging by the number of posts on Stack Overflow).
Maybe rsync could help you get a better transfer speed.
NB: a static rsync has been compiled by many users for ESXi too: link
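
For example, something along these lines from your Mac, pulling the file with rsync over SSH. This is only a sketch: it assumes the static rsync binary is installed on the ESXi host, so adjust --rsync-path to wherever you actually put it.

# Pull the test file with rsync instead of scp
rsync -av --progress -e ssh --rsync-path=/bin/rsync \
    root@62.210.136.52:/vmfs/volumes/5930eb93-ce6a38a6-670b-d4ae52cc127c/ISOs/SURFS_UP.mp4 .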

#17

OK, I’ve made some changes on the VMs…

It’s pretty good now, but not perfect…

From my first VM, in reverse mode:
root@mail:~# iperf3 -c ping.online.net -R -p 5209
Connecting to host ping.online.net, port 5209
Reverse mode, remote host ping.online.net is sending
[ 4] local 212.83.134.168 port 47680 connected to 62.210.18.40 port 5209
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 69.7 MBytes 585 Mbits/sec
[ 4] 1.00-2.00 sec 70.2 MBytes 589 Mbits/sec
[ 4] 2.00-3.00 sec 64.3 MBytes 539 Mbits/sec
[ 4] 3.00-4.00 sec 67.8 MBytes 569 Mbits/sec
[ 4] 4.00-5.00 sec 68.5 MBytes 575 Mbits/sec
[ 4] 5.00-6.00 sec 67.4 MBytes 566 Mbits/sec
[ 4] 6.00-7.00 sec 68.1 MBytes 571 Mbits/sec
[ 4] 7.00-8.00 sec 66.9 MBytes 562 Mbits/sec
[ 4] 8.00-9.00 sec 66.6 MBytes 558 Mbits/sec
[ 4] 9.00-10.00 sec 60.3 MBytes 506 Mbits/sec


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 682 MBytes 572 Mbits/sec 744 sender
[ 4] 0.00-10.00 sec 672 MBytes 563 Mbits/sec receiver

iperf Done.

In standard mode:

root@mail:~# iperf3 -c ping.online.net -p 5209
Connecting to host ping.online.net, port 5209
[ 4] local 212.83.134.168 port 47692 connected to 62.210.18.40 port 5209
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 11.7 MBytes 97.9 Mbits/sec 126 50.9 KBytes
[ 4] 1.00-2.00 sec 8.95 MBytes 75.1 Mbits/sec 131 2.83 KBytes
[ 4] 2.00-3.00 sec 9.82 MBytes 82.4 Mbits/sec 84 49.5 KBytes
[ 4] 3.00-4.00 sec 9.07 MBytes 76.1 Mbits/sec 146 53.7 KBytes
[ 4] 4.00-5.00 sec 9.07 MBytes 76.1 Mbits/sec 203 53.7 KBytes
[ 4] 5.00-6.00 sec 8.89 MBytes 74.5 Mbits/sec 117 69.3 KBytes
[ 4] 6.00-7.00 sec 10.5 MBytes 88.1 Mbits/sec 180 48.1 KBytes
[ 4] 7.00-8.00 sec 8.76 MBytes 73.5 Mbits/sec 167 53.7 KBytes
[ 4] 8.00-9.00 sec 11.6 MBytes 97.0 Mbits/sec 121 38.2 KBytes
[ 4] 9.00-10.00 sec 9.01 MBytes 75.6 Mbits/sec 126 42.4 KBytes


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 97.3 MBytes 81.6 Mbits/sec 1401 sender
[ 4] 0.00-10.00 sec 96.8 MBytes 81.2 Mbits/sec receiver

iperf Done.

#18

The second one:

root@www:~# iperf3 -c ping.online.net -R -p 5209
Connecting to host ping.online.net, port 5209
Reverse mode, remote host ping.online.net is sending
[ 4] local 212.83.135.25 port 53740 connected to 62.210.18.40 port 5209
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 70.4 MBytes 591 Mbits/sec
[ 4] 1.00-2.00 sec 65.9 MBytes 553 Mbits/sec
[ 4] 2.00-3.00 sec 68.9 MBytes 578 Mbits/sec
[ 4] 3.00-4.00 sec 68.9 MBytes 578 Mbits/sec
[ 4] 4.00-5.00 sec 69.3 MBytes 581 Mbits/sec
[ 4] 5.00-6.00 sec 65.7 MBytes 551 Mbits/sec
[ 4] 6.00-7.00 sec 61.4 MBytes 515 Mbits/sec
[ 4] 7.00-8.00 sec 66.3 MBytes 557 Mbits/sec
[ 4] 8.00-9.00 sec 67.0 MBytes 562 Mbits/sec
[ 4] 9.00-10.00 sec 68.7 MBytes 576 Mbits/sec


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 684 MBytes 574 Mbits/sec 490 sender
[ 4] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver

iperf Done.
root@www:~# iperf3 -c ping.online.net -p 5209
Connecting to host ping.online.net, port 5209
[ 4] local 212.83.135.25 port 53746 connected to 62.210.18.40 port 5209
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 9.88 MBytes 82.9 Mbits/sec 173 50.9 KBytes
[ 4] 1.00-2.00 sec 11.2 MBytes 93.8 Mbits/sec 181 33.9 KBytes
[ 4] 2.00-3.00 sec 6.15 MBytes 51.6 Mbits/sec 97 79.2 KBytes
[ 4] 3.00-4.00 sec 11.6 MBytes 97.5 Mbits/sec 199 55.1 KBytes
[ 4] 4.00-5.00 sec 11.2 MBytes 93.8 Mbits/sec 166 72.1 KBytes
[ 4] 5.00-6.00 sec 9.20 MBytes 77.1 Mbits/sec 136 62.2 KBytes
[ 4] 6.00-7.00 sec 11.4 MBytes 95.9 Mbits/sec 127 55.1 KBytes
[ 4] 7.00-8.00 sec 10.2 MBytes 85.5 Mbits/sec 131 55.1 KBytes
[ 4] 8.00-9.00 sec 8.70 MBytes 73.0 Mbits/sec 157 56.6 KBytes
[ 4] 9.00-10.00 sec 10.7 MBytes 89.7 Mbits/sec 199 43.8 KBytes


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 100 MBytes 84.1 Mbits/sec 1566 sender
[ 4] 0.00-10.00 sec 99.8 MBytes 83.7 Mbits/sec receiver

iperf Done.

#19

Why is the bandwidth not symmetric between reverse and normal mode?

Do you have any advice on this point?
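
If it is useful, I can re-run the tests with several parallel streams to see whether a single TCP flow is the limiting factor, for example:

# Same test with 4 parallel TCP streams, upload then download
iperf3 -c ping.online.net -p 5209 -P 4
iperf3 -c ping.online.net -p 5209 -P 4 -R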

#20

Today, the same tests:

root@mail:~# iperf3 -c bouygues.testdebit.info -R -p 5209
Connecting to host bouygues.testdebit.info, port 5209
Reverse mode, remote host bouygues.testdebit.info is sending
[ 4] local 212.83.134.168 port 39008 connected to 89.84.1.222 port 5209
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 696 KBytes 5.70 Mbits/sec
[ 4] 1.00-2.15 sec 297 KBytes 2.12 Mbits/sec
[ 4] 2.15-3.00 sec 43.8 KBytes 421 Kbits/sec
[ 4] 3.00-4.25 sec 86.3 KBytes 566 Kbits/sec
[ 4] 4.25-5.16 sec 58.0 KBytes 520 Kbits/sec
[ 4] 5.16-6.19 sec 59.4 KBytes 475 Kbits/sec
[ 4] 6.19-7.00 sec 60.8 KBytes 612 Kbits/sec
[ 4] 7.00-8.00 sec 89.1 KBytes 730 Kbits/sec
[ 4] 8.00-9.00 sec 39.6 KBytes 324 Kbits/sec
[ 4] 9.00-10.00 sec 80.6 KBytes 659 Kbits/sec


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.71 MBytes 1.43 Mbits/sec 96 sender
[ 4] 0.00-10.00 sec 1.60 MBytes 1.34 Mbits/sec receiver

iperf Done.
root@mail:~# iperf3 -c bouygues.testdebit.info -p 5209
Connecting to host bouygues.testdebit.info, port 5209
[ 4] local 212.83.134.168 port 39032 connected to 89.84.1.222 port 5209
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 105 KBytes 856 Kbits/sec 8 9.90 KBytes
[ 4] 1.00-2.00 sec 747 KBytes 6.12 Mbits/sec 0 39.6 KBytes
[ 4] 2.00-3.00 sec 1.30 MBytes 10.9 Mbits/sec 15 35.4 KBytes
[ 4] 3.00-4.00 sec 1.30 MBytes 10.9 Mbits/sec 55 22.6 KBytes
[ 4] 4.00-5.00 sec 764 KBytes 6.26 Mbits/sec 1 32.5 KBytes
[ 4] 5.00-6.00 sec 764 KBytes 6.26 Mbits/sec 2 39.6 KBytes
[ 4] 6.00-7.00 sec 764 KBytes 6.25 Mbits/sec 4 32.5 KBytes
[ 4] 7.00-8.00 sec 764 KBytes 6.26 Mbits/sec 6 29.7 KBytes
[ 4] 8.00-9.00 sec 954 KBytes 7.82 Mbits/sec 9 33.9 KBytes
[ 4] 9.00-10.00 sec 1.12 MBytes 9.38 Mbits/sec 34 18.4 KBytes


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 8.47 MBytes 7.11 Mbits/sec 134 sender
[ 4] 0.00-10.00 sec 8.09 MBytes 6.78 Mbits/sec receiver

iperf Done.

It’s catastrophic.