Hyper-V 2019 Slow Network Performance compared to 2012 R2

We run a combination of 2012 R2 and 2019 servers in separate clusters, and we are experiencing a frustrating issue with our 2019 Hyper-V servers: VM-to-VM network speeds on the 2019 hosts are around one third of the speeds achieved by identical VMs on the 2012 R2 servers.

We approached Microsoft, who said they could see a number of network errors on the external NICs, so I contacted the vendor (Dell). They suggested a couple of firmware updates for the NICs and the FC cards, which we applied. They also suggested downgrading the Intel NIC drivers; I had already tried the drivers from the Intel site to see if that improved matters.

When this didn’t fix the issue, Dell wanted to go down the route of replacing hardware. I resisted, since we see the same performance issues on all six hosts, and it is extremely unlikely that every host has the same hardware fault.

I then created a Private vSwitch on one of the 2019 hosts, which takes the physical NICs out of the data path entirely. I connected my test VMs to this switch and saw the same poor network speeds, which leads me to believe the issue is unrelated to the NIC hardware or drivers and lies more fundamentally in how 2019 handles vSwitch traffic compared to 2012 R2.
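The private-switch test is easy to reproduce. Roughly what I did (the switch and VM names below are placeholders, not our real ones):

```powershell
# Create a Private switch - it has no binding to any physical NIC,
# so traffic between VMs on it never touches NIC hardware or drivers
New-VMSwitch -Name "TestPrivate" -SwitchType Private

# Attach two test VMs and measure VM-to-VM throughput between them
Connect-VMNetworkAdapter -VMName "TestVM1" -SwitchName "TestPrivate"
Connect-VMNetworkAdapter -VMName "TestVM2" -SwitchName "TestPrivate"
```

If throughput is still poor here, the bottleneck has to be in the software switching path rather than the NICs.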

I have tried a number of "fixes" suggested online, such as disabling VMQ and RSC, but neither made any difference.
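For reference, this is roughly how I disabled them (the switch name is a placeholder; EnableSoftwareRsc is the per-switch RSC setting that is new in, and on by default on, Server 2019):

```powershell
# Disable VMQ on all physical adapters
Disable-NetAdapterVmq -Name "*"

# Disable software RSC on the external vSwitch
Set-VMSwitch -Name "ExternalSwitch" -EnableSoftwareRsc $false
```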

Has anyone else experienced this issue?

Read full post . . . .

Possible to get statistics for MinimumBandwidthWeight?

Question regarding diagnosing traffic on a converged network.

The setup is:

8 x 1 Gb physical NICs in an LBFO team - balanced mode

1 x Hyper-V switch:
DefaultFlowMinimumBandwidthWeight = 50

3 x virtual network adapters:
1 = ManagementOS - MinimumBandwidthWeight 10
2 = Cluster - MinimumBandwidthWeight 10
3 = LiveMigration - MinimumBandwidthWeight 30
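For context, a configuration like this is typically built along these lines (team and switch names here are placeholders, not our actual ones):

```powershell
# Switch in Weight mode on top of the LBFO team
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 50

# Host vNICs with their relative weights
foreach ($vnic in @(
    @{ Name = "ManagementOS";  Weight = 10 },
    @{ Name = "Cluster";       Weight = 10 },
    @{ Name = "LiveMigration"; Weight = 30 })) {
    Add-VMNetworkAdapter -ManagementOS -Name $vnic.Name -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name $vnic.Name -MinimumBandwidthWeight $vnic.Weight
}
```

Note the weights are minimum guarantees under contention, not caps, so in theory they should not throttle a backup when the link is otherwise idle.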

We are having issues with less than optimal transfer speeds during Veeam backups.

I am trying to diagnose how much of the allowed weighted amount each adapter is actually using, and whether any of the parameters above could be limiting the transfer speeds. Are there any statistics that can be pulled for MinimumBandwidthWeight or DefaultFlowMinimumBandwidthWeight?

Does anyone have any advice on this?
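For anyone looking at the same thing: I am not aware of a counter that reports weight consumption directly, but the configured weights can at least be read back and actual per-vNIC throughput sampled for comparison (counter instance names vary by host):

```powershell
# Read back the configured weights on the host vNICs
Get-VMNetworkAdapter -ManagementOS |
    Select-Object Name, @{ N = "Weight"; E = { $_.BandwidthSetting.MinimumBandwidthWeight } }

# Sample per-vNIC throughput to compare against the weights
Get-Counter -Counter "\Hyper-V Virtual Network Adapter(*)\Bytes/sec" `
    -SampleInterval 5 -MaxSamples 3
```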

Read full post . . . .

Dropped Packets Outgoing/Sec

We have a number of Hyper-V hosts running Windows Server 2019, each with 8 x 1 Gb NICs in an LBFO team.

I have VM network adapters for Management, Cluster and Live Migration.

I have noticed that on the VM network adapters we see around 200 dropped packets per second incoming.

On the vSwitch there are around 2000 dropped packets per second Outgoing.
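These figures come from the standard Hyper-V performance counters; roughly, they can be sampled like this:

```powershell
# Per-vNIC incoming drops and per-switch outgoing drops
Get-Counter -Counter @(
    "\Hyper-V Virtual Network Adapter(*)\Dropped Packets Incoming/sec",
    "\Hyper-V Virtual Switch(*)\Dropped Packets Outgoing/sec"
) -SampleInterval 5 -MaxSamples 3
```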

VMQ is disabled on all network adapters.

We are not seeing any disruption inside the VMs, but I am concerned as to why these drops are occurring.

Has anyone here seen this before?

Read full post . . . .
