MarkosP

Degraded performance with Core scheduler

Hi.

Since the Core scheduler is now the default on WS2019/Hyper-V Server 2019, we're evaluating the impact of this change.

I (think I) understand the implications of the Core scheduler, but in our testing it severely degrades performance even in scenarios where (in my opinion) it shouldn't.

Testing scenario: Dell PE R740xd with 2× Xeon Gold 6244 (8 physical cores each, HT enabled), Hyper-V Server 2019 installed with updates up to 2019-11 (I couldn't install the 2019-12 CU on ANY Hyper-V Server 2019 host). A single test VM running on the host:

 - WS2019 Standard, 4 vCPUs, 2-8 GB vRAM (dynamic), VM config version 9, HwThreadCountPerCore=0 (i.e. SMT enabled, the default on 2019)
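For reference, this per-VM SMT setting can be inspected and changed with the Hyper-V PowerShell module (a sketch; the VM name is a placeholder, and the VM must be powered off to change the setting):

```powershell
# Show the current SMT setting of the test VM
Get-VMProcessor -VMName 'TestVM' |
    Select-Object VMName, Count, HwThreadCountPerCore

# 0 = inherit the host's SMT topology (the 2019 default),
# 1 = expose the vCPUs as single-threaded cores
Set-VMProcessor -VMName 'TestVM' -HwThreadCountPerCore 0
```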

We're seeing degraded CPU performance in the VM under the Core scheduler. I thought the Core scheduler would limit VM density (and therefore performance) only in situations where many VMs (vCPUs) are running/assigned on the host.

We tested synthetic performance with PassMark PerformanceTest 8 and Geekbench 5.

Here’s a comparison (the only change is the scheduler used, nothing else is modified):

PassMark:

Geekbench:

Is anyone else seeing similar behavior? Is this expected?

If this is expected even in a scenario like this (a single VM or only a few VMs), then we'll be forced to switch back to the classic scheduler for the time being.
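For anyone who wants to reproduce the comparison: the scheduler type is selected with bcdedit on the host and takes effect after a reboot (the Get-WinEvent line just confirms which scheduler was actually chosen at boot):

```powershell
# Run from an elevated prompt on the host; a reboot is required afterwards
bcdedit /set hypervisorschedulertype classic   # pre-2019 behavior
# bcdedit /set hypervisorschedulertype core    # the 2019 default

# Hyper-V-Hypervisor event ID 2 logs the scheduler selected at boot
Get-WinEvent -FilterHashtable @{
    ProviderName = 'Microsoft-Windows-Hyper-V-Hypervisor'; Id = 2
} -MaxEvents 1 | Format-List
```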


MAC flapping with SET vSwitch

Hi.

Our network guys recently reported that they're seeing MAC addresses flapping between physical switches.

After analysis, we found that the reported MACs belong to VMs on various standalone Hyper-V hosts. We use SET vSwitches (switch-embedded teaming with Dynamic load distribution) on all hosts. The flapping MACs come only from some VMs on some hosts, and in some cases also from the pNICs of the hosts themselves.

Each Hyper-V host is physically connected using 2 (or 4) pNICs, with 1 (or 2) connections to each of the two physical switches.

According to the documentation (https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/virtualization/hyper-v-virtual-switch/RDMA-and-Switch-Embedded-Teaming.md#mac-address-use-on-transmitted-packets), this shouldn't be happening: Hyper-V should replace the source MAC when transmitting over a non-affinitized team member.

Any ideas what could be wrong? Some misconfiguration or bug?
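In case it helps others debugging the same thing, here's how we inspect the team (the vSwitch name is a placeholder). Switching the distribution mode to Hyper-V Port is one possible workaround to test, since it pins each VM's MAC to a single team member, so the physical switches should never see a MAC move between ports:

```powershell
# Inspect the SET team behind the vSwitch ('SETswitch' is a placeholder)
Get-VMSwitchTeam -Name 'SETswitch' | Format-List

# Possible workaround: Hyper-V Port distribution keeps a given MAC on one
# pNIC instead of spreading its flows across team members
Set-VMSwitchTeam -Name 'SETswitch' -LoadBalancingAlgorithm HyperVPort
```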


Cannot perform shared-nothing live migration from a management VM

Hello.

I have several standalone Hyper-V Server 2016 hosts. All hosts are configured for live migration using constrained delegation and Kerberos (as detailed here https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/set-up-hosts-for-live-migration-without-failover-clustering).

The problem is that despite these settings (LM enabled, LM network set, Kerberos protocol configured, constrained delegation configured), when I initiate the migration from a management computer using Hyper-V Manager or PowerShell, it fails with the dreaded error below. However, when I initiate the migration directly from the source Hyper-V host (using PowerShell, as there's no GUI), it proceeds successfully.

The management server is in the same domain and I'm logged on as a domain admin.

The error is:



There are events on the source hosts:

Event 20308, Hyper-V-VMMS: "Failed to authenticate the connection at the source host: no suitable credentials available."

Event 20306, Hyper-V-VMMS: "The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host ‘’: No credentials are available in the security package (0x8009030E)."

Event 21024, Hyper-V-VMMS: "Virtual machine migration operation for ‘’ failed at migration source ‘’. (Virtual machine ID 2CF6050E-A08A-4B0C-A321-648AF12517B4)"

Events on the destination host:

Event 22040, Hyper-V-VMMS: "Failed to receive data for a Virtual Machine migration: An existing connection was forcibly closed by the remote host. (0x80072746)."

Event 20402, Hyper-V-VMMS: "The Virtual Machine Management Service failed to authenticate the connection for a Virtual Machine migration at the destination host: An existing connection was forcibly closed by the remote host. (0x80072746)."

Event 20400, Hyper-V-VMMS: "The Virtual Machine Management Service blocked a connection request for a Virtual Machine migration from client address ‘192.168.30.7’: An existing connection was forcibly closed by the remote host. (0x80072746)."

Here's a screenshot of the constrained delegation configuration of one of the hosts; the configuration is identical on all hosts, i.e. cifs and the Microsoft Virtual System Migration Service are allowed for all other Hyper-V hosts.
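One thing worth checking (a guess, not a confirmed fix): with plain "Kerberos only" constrained delegation, the double hop from a third (management) machine is often what fails, and enabling protocol transition ("Use any authentication protocol") on the hosts' computer accounts is a commonly reported remedy. A sketch using the RSAT ActiveDirectory module (the host name is a placeholder):

```powershell
# Enable protocol transition on the Hyper-V host's computer account
Get-ADComputer 'HV-HOST1' |
    Set-ADAccountControl -TrustedToAuthForDelegation $true

# Verify which services the host is allowed to delegate to
Get-ADComputer 'HV-HOST1' -Properties msDS-AllowedToDelegateTo |
    Select-Object -ExpandProperty msDS-AllowedToDelegateTo
```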

Any help is appreciated.

