Degraded performance with Core scheduler


Since the Core scheduler is now the default in WS2019/Hyper-V Server 2019, we're evaluating the impact of this change.

I (think I) understand the implications of the Core scheduler, but from our testing, it’s severely degrading performance even in scenarios where it (in my opinion) shouldn’t.

Testing scenario: Dell PE R740xd with Xeon Gold 6244 (2 CPUs, 8 physical cores each, HT enabled), Hyper-V Server 2019 installed with updates up to 2019-11 (since I couldn't install the 2019-12 CU on ANY Hyper-V Server 2019 host). A single test VM running on the host:

 - WS2019 Standard, 4 vCPUs, 2-8 GB vRAM (dynamic), VM configuration version 9.0, HwThreadCountPerCore=0 (i.e. SMT enabled, the default on 2019)
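For reference, the per-VM SMT setting above can be inspected or changed with the standard Hyper-V cmdlets (run elevated on the host; the VM name below is a placeholder, not our actual VM name):

```shell
# Show the vCPU count and SMT setting for the test VM
# ("TestVM" is a placeholder name)
Get-VMProcessor -VMName "TestVM" | Format-List Count, HwThreadCountPerCore

# HwThreadCountPerCore = 0 means "inherit the host's SMT topology"
# (the WS2019 default); the VM must be powered off to change it:
Set-VMProcessor -VMName "TestVM" -HwThreadCountPerCore 0
```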

We're seeing degraded CPU performance in the VM under the Core scheduler. My understanding was that the Core scheduler would limit VM density (and therefore performance) only in situations where many VMs (vCPUs) are running/assigned on the host.

We tested synthetic performance with PassMark PerformanceTest 8 and Geekbench 5.

Here’s a comparison (the only change is the scheduler used, nothing else is modified):



Is anyone else seeing similar behavior? Is this expected?

If this is expected even in a scenario like this (a single VM or only a few VMs), then we'll be forced to switch back to the classic scheduler for the time being.
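For anyone who wants to reproduce the comparison: the only thing we change between runs is the hypervisor scheduler type, via the documented bcdedit setting (elevated prompt, host reboot required to take effect):

```shell
# Switch the host back to the classic scheduler (reboot required):
bcdedit /set hypervisorschedulertype classic

# ...and back to the core scheduler afterwards:
bcdedit /set hypervisorschedulertype core

# After reboot, the hypervisor logs the active scheduler type as
# event ID 2 in its operational channel; query it from PowerShell:
Get-WinEvent -FilterHashTable @{ProviderName="Microsoft-Windows-Hyper-V-Hypervisor"; ID=2} -MaxEvents 1
```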
