Simulating power failure in VM

I’m working on power failure consistency. This is very hard to test and debug for what should be obvious reasons.

Running my code in a VM helps because I can hit the "power off" button on the Hyper-V management window frame. However, that doesn’t allow me to specify where in my code I want the power failure event to occur, which is what I really need.

Is there a way in a VM to tell the hypervisor to simulate a power failure, i.e., do the same thing as hitting the "stop" button, at a specific point in code? Maybe executing a particular illegal instruction would do it?

If not, that would be a wonderful addition to Hyper-V because it would make testing power failure scenarios much easier.
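In the meantime, one workable pattern is a host-side trigger: `Stop-VM -TurnOff` hard-kills a VM much like pulling the plug, so a small listener on the host can fire it the instant the guest code reaches the chosen point and signals out. A sketch, with the VM name ('TestVM') and port (9999) as assumptions:

```powershell
# Host-side sketch: wait for the guest to signal over TCP, then hard
# power-off the VM with no guest shutdown sequence at all.
$listener = [System.Net.Sockets.TcpListener]::new([System.Net.IPAddress]::Any, 9999)
$listener.Start()
$client = $listener.AcceptTcpClient()     # blocks until the guest connects
$client.Close()
$listener.Stop()
Stop-VM -Name 'TestVM' -TurnOff -Force    # -TurnOff skips the guest OS entirely
```

In the guest, the code under test would open a TCP connection to the host address at the exact point where the failure should occur; the VM is then killed before much further I/O can complete, modulo a small network round-trip delay.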

Thanks in advance for any assistance.

Read full post . . . .

Failed Backup Jobs & Hyper-V Checkpoints


I have a ‘Windows Server 2019 Standard’ Hyper-V Host.

On the seven occasions that my Veeam backup job for a Virtual Machine has failed, a non-standard checkpoint with an odd icon for the Virtual Machine has been created/left on my Hyper-V Host.

When I right-click on one of these checkpoints, there is no option to delete it and hopefully trigger a merge.

How can I cleanly delete each of these lingering backup-based checkpoints whilst preserving the integrity of the production Virtual Machine?

I would prefer to do this in stages via the automatic merging of checkpoints if possible. I’m also keen that all changes are written back to the main disk files (VHDX) automatically.
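For what it’s worth, checkpoints left behind by backup software usually show up as "Recovery" checkpoints, which PowerShell can remove even when the GUI offers no delete option. A hedged sketch, assuming the VM is named 'MyVM':

```powershell
# Inspect the leftover recovery checkpoints first, then remove them;
# Hyper-V merges the .avhdx differencing disks back into the parent
# VHDX automatically once each checkpoint is deleted.
Get-VMSnapshot -VMName 'MyVM' -SnapshotType Recovery
Get-VMSnapshot -VMName 'MyVM' -SnapshotType Recovery | Remove-VMSnapshot
```

The merge runs in the background while the VM keeps running; removing one checkpoint at a time gives a staged cleanup.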

Any help gratefully received.

Kind regards,


Read full post . . . .

Failed to map guest I/O buffer for write access with status 0xC0000044

Hi guys,

I got this event, which keeps coming back every day. The hypervisor OS is Server 2019:

Failed to map guest I/O buffer for write access with status 0xC0000044

Within the VM (Server 2012 R2) I got this error at the same time:

Source: Disk event 153

Event Description: The IO operation on logical block 0x73d3682 for disk 0 (PDO name: Device 00000039) has been retried.
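To line the two sides up, the guest’s disk-retry events can be pulled with their timestamps; a sketch assuming the default System log inside the Server 2012 R2 guest:

```powershell
# List recent Disk event 153 entries so their timestamps can be
# matched against the host-side 0xC0000044 events.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'disk'; Id = 153 } -MaxEvents 20 |
    Select-Object TimeCreated, Message
```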


Read full post . . . .

Linux VM Not Booting


I have converted my VirtualBox CentOS 7 VM (.vmdk) to .vhd and created a new VM in Hyper-V.

However, once it starts, none of my partitions can be found and the boot ultimately fails.

Could you advise, please?

Thank you!

Read full post . . . .

Changing the update coordinator for CAU & Access Denied error

I’m working on setting up CAU for a three-node cluster of Hyper-V nodes. I’ve set it up in self-updating mode with the "RequireAllNodesOnline" setting enabled in the Run Profile. I’m running CAU v10.0.10011.16384. I’m running into an issue where it is failing with the error: "Failed to restart "servername": (ClusterUpdateException) Failed to restart "servername": (Win32Exception) Access is denied ==> (Win32Exception) Access is denied".

I understand this is a permission issue but I’ve already:

- I’ve made sure the firewall on all the servers in the cluster allows the "Remote Shutdown" rule group via an inbound rule.

- I’ve verified that the temporary account CAU creates is being added to the "Administrators" group on each cluster node.

I want to make sure I don’t make security too lax in setting this up, so any ideas on what I’m missing?

I’ve considered adding all the cluster nodes to each other’s "Administrators" and "Remote Management Users" groups, but that would open up this access to any user on those servers, which is why it has stayed just an idea.

Also, I’ve noticed that the Update Coordinator is another node in the cluster instead of a third-party server coordinating the CAU. I’ve read in the Windows docs that the Update Coordinator should not be part of the cluster. I may have set this up on a node to begin with, but how do I change the Update Coordinator now?
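One way to approach both points at once, assuming the cluster is named 'MyCluster': CAU’s own validation cmdlet flags permission and firewall gaps, and running CAU in remote-updating mode from a machine outside the cluster makes that machine the Update Coordinator (in self-updating mode the coordinator is, by design, one of the nodes). A sketch:

```powershell
# Run from a management machine that is NOT a cluster node.
Test-CauSetup -ClusterName 'MyCluster'    # validates firewall rules, permissions, etc.

# Remote-updating mode: this machine becomes the Update Coordinator.
Invoke-CauRun -ClusterName 'MyCluster' -Force

# To stop the nodes from self-coordinating, disable the clustered CAU role:
Disable-CauClusterRole -ClusterName 'MyCluster' -Force
```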

Anyways any help would be appreciated. 

Read full post . . . .

Choose live migration or quick migration in certain conditions


I want to ask: which method, live or quick migration, does Hyper-V use by default?

- When one node fails or crashes because of a hardware fault or power loss, are the VMs on that host moved automatically by live migration or by quick migration?

- When one node is restarted manually, without shutting down the VMs, is live or quick migration used?

I ask because many articles I’ve read say quick migration needs more downtime than live migration.

When maintaining the host (installing memory, etc.), which method should I use: live migration or quick migration?
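For the planned-maintenance case, draining the node is the usual approach: on Server 2012 R2 and later, a drain live-migrates the running VMs off the node. (An unplanned crash is different again: the VMs are not migrated at all, but restarted cold on a surviving node by failover.) A sketch with an assumed node name:

```powershell
# Drain the node before maintenance: running clustered VMs are moved off
# via live migration; -Wait blocks until the drain completes.
Suspend-ClusterNode -Name 'Node1' -Drain -Wait

# ...install memory, patch, reboot...

# Bring the node back and move its VMs home again.
Resume-ClusterNode -Name 'Node1' -Failback Immediate
```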


Read full post . . . .

1808 Cluster AutoBalancer event invoked when host resources not taxed


I have a two-node Hyper-V HA cluster running on Server 2016.  This is in a test environment with very little load on it right now.  There are a total of 5 VMs (all running Server 2016), with three (DB1, DB2, and App) on node1 and two on node2. 
I have not set any preferred nodes for the VMs, and the Balancer settings are at the cluster defaults.  The three VMs on node1 communicate with each other, which is why they are together.  The two hosts are identical, with 16 cores (2 sockets) and
64 GB of RAM each.  Hyperthreading is disabled and virtualization is enabled in the BIOS.  Each VM in the cluster has 1 vCPU and 4 GB of RAM, with the exception of DB1, which has 2 vCPUs.

The cluster node assignments have been in this configuration for a while, working fine.  I’ve always been able to live migrate them between the nodes without issue when needed for maintenance, but they are always put back on their respective nodes for
normal operation.

A couple days ago, McAfee on App went nuts and consumed a lot of CPU for a sustained period.  This appears to have been the catalyst for
the 1808 event and the cluster moving both that server and DB2 to the other node.  We were able to quiet down McAfee later so it wouldn’t consume so much CPU on App, but didn’t see anything on DB2 that would increase utilization.

Knowing all that, my question is: why would the cluster move a server from one node to the other when the physical resources of the original node were not even close to hitting any limits?  I could have had all five VMs on one node, for a total of 6 vCPUs and 20 GB of RAM, and even if all the VMs combined were using all of their virtual resources, it would still be far below what the physical host has.  So, to ask my question another way: shouldn’t an AutoBalancer
event be triggered by collective host utilization, and not by that of the VMs on it per se?

Please explain why this VM moved on its own when host resources were not taxed.
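For reference, the Server 2016 auto-balancer is controlled by two common cluster properties, and it reacts to node load heuristics rather than absolute capacity; lowering its aggressiveness (or disabling it) can be sketched as:

```powershell
# Inspect the auto-balancer settings (Server 2016 cluster common properties).
(Get-Cluster).AutoBalancerMode     # 0 = disabled, 1 = balance on node join, 2 = always (default)
(Get-Cluster).AutoBalancerLevel    # 1 = low (~80% load), 2 = medium (~70%), 3 = high (~60%)

# Example: only rebalance when a node passes roughly 80% utilization.
(Get-Cluster).AutoBalancerLevel = 1
```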

Thank You

Read full post . . . .

Access local disks from Hyper-V Windows XP

I’m sure I’m missing something really obvious, but having created a virtual Windows XP machine under Hyper-V on Win 10, how do I access the local disk on the real machine?

I have a program to install from there
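XP guests predate the newer integration features (enhanced session, drag-and-drop), so one straightforward route is a small transfer disk built on the host. A sketch with assumed paths and VM name, run while the XP VM is shut down:

```powershell
# Build a 1 GB transfer disk, format it NTFS, and copy the installer in.
New-VHD -Path 'C:\Temp\transfer.vhd' -SizeBytes 1GB -Dynamic
$vol = Mount-VHD -Path 'C:\Temp\transfer.vhd' -Passthru |
    Initialize-Disk -PartitionStyle MBR -Passthru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
Copy-Item 'C:\Installers\setup.exe' "$($vol.DriveLetter):\"
Dismount-VHD -Path 'C:\Temp\transfer.vhd'

# Attach it to the XP VM on the IDE controller (Generation 1).
Add-VMHardDiskDrive -VMName 'WinXP' -ControllerType IDE -Path 'C:\Temp\transfer.vhd'
```

The disk then appears inside XP as an ordinary local drive.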

Thanks (and again, apologies if the answer is obvious)

Phil G

Read full post . . . .

Can’t transfer VHD file to NTFS External Hard Drive

I am on Windows 10 Ent 64-bit, and I’m trying to copy a Hyper-V VHD hard drive that is 190 GB. I have 250 GB free on my NTFS external hard drive. When I try to copy the file, it fails, saying that I need more space.

Should I change to an exFAT partition, or should I try another copy routine?

I’ve tried the Hyper-V export, but it failed, so I am copying manually.
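Worth checking first: a "not enough space" failure on a single file larger than 4 GB is the classic symptom of a FAT32 volume, even when the drive is believed to be NTFS. A quick sketch, assuming the external drive is letter E:

```powershell
# Confirm the file system the external drive actually uses.
Get-Volume -DriveLetter E | Select-Object DriveLetter, FileSystem, SizeRemaining

# If it reports FAT32, an in-place conversion keeps the existing data:
convert.exe E: /FS:NTFS
```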

Read full post . . . .

Hyper-V QoS

Hello everyone,

Windows Server 2016 Hyper-V QoS is configured with bandwidth in Weight mode. How do we ensure that inbound traffic will not saturate other network traffic, and what is the recommended teaming mode?
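A hedged sketch of the Weight-mode setup, with switch, NIC, and VM names as assumptions. Note that Hyper-V minimum-bandwidth QoS shapes outbound traffic from the VMs through the vSwitch; inbound pressure is normally handled separately, e.g. by policing or DCB on the physical network side.

```powershell
# Create the vSwitch in Weight mode, then give each VM a relative weight;
# weights are proportional shares of the link, not absolute caps.
New-VMSwitch -Name 'External' -NetAdapterName 'NIC1' -MinimumBandwidthMode Weight
Set-VMNetworkAdapter -VMName 'VM1' -MinimumBandwidthWeight 50
Set-VMNetworkAdapter -VMName 'VM2' -MinimumBandwidthWeight 20
```

On Server 2016, Switch Embedded Teaming is the commonly recommended teaming approach for Hyper-V hosts, though it is worth verifying its compatibility with the chosen QoS mode before deploying.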


Read full post . . . .
