Go-Que offers news and technical knowledge articles from all IT visualization software brands. Go-que is sponsored by go-que.net & go-que.nl.

Latency during live migration


I’ve been experiencing latency issues when I live migrate a VM from one node to another: I get 2–3 ping timeouts from the VM during the migration process. Is this normal? My 1Gb NICs are directly connected between the two node servers and are dedicated to the live migration network, on a different subnet from the other connection (192.168.xx.xx).

Is there any way to improve this?

Thanks for any answers.
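As a hedged sketch of what often helps on a dedicated 1Gb migration link: pin live migration traffic to the direct-connect subnet and enable compression. The subnet below is a placeholder for your migration network.

```powershell
# On each node: restrict live migration traffic to the dedicated subnet
# (replace 192.168.0.0/24 with your direct-connect migration subnet)
Add-VMMigrationNetwork 192.168.0.0/24

# Compression usually gets more throughput out of a 1Gb link
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
```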

Read full post . . . .

Trunk VM network


I have a server running Windows Server 2016.

I want to create a VM network for my Kerio server, and I used this PowerShell command to trunk the VM network:

Set-VMNetworkAdapterVlan -VMName Redmond -Trunk -AllowedVlanIdList 1-100 -NativeVlanId 10

Unfortunately, my Kerio server doesn’t work with the VLAN.

can you help me?
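One thing worth checking, as a hedged sketch: trunk mode delivers tagged frames into the guest, so a guest OS that isn’t VLAN-aware won’t see the traffic. Access mode tags and untags on the host side instead. The VM name follows the example above; the VLAN ID is illustrative.

```powershell
# Inspect the adapter's current VLAN configuration
Get-VMNetworkAdapterVlan -VMName Redmond

# If the guest is not VLAN-aware, use access mode instead, so the host
# handles tagging for a single VLAN (10 here, as an example)
Set-VMNetworkAdapterVlan -VMName Redmond -Access -VlanId 10
```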

Read full post . . . .

Upgrading Hyper-V 2008 R2 Core to 2012 R2

I’m trying to upgrade a Hyper-V 2008 R2 Core host to 2012 R2 Core, since I know there’s no direct path to 2016. I’ve downloaded an .ISO file from MS that I believe is the Core version of 2012 R2. During the setup process, I elect to upgrade instead of the custom option, which I believe would install a fresh copy of the product. I then get a message informing me that I can’t upgrade the version I’m running, which again is the Core version. 

My question is: if I decide to install a fresh instance of 2012 R2 Core (or 2016, for that matter), can I manually restore my VMs on the newer version? Other than backing up the VHDs and XML files, is there anything else I need to successfully restore them?
Would using the import process work in this case once the new version is up and running? Or, worst case, could I make note of the key VM settings and rebuild each VM with the same settings, pointing it at the existing VHD from 2008 R2?

The environment that I’m working with is not a production environment but there are some VMs that I don’t want to rebuild from scratch. All of my VMs are located on the D drive which is different from the installation drive. 

Thanks in advance for your help.
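On the import question: Hyper-V 2012 R2 can register a 2008 R2 VM in place from its configuration files, so copying the VHDs plus the XML (keeping the relative folder layout intact) is usually enough. A hedged sketch — the path and the GUID placeholder below are examples:

```powershell
# Register an existing VM on the new host from its copied configuration
# (point -Path at the VM's .xml config under the "Virtual Machines" folder)
Import-VM -Path 'D:\Hyper-V\Virtual Machines\<VM-GUID>.xml'
```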


Read full post . . . .

Simulating power failure in VM

I’m working on power failure consistency. This is very hard to test and debug for what should be obvious reasons.

Running my code in a VM helps because I can hit the "power off" button on the Hyper-V management window frame. However, that doesn’t allow me to specify where in my code I want the power failure event to occur, which is what I really need.

Is there a way in a VM to tell the hypervisor to simulate a power failure, i.e., do the same thing as hitting the "stop" button, at a specific point in code? Maybe executing a particular illegal instruction would do it?

If not, that would be a wonderful addition to Hyper-V because it would make testing power failure scenarios much easier.

Thanks in advance for any assistance.
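There is no documented guest-side instruction for this, but the host-side equivalent of the stop button is Stop-VM -TurnOff. One hedged workaround is to have the test code drop a marker the host can watch for; the share path and VM name below are hypothetical.

```powershell
# On the Hyper-V host: wait for the guest's signal, then cut power hard
# (-TurnOff is the scripted equivalent of the "Turn Off" button)
while (-not (Test-Path '\\testvm\share\kill-now.txt')) {
    Start-Sleep -Milliseconds 100
}
Stop-VM -Name 'TestVM' -TurnOff -Force
```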

Read full post . . . .

Failed Backup Jobs & Hyper-V Checkpoints


I have a ‘Windows Server 2019 Standard’ Hyper-V Host.

On the seven occasions that my Veeam backup job for a Virtual Machine has failed, a non-standard checkpoint with an odd icon for the Virtual Machine has been created/left on my Hyper-V Host.

When I right-click on one of these checkpoints, there is no option to delete it and hopefully trigger a merge.

How can I cleanly delete each of these lingering backup-based checkpoints whilst preserving the integrity of the production Virtual Machine?

I would prefer to do this in stages by the automatic merging of checkpoints if possible.  I’m also keen that
all changes are written back to the main disk files (VHDX) automatically.

Any help gratefully received.
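Those lingering checkpoints are typically "recovery" checkpoints, which the GUI won’t offer to delete. As a hedged sketch, PowerShell can remove them and trigger the automatic merge back into the VHDX; the VM name is an example.

```powershell
# List the leftover recovery checkpoints for the VM
Get-VMSnapshot -VMName 'MyVM' -SnapshotType Recovery

# Remove them; Hyper-V merges the .avhdx changes back into the VHDX online
Get-VMSnapshot -VMName 'MyVM' -SnapshotType Recovery | Remove-VMSnapshot
```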

Kind regards,


Read full post . . . .

Failed to map guest I/O buffer for write access with status 0xC0000044

Hi guys,

I’m getting this event, which comes back every day. The hypervisor OS is Server 2019:

Failed to map guest I/O buffer for write access with status 0xC0000044

Within the VM (Server 2012 R2), I get this error at the same time:

Source: Disk event 153

Event Description: The IO operation on logical block 0x73d3682 for disk 0 (PDO name: Device 00000039 ) has been retried.
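To see how often the retries recur and correlate them with the host-side event, the in-guest event 153 entries can be pulled from the System log; a small sketch:

```powershell
# Inside the guest: list recent disk event 153 retries with timestamps
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'disk'; Id = 153 } |
    Select-Object TimeCreated, Message
```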


Read full post . . . .

Linux VM Not Booting


I have converted my VirtualBox CentOS 7 VM (.vmdk) to .VHD and created a new VM in Hyper-V.

However, once it starts, none of my partitions can be found and the boot ultimately fails.

Could you advise, please?

Thank you!
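A common cause here is a generation mismatch: a VirtualBox CentOS 7 guest was almost certainly installed for BIOS boot, so the converted disk needs a Generation 1 VM (BIOS/IDE), not Generation 2 (UEFI). A hedged sketch — the name, memory size, and path are examples:

```powershell
# Create a Generation 1 (BIOS/IDE) VM and attach the converted disk
New-VM -Name 'centos7' -Generation 1 -MemoryStartupBytes 2GB `
       -VHDPath 'D:\VMs\centos7.vhd'
```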

Read full post . . . .

Changing the update coordinator for CAU & Access Denied error

I’m working on setting up CAU for a three-node cluster of Hyper-V nodes. I’ve set it up in self-updating mode with the "RequireAllNodesOnline" setting enabled in the Run Profile. I’m running CAU v10.0.10011.16384. I’m running into an issue where it fails with the error: "Failed to restart "servername": (ClusterUpdateException) Failed to restart "servername": (Win32Exception) Access is denied ==> (Win32Exception) Access is denied".

I understand this is a permission issue but I’ve already:

- made sure the firewall on every server in the cluster allows, via an inbound rule, the "Remote Shutdown" group of rules

- verified that the temporary account CAU creates is being added to the "Administrators" group on each cluster node. 

I want to make sure I don’t set this up with overly lax security, so any ideas on what I’m missing? 

I’ve considered adding all the cluster nodes to each other’s "Administrators" and "Remote Management Users" groups, but that would open this access up to any user on those servers, so it has stayed just an idea. 

Also, I’ve noticed that the Update Coordinator is another node in the cluster instead of a third-party server coordinating CAU. I’ve read in the Windows docs that the Update Coordinator should not be part of the cluster. I may have set this up on a node to begin with, but how do I change the Update Coordinator now?

Anyway, any help would be appreciated. 
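On moving the Update Coordinator off the cluster: in self-updating mode the coordinator is, by design, one of the nodes (the CAU clustered role fails over between them). As a hedged sketch, you can remove the self-updating role and instead drive runs from a machine outside the cluster; the cluster name is an example.

```powershell
# Run these from a management machine that is NOT a cluster node

# Remove the self-updating CAU clustered role from the cluster
Remove-CauClusterRole -ClusterName 'HVCluster' -Force

# Start a remote-updating run, with this machine as the Update Coordinator
Invoke-CauRun -ClusterName 'HVCluster' -Force
```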

Read full post . . . .

Choosing live migration or quick migration in certain conditions


I want to ask which default method, live or quick migration, Hyper-V uses:

 - When one node fails or crashes due to hardware failure or power loss, will any VM on that host automatically be moved by live migration or by quick migration?

- When one node is restarted manually, without shutting down the VMs, is live or quick migration used?

I ask because many articles I’ve read say quick migration needs more downtime than live migration.

When doing maintenance on a host (installing memory, etc.), which method should I use: live migration or quick migration?
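For planned maintenance the method can be chosen explicitly; after an unplanned crash, neither method applies because the VM state is gone, and HA simply restarts the VM on another node. A hedged sketch with hypothetical VM and node names:

```powershell
# Move one clustered VM with an explicit migration type (Live, Quick, or Shutdown)
Move-ClusterVirtualMachineRole -Name 'VM1' -Node 'Node2' -MigrationType Live

# Or drain the whole node before maintenance (live-migrates roles off)
Suspend-ClusterNode -Name 'Node1' -Drain
```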


Read full post . . . .

1808 Cluster AutoBalancer event invoked when host resources not taxed


I have a two-node Hyper-V HA cluster running on Server 2016.  This is in a test environment with very little load on it now.  There are a total of 5 VMs (all running Server 2016), with three (DB1, DB2, and App) on node1 and two on node2. 
I have not set any preferred nodes for the VMs, and the Balancer settings are at the cluster defaults.  The three VMs on node1 communicate with each other heavily, which is why they are together.  The two hosts are identical with 16 cores (2 sockets) and
64GB of RAM each.  Hyperthreading is disabled and virtualization is enabled in the BIOS.  Each VM in the cluster has 1 vCPU and 4GB of RAM, with the exception of DB1, which has 2 vCPUs.

The cluster node assignments have been in this configuration for a while, working fine.  I’ve always been able to live migrate them between the nodes without issue when needed for maintenance, but they are always put back on their respective nodes for
normal operation.

A couple days ago, McAfee on App went nuts and consumed a lot of CPU for a sustained period.  This appears to have been the catalyst for
the 1808 event and the cluster moving both that server and DB2 to the other node.  We were able to quiet down McAfee later so it wouldn’t consume so much CPU on App, but didn’t see anything on DB2 that would increase utilization.

Knowing all that, my question is: Why would the cluster move a server from one node to the other when the physical resources of the original node it was on were not even close to hitting any limits?  I could have had all five VMs on one node, with a
total of 6 procs and 20GB of RAM, and if all the VMs combined were using all of their virtual resources it would still be much lower than what the physical host has.  So, to ask my question in another way… Shouldn’t the reason to trigger an AutoBalancer
event be caused by collective Host utilization and not that of the VMs on them per se?

Please explain why this VM moved on its own when host resources were not taxed.
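For reference, the Server 2016 auto-balancer keys off per-node load and can be inspected or tuned through cluster common properties; a hedged sketch (the mode and level semantics below are approximate):

```powershell
# Inspect the auto-balancer settings on the cluster
(Get-Cluster).AutoBalancerMode    # 0 = disabled, 1 = balance on node join, 2 = also periodically (default)
(Get-Cluster).AutoBalancerLevel   # 1 = low, 2 = medium, 3 = high aggressiveness

# Example: only rebalance when a node joins, not on a periodic timer
(Get-Cluster).AutoBalancerMode = 1
```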

Thank You

Read full post . . . .
