
Loss of VMs when server is restarted but its roles had already been drained to another server

Hi,

We have a failover cluster of two Windows Server 2012 Datacenter Hyper-V hosts which are NOT managed by VMM; all management is done via Failover Cluster Manager. They use Cluster Shared Volumes (CSVs) for their storage.

When the patching admin wants to patch the servers, he drains the roles on the first host in Failover Cluster Manager and the VMs are live migrated to the second host. The VMs are running and accessible at this point; however, when he patches the first server and restarts it, the VMs on the second server go down.
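
(For context, the drain-and-patch sequence he follows is roughly equivalent to the script below, sketched with Python driving the FailoverClusters PowerShell cmdlets; the host name Host1 is a placeholder for our first node.)

```python
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its stdout; raises on failure."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Drain all clustered roles (the VMs) off the first host; -Wait blocks
# until every role has been live migrated away.
run_ps("Suspend-ClusterNode -Name Host1 -Drain -Wait")

# ... patch and reboot Host1 here ...

# Once Host1 is back up, un-pause it so it can own roles again.
run_ps("Resume-ClusterNode -Name Host1")
```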

I was under the impression that since the storage is on CSVs accessible to both hosts, and the live VM workload is running on the second server, this shouldn't happen. But it seems the VMs lose access to storage and things go awry: the VMs on host 2 can no longer be pinged, and Failover Cluster Manager on host 2 freezes until the first host finishes rebooting. Once the first host comes back up, the VMs go live again.

I've had a look at Failover Cluster Manager and can't see anything amiss. I did note that the disk's Owner Node in the Disks section is the first host. Do we need to manually change the Owner Node to the second host (the one with the live VM load) before we take down the first? I would have thought that control of the storage would be, well, seamless; I assumed that because these are Cluster Shared Volumes, both hosts have access to the disk at all times.
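
(In case it helps, this is roughly how I'd check the Owner Node from a script, and how I'd move it by hand if that turns out to be necessary; a sketch along the same lines as above, where "Cluster Disk 1" and Host2 are placeholders for the actual CSV name and second node.)

```python
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its stdout; raises on failure."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# List each CSV with the node that currently coordinates it.
print(run_ps("Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State"))

# Manually move CSV ownership to the second host before rebooting the first.
run_ps('Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node Host2')
```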

I'd appreciate any assistance!


