Cluster Shared Volume (CSV) Host Issue

I have a pre-production environment with two hosts set up in a Hyper-V cluster. Both hosts are configured identically, and each has a CSV mapped to an external storage array. Both hosts connect to the storage array over external SAS
via HBAs installed in each server. The cluster passes the cluster validation test, and I can successfully live migrate VMs between the hosts.

The host designated as the owner of the CSV reads and writes directly to the storage array over its external SAS connection, as expected. The non-owner host can access the CSV as well, but its reads and writes do NOT cross its own external SAS interface
to the storage appliance; instead, all I/O goes out the network interface to the other host (the CSV owner), which then forwards it to the storage array.
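This symptom matches CSV redirected access. On Server 2012 or later, you can ask the cluster directly why a node is redirecting rather than inferring it from traffic; a quick check (where "Cluster Disk 1" is a placeholder for your CSV's actual resource name):

```powershell
# Run on either node; shows each node's CSV access mode and, if the state
# is redirected, the cluster's stated reason for the redirection.
Get-ClusterSharedVolumeState -Name "Cluster Disk 1" |
    Format-List Node, StateInfo, FileSystemRedirectedIOReason, BlockRedirectedIOReason
```

A healthy node should report a StateInfo of Direct; a node forced through the owner will report FileSystemRedirected or BlockRedirected, and the reason fields usually point at the cause (for example, the node cannot see the disk).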

If I move CSV ownership to the malfunctioning host, the problem reverses. Whichever host is the non-owner of the CSV will not send storage I/O directly to the storage appliance over its SAS connection; it sends the traffic across the
network to the CSV owner instead.

Failover Cluster Manager on both hosts shows the cluster shared volume, and the CSV is mapped on both hosts at C:\ClusterStorage\Volume1. If I copy a file from the local host to the CSV on the CSV owner, the file transfers directly to the storage appliance
over the external SAS interface at roughly 12 Gbps (the SAS interface speed). If I copy a file from the non-owner host to the mapped CSV, the storage I/O goes out the network interface to the other host at approximately 2-3 Gbps, and that host (the owner)
then sends the storage I/O to the storage appliance.
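One common cause of this pattern is that the non-owner node cannot see the LUN over its own SAS path (cabling, zoning, or a persistent-reservation problem), so the cluster silently falls back to routing I/O through the owner. A sketch of a check to run on each node, assuming the shared LUN is presented as a clustered disk:

```powershell
# On each host, confirm the shared LUN is visible to the local SAS HBA.
# A disk that is missing here, or not Online/healthy, forces redirected
# I/O through the CSV owner.
Get-Disk | Where-Object IsClustered |
    Format-Table Number, FriendlyName, BusType, OperationalStatus, Size
```

Both hosts should list the same array LUN with BusType SAS; if the non-owner does not, the problem is below the cluster, on the SAS/MPIO layer.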

The network adapter carrying the redirected storage I/O is the cluster-only interface on the 192.168.1.0/24 network.
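That is consistent with redirected CSV traffic, which rides the cluster network with the lowest metric. To confirm which networks the cluster will use for CSV/cluster traffic, the roles can be listed (Role 1 = cluster only, 3 = cluster and client, 0 = none):

```powershell
# List cluster networks with their role and metric; CSV redirected
# traffic prefers the enabled network with the lowest metric.
Get-ClusterNetwork | Format-Table Name, Address, Role, Metric
</imports>
```

Seeing the 192.168.1.0/24 network here with Role 1 and the lowest metric would explain why the redirected I/O lands on that interface.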

Hardware Setup:
Host 1
External SAS connection to the storage array
MPIO enabled
Two 1 Gbps NICs teamed for guest traffic to the corporate network - 10.13.0.0/20
One 1 Gbps NIC for management traffic - 10.13.0.0/20
One 10 Gbps NIC for cluster-only traffic (live migrations) - 192.168.1.0/24

Host 2
External SAS connection to the storage array
MPIO enabled
Two 1 Gbps NICs teamed for guest traffic to the corporate network - 10.13.0.0/20
One 1 Gbps NIC for management traffic - 10.13.0.0/20
One 10 Gbps NIC for cluster-only traffic (live migrations) - 192.168.1.0/24
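Since MPIO is enabled on both hosts, it is also worth verifying that each host actually has active MPIO paths to the array; both hosts should report the same devices and path counts. The built-in mpclaim tool can show this:

```powershell
# On each host, list MPIO-claimed disks and their paths; a host with
# zero paths to the shared LUN will fall back to redirected CSV I/O.
mpclaim -s -d
```

If the non-owner host shows no paths (or fewer paths) to the shared LUN than the owner, the fix lies in the SAS connection or MPIO claiming, not in the cluster configuration.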
