Hyper-V failover solution for systems generating a large number of files

Hi All,

In our environment we have a 2-node cluster, and we also have a cluster at our DR site that we replicate our VMs to in the event of a disaster. Our production site has 10 VMs (plus many others); those 10 servers receive data from
our users and write the data back to a PRIMARY server. The PRIMARY server then tags the data, indexes it, and saves it to a folder that is created daily. On average, our users/servers generate 10,000 files a day, totalling 2GB.

As you can see, we have a single point of failure in the PRIMARY server, and we are looking for a solution that gives us redundancy should the PRIMARY server fail. We have discussed several options, however none of them feel right due to other
issues they may present, or because they are extremely complicated.

We have discussed the option to:

  1. Create two new VMs on the cluster/nodes and, within the two VMs, create another cluster. We would then add the file share role between them, hosting a UNC/file share. We would replicate the VMs to DR via Hyper-V Replica and use SAN replication for the data.
    - The issue we have with this is that the VMs are hosted on the existing cluster; we would need to connect iSCSI directly to the VMs, however the attached storage would not have failover/MPIO capabilities as the volumes are not CSVs.
  2. Add the file server role to the existing cluster, and the 10 servers would all write to the UNC path - The issue here is failover with regards to DR. Since our DR cluster is separate from our production site, how would we replicate the file server/share
    role to the DR cluster?
    We could use DFSR, however I am not sure DFSR is a good idea given the amount of files and data being generated.
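For what it's worth, the DFSR side of option 2 would look roughly like the sketch below. The server names (`PROD-FS`, `DR-FS`), group/folder names, and paths are all placeholders, and whether DFSR can keep up with ~10,000 files / 2GB a day is exactly the open question here - this is just what the setup would involve, not a recommendation:

```powershell
# Assumption: run on each file server (placeholder names PROD-FS and DR-FS)
# with the DFSR management tools available.
Install-WindowsFeature FS-DFS-Replication -IncludeManagementTools

# Create a replication group and a replicated folder spanning both sites
New-DfsReplicationGroup -GroupName "PrimaryData"
New-DfsReplicatedFolder -GroupName "PrimaryData" -FolderName "DailyIndex"
Add-DfsrMember -GroupName "PrimaryData" -ComputerName "PROD-FS","DR-FS"

# Connection from production to DR (production is the authoritative source)
Add-DfsrConnection -GroupName "PrimaryData" `
    -SourceComputerName "PROD-FS" -DestinationComputerName "DR-FS"

# Point each member at its local copy of the data (paths are placeholders)
Set-DfsrMembership -GroupName "PrimaryData" -FolderName "DailyIndex" `
    -ComputerName "PROD-FS" -ContentPath "D:\Data" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "PrimaryData" -FolderName "DailyIndex" `
    -ComputerName "DR-FS" -ContentPath "D:\Data" -Force
```

At this file volume the staging quota would likely need tuning (`Set-DfsrMembership -StagingPathQuotaInMB`), and the daily-folder churn is the kind of workload where DFSR backlogs tend to build up, which is part of my hesitation.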

I am reaching out to everyone to see if you have any other solutions we could use to achieve our goal simply. I look forward to everyone’s suggestions.
