
PMThomas

Understanding resilient options for File Server in cluster

So I have an established Server 2016 Hyper-V cluster, running a variety of Windows Server VMs.

If I take an existing File Server VM, it can run on any one of the 4 nodes and can be live migrated around easily. (I mean a general-purpose file server for users’ documents.)

What options/solutions are there to both increase resilience and ease maintenance tasks for file servers?

We already use DFS-N, so the FS is accessed by users via \\DOMAIN\Share, but only the FS VM itself is contained in the DFS-N root/config. (This was simply done after I got fed up with UNC paths changing every time I replaced a physical FS, way back when!)

I have done various reading online and I feel I’m missing a concept or a solution here.

What I really want is a 2nd FS VM that accesses the same data drive being shared by the first VM, and that can be added as a 2nd server in the DFS-N config, so that if one FS VM is offline for maintenance (for example) staff can still access their data via \\DOMAIN\Share. It would make it easier for me to do maintenance during the day with a 2nd, or even a 3rd, VM running.
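
To show what I mean on the namespace side, here’s a rough sketch of the sort of thing I’m picturing (hypothetical names throughout: a \\DOMAIN\Share\Data folder and FS1/FS2 as the two file server VMs; it assumes the DFSN PowerShell module from the DFS Management tools):

    # Hypothetical names: \\DOMAIN\Share\Data namespace folder, FS1/FS2 file server VMs
    # See which targets the namespace folder currently refers clients to
    Get-DfsnFolderTarget -Path '\\DOMAIN\Share\Data'

    # Add the second file server VM as an additional target for the same folder
    New-DfsnFolderTarget -Path '\\DOMAIN\Share\Data' -TargetPath '\\FS2\Data'

    # Before maintenance on FS1, stop referring new clients to it; re-enable afterwards
    Set-DfsnFolderTarget -Path '\\DOMAIN\Share\Data' -TargetPath '\\FS1\Data' -State Offline
    Set-DfsnFolderTarget -Path '\\DOMAIN\Share\Data' -TargetPath '\\FS1\Data' -State Online

As I understand it, that only sorts out the referrals; the bit I’m missing is how the two VMs both present the same data in the first place.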

Maintenance on the Hyper-V cluster nodes is easy during the day: I just live migrate running VMs to another node and carry on with the work.

I have read up on host clusters and shared VHDX files, but I am still confused as to how to achieve my goal.
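
From the reading so far, the shared-disk route seems to look roughly like this (only a sketch, with hypothetical paths and VM names, assuming Server 2016 VHD Sets; the two VMs would presumably then need a guest failover cluster inside them to coordinate access to the disk):

    # Hypothetical paths/VM names; run on a cluster node with the Hyper-V PowerShell module
    # Create a VHD Set (.vhds) on a Cluster Shared Volume for the shared data disk
    New-VHD -Path 'C:\ClusterStorage\Volume1\Shared\FSData.vhds' -SizeBytes 2TB -Dynamic

    # Attach the same VHD Set to both file server VMs as a shared drive
    Add-VMHardDiskDrive -VMName 'FS1' -Path 'C:\ClusterStorage\Volume1\Shared\FSData.vhds' -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName 'FS2' -Path 'C:\ClusterStorage\Volume1\Shared\FSData.vhds' -SupportPersistentReservations

Whether that, plus a clustered file server role inside the guests, is the right fit compared with keeping two separate copies of the data is exactly the concept I feel I’m missing.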

If it makes any difference, this is all running on a Dell VRTX, which has 4 server blades and built-in local storage that’s made available to all 4 blades via MPIO.


DFS and resilient FS options

Morning

I have a 4-node Server 2016 Hyper-V cluster running on a Dell VRTX (built-in shared storage); this provides a number of VMs, including a few file servers.

We use DFS Namespaces for most of the file servers, simply to make things easier as servers get changed, upgraded, renamed, etc.

Let’s take the example of FS1 (File Server 1): it has a DFS namespace with a number of folders/shares under it.

I want a solution with an FS2 that serves the same DFS namespace, so that we can do maintenance on FS1 while users continue to work, then maintenance on FS2 once FS1 is back up.
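
To make the question concrete, what I have in my head is something along these lines (purely a sketch with hypothetical names: a replication group between FS1 and FS2 keeping D:\Shared in sync, and a namespace folder \\DOMAIN\Files\Shared pointing at both; it assumes the DFSR and DFSN modules from the DFS Management tools):

    # Hypothetical names throughout
    # Replicate the folder contents between the two file servers with DFS Replication
    New-DfsReplicationGroup -GroupName 'Shared-RG'
    New-DfsReplicatedFolder -GroupName 'Shared-RG' -FolderName 'Shared'
    Add-DfsrMember -GroupName 'Shared-RG' -ComputerName 'FS1','FS2'
    Add-DfsrConnection -GroupName 'Shared-RG' -SourceComputerName 'FS1' -DestinationComputerName 'FS2'
    Set-DfsrMembership -GroupName 'Shared-RG' -FolderName 'Shared' -ComputerName 'FS1' -ContentPath 'D:\Shared' -PrimaryMember $true -Force
    Set-DfsrMembership -GroupName 'Shared-RG' -FolderName 'Shared' -ComputerName 'FS2' -ContentPath 'D:\Shared' -Force

    # Then point the existing namespace folder at both servers
    New-DfsnFolderTarget -Path '\\DOMAIN\Files\Shared' -TargetPath '\\FS2\Shared'

I’m not sure how well that would cope with files that are open all day, which is partly why I’m asking what else is available.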

What options are available with our Hyper-v Cluster to achieve this?

One of the other file servers is simply for users’ H: drives; a share exists for each user, for example \\FS3\fred$.

What options do we have for a 2nd server for this, to allow the same kind of maintenance?
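
The only way I can picture the H: drive case working is a namespace folder per user with a target on each server, roughly like this (hypothetical names FS3/FS4 and \\DOMAIN\Home; just a sketch of the idea, not something I have tested):

    # Hypothetical names; enumerate the existing per-user shares on FS3 and create a
    # DFS folder for each one with targets on both FS3 and FS4
    $shares = Get-SmbShare -CimSession 'FS3' |
        Where-Object { $_.Name -like '*$' -and $_.Name -notin 'C$','D$','ADMIN$','IPC$' }

    foreach ($share in $shares) {
        $user = $share.Name.TrimEnd('$')
        New-DfsnFolder -Path "\\DOMAIN\Home\$user" -TargetPath "\\FS3\$($share.Name)"
        New-DfsnFolderTarget -Path "\\DOMAIN\Home\$user" -TargetPath "\\FS4\$($share.Name)"
    }

That would mean re-pointing the H: drive mappings at \\DOMAIN\Home\username though, and something would still need to keep FS3 and FS4 in sync, so I may well be missing a simpler option.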

This cluster all exists on one site, in one rack.

Thanks in advance for suggestions and ideas 🙂


Drive selection for expanding Hyper-V cluster (Speed vs HDD vs SSD)

I’m just spec’ing up an additional Dell MD1200 drive shelf for my VRTX to add storage capacity.

Drives in this shelf will contain VHDX files for the VMs that run on the Hyper-V cluster.

I will have 12 new bays to choose drives for.

Based on performance, and recognising that this chassis only supports 6Gb/s SAS, what drive speeds should I be looking for?

With 12 slots I could go for 12 x 2TB drives in RAID 10, for example, or 8 HDDs and 4 SSDs. There is also a choice of 7.2k and 10k spindle speeds.

The way I have my file server VMs configured, there is a separate VHDX for the C: (operating system) drive and another for the D: (data) drive.

In the past I have placed the C: VHDX on faster storage; is this worthwhile?

I have run some speed tests on my existing drive arrays and cannot see any discernible difference in performance between the drive and RAID types.
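
For reference, a DiskSpd run along these lines (hypothetical test file path; parameters are only an example) is the sort of like-for-like check I could repeat on each array once the new shelf is in:

    # Example DiskSpd run (hypothetical path): 64K blocks, 60 seconds, 4 threads,
    # 8 outstanding I/Os, 30% writes, random access, caching disabled, latency stats,
    # against a 20GB test file
    diskspd.exe -c20G -d60 -b64K -t4 -o8 -r -w30 -Sh -L D:\disktest.dat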

Thanks in advance for ideas and suggestions.


Drive performance with Hyper-V cluster

I have a Dell VRTX that contains 4 server blades, each running Server 2016, and internal shared storage that is made available via Cluster Shared Volumes.

The server has a mix of drives, as it’s been upgraded over time:

4 x 600GB 10k SAS in RAID 10

5 x 600GB 10k SAS in RAID 5

9 x 900GB 10k SAS in RAID 5

4 x 1TB 7.2k SAS in RAID 5

I have run some disk performance tests using ATTO Disk Benchmark from within various VMs (some Server 2012 R2 and some 2016). Some VMs are on different drive arrays (CSVs).

I am getting pretty consistent results from different VMs on the same arrays, so I’m comfortable the results are correct.

I still have some older Server 2012 R2 VMs, which give averages of 900MB/s write and 1.7GB/s read on the RAID 10 array. These would have been original Gen 1 VMs.

I have one Server 2016 VM which was in-place upgraded from 2012 R2, still with the original IDE controller for its boot drive. This shows speeds of 1.1GB/s write and 2.46GB/s read.

I then created a brand new Server 2016 VM in the cluster, storing its VHDX on the same RAID 10 array. This had SCSI as the boot drive controller and gave averages of 3.2GB/s write and 3.33GB/s read.
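
(For reference, a quick way to see which VMs are still booting from IDE versus SCSI, run from a cluster node with the Hyper-V module; nothing clever:)

    # List the generation of each VM and the controller type of every attached disk
    Get-VM | ForEach-Object {
        $vm = $_
        Get-VMHardDiskDrive -VMName $vm.Name |
            Select-Object @{n='VM';e={$vm.Name}},
                          @{n='Generation';e={$vm.Generation}},
                          ControllerType, ControllerNumber, Path
    }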

I found these results interesting, as they implied that Server 2016 was faster on the same hardware, but also that the upgraded server was not as quick as a new one. Could this be a result of fragmentation, deduplication, or the upgrade process?

So I then created some new, small (only 20-30GB) VHDX files for both of these VMs, creating one on each of the drive arrays to test with.

With the upgraded 2016 server I was now seeing speeds of 5.75GB/s write and the same for read. Similar speeds were encountered with the brand new Server 2016 VM’s new VHDXs.

I also tried a new VM running Server 2019 and saw similar speeds (so there is some consistency here).

So it appears that my old Server 2012 R2 VMs and the upgraded 2016 VM are experiencing much slower disk throughput than either a new Server 2016 VM or a new VHDX assigned to them.

Any thoughts as to the reasons? Disk fragmentation over time, deduplication, or something else?
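
In case it helps to narrow it down, one way I could check the fragmentation and deduplication angles seems to be the following (hypothetical VHDX path; the first part runs on the owning Hyper-V node, the second inside the file server VM, assuming the Data Deduplication feature is installed there):

    # On the Hyper-V node that owns the CSV: inspect the VHDX itself (hypothetical path)
    Get-VHD -Path 'C:\ClusterStorage\Volume1\FS1\FS1-D.vhdx' |
        Select-Object VhdFormat, VhdType, BlockSize, FragmentationPercentage, FileSize, Size

    # Inside the VM: check whether Data Deduplication is enabled on the data volume
    Get-DedupVolume -Volume 'D:' | Select-Object Volume, Enabled, SavingsRate, SavedSpace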

My plan is to run the in-place upgrade on the remaining Server 2012 R2 servers to take them to 2016; however, I would like to fix this performance problem as part of that work, to ensure they are all running as fast as they can be.

