Archives

PMThomas

Drive selection for expanding Hyper-V cluster (Speed vs HDD vs SSD)

I’m spec’ing up an additional Dell MD1200 drive shelf for my VRTX to add storage capacity.

Drives in this shelf will hold VHDX files for my VMs that run on the Hyper-V cluster.

I will have 12 new bays to choose drives for.

Based on performance, and recognising that this chassis only supports 6Gb/s, what drive speeds should I be looking for?
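As a reference point, and purely as back-of-the-envelope arithmetic, a 6Gb/s SAS lane works out to roughly 600MB/s of usable bandwidth once the 8b/10b encoding overhead is taken off:

    # 6Gb/s SAS uses 8b/10b encoding, so 10 bits travel on the wire per data byte
    lane_gbit_per_s = 6
    usable_mb_per_s = lane_gbit_per_s * 1_000_000_000 / 10 / 1_000_000
    print(usable_mb_per_s)  # 600.0 MB/s per 6Gb/s lane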

With 12 slots I could go for 12 x 2TB drives in RAID 10, for example, or 8 HDDs and 4 SSDs. There is also a choice of 7.2k and 10k drives.
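To compare those layouts on paper, here is a rough sketch of the usable-capacity and random-write arithmetic. The per-drive IOPS figures and the SSD size are generic assumptions for illustration only, not numbers measured on this hardware:

    # Back-of-the-envelope usable capacity and random-write IOPS for the
    # candidate layouts. Per-drive IOPS and SSD size are assumed values.
    def raid10_usable_tb(drives, size_tb):
        return drives * size_tb / 2          # half the raw space holds mirror copies

    def write_iops(drives, iops_per_drive, write_penalty):
        # Effective random-write IOPS = raw IOPS / RAID write penalty
        # (RAID 10 penalty = 2, RAID 5 penalty = 4)
        return drives * iops_per_drive / write_penalty

    # Option A: 12 x 2TB 7.2k HDDs in RAID 10 (~80 IOPS per drive assumed)
    print(raid10_usable_tb(12, 2), "TB usable,", write_iops(12, 80, 2), "write IOPS")

    # Option B: 8 x 2TB 7.2k HDDs in RAID 10 plus 4 x 400GB SSDs in RAID 10
    print(raid10_usable_tb(8, 2), "TB usable,", write_iops(8, 80, 2), "write IOPS (HDD tier)")
    print(raid10_usable_tb(4, 0.4), "TB usable,", write_iops(4, 20000, 2), "write IOPS (SSD tier)")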

The way I have my file server VMs configured, there is a separate VHDX for the C: operating system volume and another for the D: data volume.

In the past I have placed the C: VHDX on faster storage; is this worthwhile?

I have run some speed tests on my existing drive arrays and cannot see any discernible difference in performance between the drive and RAID types.

Thanks in advance for ideas and suggestions.


Drive performance with Hyper-V cluster

I have a Dell VRTX which contains 4 server blades, each running Server 2016, and internal shared storage which is made available via Cluster Shared Volumes.

The server has a mix of drives, as it’s been upgraded over time:

4 x 600GB 10k SAS in RAID 10

5 x 600GB 10k SAS in RAID 5

9 x 900GB 10k SAS in RAID 5

4 x 1TB 7.2k SAS in RAID 5

I have run some disk performance tests using ATTO Disk Benchmark from within various VMs (some Server 2012 R2 and some 2016). Some VMs are on different drive arrays (CSVs).

I am getting pretty consistent results from different VMs on the same arrays, so I’m comfortable the results are correct.
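As a rough cross-check of the ATTO numbers, a minimal timed sequential read/write inside a guest can be scripted along these lines. The file path and test size below are placeholders, and since the I/O is not unbuffered the read pass is served largely from cache, so the figures are only useful for comparing one array against another:

    import os, time

    TEST_FILE = r"D:\throughput_test.bin"   # placeholder path on the array under test
    SIZE_MB = 4096                          # total amount written/read
    BLOCK = 1024 * 1024                     # 1 MiB per request
    buf = os.urandom(BLOCK)

    # Sequential write, flushed to stable storage at the end
    start = time.time()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(SIZE_MB):
            f.write(buf)
        os.fsync(f.fileno())
    print("write MB/s:", round(SIZE_MB / (time.time() - start)))

    # Sequential read of the same file
    start = time.time()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    print("read  MB/s:", round(SIZE_MB / (time.time() - start)))

    os.remove(TEST_FILE)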

I still have some older Server 2012 R2 VMs which give averages of 900MB/s write and 1.7GB/s read on the RAID 10 array. These would have been original Gen 1 VMs.

I have one Server 2016 VM which was in-place upgraded from 2012 R2, still with the original IDE controller as its boot drive. It shows speeds of 1.1GB/s write and 2.46GB/s read.

I then created a brand new Server 2016 VM in the cluster, storing its VHDX on the RAID 10 array as well. This one had SCSI as the boot drive type and gave averages of 3.2GB/s write and 3.33GB/s read.

I found these results interesting, as they implied that Server 2016 was faster on the same hardware, but also that the upgraded server was not as quick as a new one. Could this be a result of fragmentation, deduplication, or the upgrade process?

So I then created some new, small (only 20-30GB) VHDX files for both of these VMs, one on each of the drive arrays, to test with.

With the upgraded 2016 server I was now seeing speeds of 5.75GB/s write and the same for read. Similar speeds were seen with the new VHDXs on the brand new Server 2016 VM.

I also tried a new VM running Server 2019 and saw similar speeds (so there is some consistency here).

So it appears that my old Server 2012 R2 VMs and the upgraded 2016 VM are experiencing much slower disk throughput than either a new Server 2016 VM or a new VHDX assigned to them.

Any thoughts as to the reasons? Disk fragmentation over time, deduplication, or something else?

My plan is to run the in-place upgrade on the remaining Server 2012 R2 servers to take them to 2016; however, I would like to fix this performance problem as part of that work, to ensure they are all running as fast as they can be.

