Monthly Archives: October 2018

Unable to ping or use a UNC path from a 2012 R2 physical Hyper-V server to a 2012 R2 physical Hyper-V server on a different subnet

Hi Guys,

We have three Hyper-V Windows 2012 R2 DC servers. Two are on the same network, which is 172.16.64.0; the third is on network 172.16.128.0. We'll call the two Hyper-V servers on network 64.0 Hyper-v4 and Hyper-v5, and the one on network 128.0 Hyper-v10.
All three servers' NICs are set up exactly the same way.

From any server on network 128.0 we can ping any server on network 64.0 and use UNC paths to connect to any system. From any server on network 64.0 EXCEPT Hyper-v4 and Hyper-v5, we can ping any server on network 128.0 and connect using UNC paths. We also have a third network, 192.168.220.0, from which we can ping any server on networks 64.0 and 128.0 as well as connect using UNC paths.

This is the problem we’re having.

We cannot figure out why the two Hyper-V servers on network 64.0 can't ping or open a UNC path to the Hyper-v10 server on network 128.0.
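
A minimal check to run from Hyper-v4 or Hyper-v5, as a sketch; the remote address below is a placeholder, and it assumes Hyper-v10 answers SMB on port 445:

  Get-NetIPConfiguration                                  # confirm the active NIC and gateway
  Find-NetRoute -RemoteIPAddress 172.16.128.10            # which interface/route gets chosen (address assumed)
  Test-NetConnection -ComputerName hyper-v10              # ICMP ping
  Test-NetConnection -ComputerName hyper-v10 -Port 445    # SMB, i.e. UNC reachability

If port 445 succeeds while ping fails, that would point at ICMP filtering rather than a routing problem on those two hosts.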

Any help would be greatly appreciated. 

Read full post . . . .

Hyper-V switch changes to internal automagically

I have a 2012 R2 server running Hyper-V. Everything had been running just fine until I arrived this morning to find one of the VMs on the system not talking on the network. It turns out the web server VM has a hard-coded address of 172.16.0.111 but is not communicating. Further investigation unearthed the fact that the ONLY VSwitch on the host was set to "Internal". Hyper-V would not allow me to change this back to External. Creating a new VSwitch for external access worked fine. I moved all VMs onto this newly created External switch and all is fine. The oddest part of this issue is that the original and only VSwitch was set to External and working fine until this morning.
 

I can get into details regarding hardware, etc., but I wondered if anyone else had seen such behaviour. The host has seen no changes in the past 10 days (patching, etc.) and was rebooted 10 days ago without issues.
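
For anyone hitting the same thing, a quick sketch of how to confirm what the switch thinks it is and try rebinding it; 'ExternalSwitch' and the adapter name are placeholders for your own names:

  Get-VMSwitch | Select-Object Name, SwitchType, NetAdapterInterfaceDescription
  # If the switch reverted to Internal, rebinding it to a physical NIC
  # also flips its type back to External:
  Set-VMSwitch -Name 'ExternalSwitch' -NetAdapterName 'Ethernet 1'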

Read full post . . . .

Hyper-V performance hit on Intel Optane

Greetings everyone,

 

I'm having an odd situation where low queue depth random reads and writes on an Intel 905P (Optane, 960 GB) PCIe card are cut drastically when running within a Hyper-V VM. This happens on a Dell R740XD server with dual Xeon Gold CPUs as well as on an AMD 1950X workstation, both times using a fully updated, full GUI version of Windows Server 2016.

 

For testing, I’m using CrystalDiskMark 5.2.0, default settings.

 

On bare metal, queue depth 1 random reads are around 63,500 IOPS and random writes are around 60,000 IOPS.

However, if I install the Hyper-V role on that same computer, create a brand new VM running a fully updated version of Windows Server 2016, and then run the benchmark, the random reads and writes drop by roughly three quarters, to 16,700 read IOPS and 16,000 write IOPS.

 

Example Data:

Intel 905P on bare metal (Threadripper 1950X) - Hyper-V role not installed
CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :  2709.755 MB/s
  Sequential Write (Q= 32,T= 1) :  2378.800 MB/s
  Random Read 4KiB (Q= 32,T= 1) :   503.756 MB/s [122987.3 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   473.558 MB/s [115614.7 IOPS]
         Sequential Read (T= 1) :  2461.759 MB/s
        Sequential Write (T= 1) :  2204.791 MB/s
   Random Read 4KiB (Q= 1,T= 1) :   259.845 MB/s [ 63438.7 IOPS]
  Random Write 4KiB (Q= 1,T= 1) :   245.729 MB/s [ 59992.4 IOPS]

  Test : 4096 MiB [D: 0.0% (0.2/894.3 GiB)] (x5)  [Interval=5 sec]
  Date : 2018/10/30 14:22:12
    OS : Windows Server 2016 Server Standard (full installation) [10.0 Build 14393] (x64)
  

Intel 905P inside a VM on the same hardware
CrystalDiskMark 5.2.0 x64 (C) 2007-2016 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :  2702.230 MB/s
  Sequential Write (Q= 32,T= 1) :  2367.909 MB/s
  Random Read 4KiB (Q= 32,T= 1) :   516.844 MB/s [126182.6 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   494.393 MB/s [120701.4 IOPS]
         Sequential Read (T= 1) :  2184.590 MB/s
        Sequential Write (T= 1) :  1975.321 MB/s
   Random Read 4KiB (Q= 1,T= 1) :    68.513 MB/s [ 16726.8 IOPS]
  Random Write 4KiB (Q= 1,T= 1) :    64.880 MB/s [ 15839.8 IOPS]

  Test : 4096 MiB [E: 0.1% (0.1/126.9 GiB)] (x5)  [Interval=5 sec]
  Date : 2018/10/31 6:13:05
    OS : Windows Server 2016 Server Standard (full installation) [10.0 Build 14393] (x64) 

 

The only thing on the 905P is the second disk of the VM that I'm running the benchmark on. The OS disk is on another, separate drive. I've tried a fixed size VHDX, a dynamically expanding VHDX, and a pass-through physical disk, all with the same basic result: the QD1 numbers are much smaller.

 

As this is happening on both a Dell R740XD server and an AMD workstation (non-Dell), and I've tried other 905P drives just to rule the drive out, I'm left with Hyper-V. What I think it might be is some default storage I/O throttling, but I do not know where to look.
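
One thing that can be ruled out quickly is whether a Storage QoS limit is attached to the VM's disks; as a sketch, and assuming defaults, MinimumIOPS and MaximumIOPS are 0 when no throttle is configured:

  Get-VM | Get-VMHardDiskDrive |
      Select-Object VMName, Path, MinimumIOPS, MaximumIOPS, QoSPolicyID   # non-zero values would indicate throttling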

 

For reference, for a project at work we installed dual Intel P4800X drives (the enterprise version of the 905P) in a 2012 SQL Server box for TempDB, as it helped increase performance. The server is a Dell R740XD, and I would like to virtualize the instance and take advantage of the Software Assurance benefit of unlimited virtualization, since we have all the physical cores licensed (Enterprise). Upon initial setup of the physical server, before Hyper-V, I ran benchmarks on the P4800X and all looked great. Once the Hyper-V role was installed and the VM was running, I saw the very reduced performance.

I was able to duplicate my results using the 905P, so that is where my testing is happening now.

Any thoughts on where to look?

Appreciate it everyone,
Mark

Read full post . . . .

Cannot replicate one of the VMs

Hello.

We have a problem replicating one VM.

Host: WS2012R2; VM guest: WS2012R2.

When I try to replicate another VM, it works.

Eventlog:


Microsoft-Windows-Hyper-V-VMMS

EventID 18012

Found stale snapshot entry

after that

EventID 33676

Replication operation for virtual machine ‘XX’ failed: The system cannot find the path specified. (0x80070003)

Analytic event log:

Found stale snapshot entry (A1687DC8-4A47-4A21-86C1-FE596C6A033D) in snapshot list for realized VM A282D9F0-DD3A-4B79-96EE-CAAF661990A0 - ignoring snapshot!

ConfigRepository::StoreConfiguration failed to import VM configuration (HRESULT 0x80070003)!

I tried vssadmin list snapshots - nothing.

I tried diskshadow with list shadows - nothing.

In Hyper-V Manager, there are no checkpoints.

I looked in System Volume Information - nothing, only small old DPM bitmap files (DPM has been uninstalled).

Integration services are current and working.
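
What I'm considering next, as a sketch (the VM name 'XX' is from the event log; the replica server name and port are placeholders), is checking what Hyper-V itself still sees and then rebuilding the replication relationship:

  Get-VMSnapshot -VMName 'XX'        # any checkpoints Hyper-V can still see?
  Get-VMReplication -VMName 'XX'     # current replication state and health
  Remove-VMReplication -VMName 'XX'  # tear down the broken relationship
  # ...then re-enable replication from scratch:
  Enable-VMReplication -VMName 'XX' -ReplicaServerName 'replica-host' -ReplicaServerPort 80 -AuthenticationType Kerberos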

Does anyone have any idea what to do?

Thanks, Jan

Read full post . . . .

New KB articles published for the week ending 27th October, 2018

VMware App Volumes: Directory -> Users -> User page does not display assigned AppStacks if an AppStack is assigned to a group. Date Published: 23-Oct-18
VMware ESXi: "VASpace (08/13) DiskDump: Partial Dump: Out of space o=0x63ff800 l=0x1000" error in ESXi host. Date Published: 24-Oct-18
Software iSCSI adapter in ESXi disappears after an upgrade. Date Published: 26-Oct-18
Position of

The post New KB articles published for the week ending 27th October, 2018 appeared first on VMware Support Insider.

Continue reading..

Adding a software RAID for disk C: to a hyper-converged infrastructure

Hey Guys,

I have a two-node *LIVE* S2D infrastructure, with a storage pool configured and running.

One of the servers has only one SAS disk, used for the OS, and we want to add a software RAID mirror for it.

My problem is:

Once I add the physical disk to the server, it automatically joins the S2D storage pool.

I removed the disk from the storage pool, but I still can't see the disk in diskpart or Disk Management to configure the software RAID mirror.
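
For completeness, the sequence I understand should fully release the disk, as a sketch; the pool and disk names below are placeholders for ours:

  Get-StorageSubSystem Cluster* |
      Set-StorageSubSystem -AutomaticClusteringEnabled $false     # stop S2D auto-claiming new disks
  $disk = Get-PhysicalDisk -FriendlyName 'NewSasDisk'             # the disk just added (name assumed)
  Set-PhysicalDisk -InputObject $disk -Usage Retired
  Remove-PhysicalDisk -PhysicalDisks $disk -StoragePoolFriendlyName 'S2D Pool'
  Reset-PhysicalDisk -FriendlyName $disk.FriendlyName             # clear pool metadata so the disk shows up raw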

What am I missing?

Thank you,

Read full post . . . .

Cluster Shared Volume (CSV) Host Issue

I have a pre-production environment with two hosts set up in a Hyper-V cluster. Both hosts are configured exactly the same and both have a CSV mapped to an external storage array. Both hosts are connected to the storage array using external SAS connections via HBAs installed in each server. The cluster passes the cluster validation test. I can successfully live migrate VMs between the hosts, etc.

The host that's designated the owner of the CSV can read and write directly to the storage array via the external SAS connection, as expected. The non-owner can access the CSV as well, but its reads and writes do NOT cross the external SAS interface directly to the storage appliance; instead all I/O goes out the network interface to the other host (the CSV owner) and then to the storage array via that host.

If I change the CSV owner to the malfunctioning host, the problem reverses. It seems that whichever host is not the owner of the CSV will not send storage I/O directly to the storage appliance via the SAS connection and instead sends the traffic across the network to the CSV owner.

Failover Cluster Manager on both hosts shows the cluster shared volume. The CSV is also mapped on both hosts at C:\ClusterStorage\Volume1. If I copy a file from the local host to the CSV on the CSV owner, the file transfers directly to the storage appliance via the external SAS interface at roughly 12 Gbps (the SAS interface speed). If I copy a file from the non-owner host directly to the mapped CSV, the storage I/O goes out the network interface to the other host at approx. 2-3 Gbps, and then the other host (the owner) sends the storage I/O to the storage appliance.

The network adapter that's being used for the redirected storage I/O is the cluster-only interface on the 192.168.1.0/24 network.
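
A quick way to see this from the cluster's point of view, as a sketch (run on either host with the FailoverClusters module loaded), is to ask each node whether its CSV access is direct or redirected and why:

  Get-ClusterSharedVolumeState |
      Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason, BlockRedirectedIOReason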

Hardware Setup:
Host 1
External SAS connection to the storage array
MPIO enabled
Two 1 Gbps NICs teamed for guest traffic to the corporate network - 10.13.0.0/20
One 1 Gbps NIC for management traffic - 10.13.0.0/20
One 10 Gbps NIC for cluster-only traffic (live migrations) - 192.168.1.0/24

Host 2
External SAS connection to the storage array
MPIO enabled
Two 1 Gbps NICs teamed for guest traffic to the corporate network - 10.13.0.0/20
One 1 Gbps NIC for management traffic - 10.13.0.0/20
One 10 Gbps NIC for cluster-only traffic (live migrations) - 192.168.1.0/24

Read full post . . . .

Disable SSL & Early TLS on Windows Server 2012 R2 Running ARR

Hi All,

I want to get some confirmation. I am in the midst of obtaining PCI DSS compliance, and one of the requirements is to disable SSL and early TLS on our servers. I have two ARR servers and want to know whether there will be any major impact if I disable these two protocols. Is there a proper way to disable them with minimal impact to my environment?

FYI, my environment is currently running Microsoft Azure Pack with SCVMM, SCOM and SCSM.
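
For reference, the approach I'm looking at is the usual SCHANNEL registry change; this is a sketch, assuming PCI's "early TLS" means TLS 1.0 here, and it requires a reboot to take effect:

  # Disable SSL 2.0/3.0 and TLS 1.0 server-side via SCHANNEL registry keys
  foreach ($p in 'SSL 2.0','SSL 3.0','TLS 1.0') {
      $base = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$p\Server"
      New-Item -Path $base -Force | Out-Null
      New-ItemProperty -Path $base -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null
      New-ItemProperty -Path $base -Name 'DisabledByDefault' -Value 1 -PropertyType DWord -Force | Out-Null
  }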

Thanks and Regards,

Arieff Majid

Read full post . . . .

Upgrade Windows Server 2012 R2 Datacenter to 2016 Datacenter

Dear Team,

I am running Windows Server 2012 R2 Datacenter edition with Hyper-V. There are lots of VMs on Hyper-V, and I want to upgrade to Windows Server 2016 Datacenter edition.

Is it possible to upgrade without losing any data? We are running roles such as AD and IIS on the Hyper-V VMs.

After upgrading, we want the same data.

When we run the upgrade, the option to keep files and data is greyed out.
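
Before attempting the in-place upgrade, a sketch of the safety net we plan to use: export every VM so the guests (AD, IIS, etc.) can be re-imported if anything goes wrong. 'E:\VM-Backup' is a placeholder path; point it at storage the host can reach:

  # Export all VMs from the 2012 R2 host before upgrading
  Get-VM | ForEach-Object {
      Export-VM -Name $_.Name -Path 'E:\VM-Backup'
  }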

Thanks and Regards

Vipin Jeswani

Read full post . . . .
