vSphere

Announcing VMware Skyline Health for vSphere & vSAN

Meeting Our Customers Where They Are

VMware unifies its proactive support technology into a portfolio of complementary services, surfacing the information you need, where and when you need it. Joining forces under the VMware Skyline brand, VMware Skyline Health for vSphere and vSAN are available in the vSphere Client, enabling a native, in-product experience with consistent proactive analytics.

The post Announcing VMware Skyline Health for vSphere & vSAN appeared first on VMware Support Insider.

Continue reading..

Part 1: Architect and Create Cloud Services Organizations for Skyline

In November of 2018, VMware Skyline became available as a VMware Cloud Service. The primary reason we enabled Skyline as a Cloud Service was to provide customers with greater control of their Skyline proactive findings and recommendations. This is part one of a multi-part blog series that will help you architect and create your Cloud

The post Part 1: Architect and Create Cloud Services Organizations for Skyline appeared first on VMware Support Insider.

Continue reading..

VMware Skyline Update – February 2019

We’re back with our February VMware Skyline update. We didn’t release any major enhancements to Skyline this month; however, we did add additional proactive findings and updated our documentation to reflect recent changes. Let’s get started by reviewing the proactive findings released in February. For those who are wondering what Skyline is, Skyline

The post VMware Skyline Update - February 2019 appeared first on VMware Support Insider.

Introducing DRS DumpInsight

In an effort to provide a more insightful user experience and to help users understand how vSphere DRS works, we recently released a Fling: DRS Dump Insight.

DRS Dump Insight is a service portal where users can upload drmdump files; it provides a summary of the DRS run, with a breakdown of all the possible moves along with the changes in ESXi host resource consumption before and after the DRS run.

Users can get answers to questions like:

  • Why did DRS make a certain recommendation?
  • Why is DRS not making any recommendations to balance my cluster?
  • What recommendations did DRS drop due to cost/benefit analysis?
  • Can I get all the recommendations made by DRS?

Once the drmdump file is uploaded, the portal provides a summary of all the possible vMotion choices DRS went through before coming up with the final recommendations.

The portal also enables users to do What-If analysis on their DRS clusters with options like:

  • Changing DRS Migration Threshold
  • Dropping affinity/anti-affinity rules in the cluster
  • Changing DRS advanced options

 

The post Introducing DRS DumpInsight appeared first on VMware VROOM! Blog.

Read more..

To be “RDM for Oracle RAC”, or not to be, that is the question

Famous words from William Shakespeare’s play Hamlet. Act III, Scene I.

This is true even in the virtualization world for Oracle Business Critical Applications, where one wonders which way to go when it comes to provisioning shared disks for Oracle RAC: Raw Device Mappings (RDMs) or VMDKs?

Much has been written and discussed about RDMs and VMDKs, and this post will focus on the Oracle RAC shared disks use case.

Some common questions I get when talking to customers who are embarking on the virtualization journey for Oracle on vSphere are:

  • What is the recommended approach when it comes to provisioning storage for Oracle RAC or Oracle Single instance? Is it VMDK or RDM?
  • What is the use case for each approach?
  • How do I provision shared RDMs in Physical or Virtual Compatibility mode for an Oracle RAC environment?
  • If I use shared RDMs (Physical or Virtual), will I be able to vMotion my RAC VMs without any cluster node eviction?

We recommend using shared VMDKs with the multi-writer setting for provisioning shared storage for Oracle RAC environments, so that one can take advantage of all the rich features vSphere as a platform can offer, which include:

  • better storage consolidation
  • easier performance management
  • increased storage utilization
  • better flexibility
  • easier administration and management
  • the ability to use features like Storage I/O Control (SIOC)

For setting the multi-writer flag on classic vSphere, refer to KB article "Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag (1034165)":

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1034165

For setting the multi-writer flag on vSAN, refer to KB article "Using Oracle RAC on a vSphere 6.x vSAN Datastore (2121181)":

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2121181
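
As a point of reference, once the multi-writer flag described in KB 1034165 is applied, the relevant entries in each RAC VM's .vmx file look roughly like the lines below. This is a sketch only: the ESXi host name, datastore path, and disk file name are illustrative, it assumes the shared eager-zeroed thick VMDK sits at SCSI 1:0 on both VMs, and on newer vSphere releases the same effect is normally achieved through the disk's Sharing drop-down in the vSphere Client.

[root@esxi118:~] grep -E "scsi1:0\.(fileName|sharing)" /vmfs/volumes/Datastore01/rdmrac1/rdmrac1.vmx
scsi1:0.fileName = "/vmfs/volumes/Datastore01/rdmrac-shared/asm_data01.vmdk"
scsi1:0.sharing = "multi-writer"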

However, there still are some cases where it makes more sense to use RDM storage access over VMDK:

  • Migrating an existing application from a physical environment to virtualization
  • Using Microsoft Cluster Service (MSCS) for clustering in a virtual environment
  • Implementing N-Port ID Virtualization (NPIV)

This is well explained in the white paper "VMware vSphere VMFS Technical Overview and Best Practices" for version 5.1:

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vmware-vsphere-vmfs-best-practices-whitepaper.pdf

The difference between Physical compatibility RDMs (rdm-p) and Virtual compatibility RDMs (rdm-v) can be found in KB 2009226:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009226

Back to the topic of Oracle RAC shared disks, and to RDM or not to RDM!

For those customers whose requirement falls in the list of use cases above, where the need is to deploy Oracle RAC with shared RDM disks: yes, RDMs can be used as shared disks for Oracle RAC clusters on classic vSphere. VMware vSAN does not support Raw Device Mappings (RDMs) at the time of writing this blog.

Important caveats to keep in mind about shared RDMs and vMotion:

  • There was never an issue vMotioning VMs with RDMs from one ESXi server to another; it has worked in all versions of vSphere, including the latest release.
  • The only issue was with shared RDMs for clustering. vMotion of VMs with shared RDMs requires virtual hardware 11 or higher, i.e., the VMs must be in "Hardware 11" compatibility mode, which means that you are either creating and running the VMs on ESXi 6.0 hosts, or you have converted your old template to Hardware 11 and deployed it on ESXi 6.0.
  • The VMs must be configured with at least this hardware version for shared RDM vMotion to take place (a quick way to check is sketched below).
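
One quick way to confirm the virtual hardware version from the ESXi shell before attempting the vMotion is shown below. This is only a sketch: the host name, VM IDs, datastore name, and guest OS column are illustrative, and the output is trimmed to the VMs of interest.

[root@esxi118:~] vim-cmd vmsvc/getallvms | grep rdmrac
12   rdmrac1   [Datastore01] rdmrac1/rdmrac1.vmx   oracleLinux7_64Guest   vmx-11
13   rdmrac2   [Datastore01] rdmrac2/rdmrac2.vmx   oracleLinux7_64Guest   vmx-11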

The table of VMware products and their virtual hardware versions can be found in the KB article below:

 

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003746

As per the table above, with vSphere 6.0 and above, for VMs with virtual hardware version 11 or higher, the restriction for shared RDM vMotion has been lifted.

Now that we are aware of the above restrictions, let's add a shared RDM in Physical compatibility mode to 2 VMs as part of an Oracle RAC installation and see if we can vMotion the 2 VMs without any failure.

The high-level steps are:

  • Add a shared RDM in Physical compatibility mode (rdm-p) to 2 VMs which are part of an Oracle RAC installation
  • Use Oracle ASMLIB to mark those disks as ASM disks
  • vMotion the 2 VMs from one ESXi server to another and see if we encounter any issues

There are 2 points to keep in mind when adding shared RDMs in physical compatibility mode to a VM:

  • On the SCSI controller to which the shared RDMs will be added, "SCSI Bus Sharing" needs to be set to "Physical"
  • The "Compatibility Mode" of the shared RDM needs to be set to "Physical" for Physical Compatibility (rdm-p)

SCSI Bus Sharing can be set to any of the 3 options (None, Physical, and Virtual) as per the table below; the corresponding .vmx entries are sketched after the documentation link.

"SCSI Bus Sharing" would be set to:

  • "Physical" for clustering across ESXi servers
  • "Virtual" for clustering within an ESXi server

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-4FB34475-018B-43B7-9E33-449F496F5AB4.html
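
For completeness, the controller settings described above translate into .vmx entries like the ones below, shown here for the second (PVSCSI) controller that will carry the shared RDM. Again, this is a sketch only, with the ESXi host name and datastore path being illustrative.

[root@esxi118:~] grep -E "^scsi1\.(virtualDev|sharedBus)" /vmfs/volumes/Datastore01/rdmrac1/rdmrac1.vmx
scsi1.virtualDev = "pvscsi"
scsi1.sharedBus = "physical"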

The 2 VMs are rdmrac1 (10.128.138.1) and rdmrac2 (10.128.138.2). Both VMs are running Oracle Enterprise Linux 7.4.

[root@rdmrac1 ~]# uname -a
Linux rdmrac1 4.1.12-94.5.7.el7uek.x86_64 #2 SMP Thu Jul 20 18:44:17 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@rdmrac1 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.4
[root@rdmrac1 ~]#

The RDM that we will use in physical compatibility mode is the device with WWN 60:06:01:60:78:70:42:00:5C:98:8B:59:85:DC:51:A7.

Following are the steps:

1) Add a new SCSI controller to VM "rdmrac1". The recommendation is to use PVSCSI (Paravirtual) as the controller type. Set the controller's "SCSI Bus Sharing" to "Physical". Do the same for VM "rdmrac2" as well.

2) Add the RDM disk to the first VM, "rdmrac1".

 

3) Pick the correct RDM device (WWN 60:06:01:60:78:70:42:00:5C:98:8B:59:85:DC:51:A7)

4) Set the RDM "Compatibility Mode" to "Physical" as shown. Please make a note of the SCSI ID to which you have attached the disk. You will use the same ID for this disk when attaching it to the other VM, "rdmrac2", which will be sharing this disk. In this case we used SCSI 1:0.

5) Add the same RDM disk as an existing hard disk to VM "rdmrac2", placing it on the new PVSCSI controller.

6) Attach the existing RDM disk at the same SCSI channel we used in step 4, which was SCSI 1:0 (the same as in the case of rdmrac1).

7) Now the shared RDM is added to both VMs, "rdmrac1" and "rdmrac2". The next order of business is to format the raw disk.

Oracle ASMLIB requires that the disk be partitioned; you can use the raw device as-is, without partitioning, if you are using Linux udev for Oracle ASM purposes.
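
If you go the udev route instead of ASMLIB, a single rule in a file such as /etc/udev/rules.d/99-oracle-asmdevices.rules is the usual pattern. The rule below is purely illustrative: the owner and group assume a standard Grid Infrastructure install, and the RESULT string is the colon-stripped WWN of the RDM used in this post with the leading "3" NAA prefix that scsi_id normally reports, so verify it with scsi_id on your own system before relying on it.

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="360060160787042005c988b5985dc51a7", SYMLINK+="oracleasm/asm_data01", OWNER="grid", GROUP="asmadmin", MODE="0660"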

Partitioning is a good practice anyway: it prevents anyone from attempting to create a partition table and file system on a raw device they get their hands on, which would lead to issues if the device is being used by ASM.

Format the disks

[root@rdmrac1 ~]# fdisk -lu

Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 33553920 bytes
..
[root@rdmrac1 ~]#

Use the partitioning tool of your choice (fdisk, parted, or gparted); I used fdisk below to partition with the default alignment offset.
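
For those who prefer to script it, the interactive fdisk session I ran is roughly equivalent to the one-liner below (n = new partition, p = primary, 1 = partition number, the two blank answers accept the default first and last sectors, w = write). Treat it as a sketch and double-check it against your fdisk version before running it against a real device.

[root@rdmrac1 ~]# printf "n\np\n1\n\n\nw\n" | fdisk /dev/sdc
[root@rdmrac1 ~]# partprobe /dev/sdc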

[root@rdmrac1 ~]# fdisk -lu

Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 33553920 bytes
Disk label type: dos
Disk identifier: 0xddceaeeb

Device Boot Start End Blocks Id System
/dev/sdc1 2048 209715199 104856576 83 Linux
[root@rdmrac1 ~]#

Scan the SCSI bus using operating system commands on "rdmrac2":
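
A hedged sketch of one way to do that rescan on Oracle Linux 7 without rebooting, assuming the shared disk again enumerates as /dev/sdc on this node, is to poke every SCSI host adapter and then re-read the partition table:

[root@rdmrac2 ~]# for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > $h; done
[root@rdmrac2 ~]# partprobe /dev/sdc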

[root@rdmrac2 ~]# fdisk -lu
….
Disk /dev/sdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 33553920 bytes
Disk label type: dos
Disk identifier: 0xddceaeeb

Device Boot Start End Blocks Id System
/dev/sdc1 2048 209715199 104856576 83 Linux
[root@rdmrac2 ~]#

Install the Oracle ASMLIB rpm as usual and mark the new RDM disk as an ASM disk:

[root@rdmrac1 software]# /usr/sbin/oracleasm createdisk DATA_DISK01 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@rdmrac1 software]#

[root@rdmrac1 software]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rdmrac1 software]#
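
The disk stamped on rdmrac1 typically becomes visible to ASMLIB on the second node only after a rescan; a minimal sketch, assuming oracleasm has already been configured and started on rdmrac2, after which the listdisks output below picks up the shared disk:

[root@rdmrac2 ~]# /usr/sbin/oracleasm scandisks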

[root@rdmrac2 ~]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rdmrac2 ~]#

8) Now for the vMotion test that we have been waiting for. VM "rdmrac1" is on host 10.128.136.118 and VM "rdmrac2" is on host 10.128.136.117.

We will vMotion "rdmrac1" from host 10.128.136.118 to 10.128.136.119.

Simultaneously, you can choose to vMotion VM "rdmrac2" from 10.128.136.117 to 10.128.136.118.

Start the vMotion of "rdmrac1" from host 10.128.136.118 to 10.128.136.119.

While the vMotion is taking place, perform a ping test by pinging VM "rdmrac1" from VM "rdmrac2":

[root@rdmrac2 ~]# ping 10.128.138.1
PING 10.128.138.1 (10.128.138.1) 56(84) bytes of data.
64 bytes from 10.128.138.1: icmp_seq=2 ttl=64 time=0.288 ms
64 bytes from 10.128.138.1: icmp_seq=3 ttl=64 time=0.296 ms
64 bytes from 10.128.138.1: icmp_seq=4 ttl=64 time=0.288 ms

64 bytes from 10.128.138.1: icmp_seq=8 ttl=64 time=0.293 ms
64 bytes from 10.128.138.1: icmp_seq=9 ttl=64 time=0.242 ms
64 bytes from 10.128.138.1: icmp_seq=18 ttl=64 time=0.252 ms
64 bytes from 10.128.138.1: icmp_seq=19 ttl=64 time=0.279 ms
64 bytes from 10.128.138.1: icmp_seq=20 ttl=64 time=0.256 ms
64 bytes from 10.128.138.1: icmp_seq=21 ttl=64 time=0.422 ms
..
64 bytes from 10.128.138.1: icmp_seq=23 ttl=64 time=0.675 ms <- Actual Cutover
..
64 bytes from 10.128.138.1: icmp_seq=24 ttl=64 time=0.295 ms
64 bytes from 10.128.138.1: icmp_seq=25 ttl=64 time=0.251 ms
64 bytes from 10.128.138.1: icmp_seq=26 ttl=64 time=0.246 ms
64 bytes from 10.128.138.1: icmp_seq=27 ttl=64 time=0.177 ms

^C
--- 10.128.138.1 ping statistics ---
53 packets transmitted, 52 received, 1% packet loss, time 52031ms
rtt min/avg/max/mdev = 0.177/0.297/0.902/0.107 ms
[root@rdmrac2 ~]#

At the end of the vMotion operation, the VM "rdmrac1" is now on a different host, without having experienced any network issues.

Yes, the most conclusive test would be to have a fully functional RAC running and see if we have any cluster node evictions or disconnects of RAC sessions. I have performed those tests as well and have not encountered any issues.
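
For reference, a hedged sketch of the cluster-level checks I keep an eye on while the vMotions run (run as the Grid Infrastructure owner, here assumed to be the grid user); both commands should continue to report every node as healthy throughout:

[grid@rdmrac1 ~]$ olsnodes -s
rdmrac1 Active
rdmrac2 Active
[grid@rdmrac1 ~]$ crsctl check cluster -all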

In case you were wondering what steps need to be taken to add a shared RDM in virtual compatibility mode:

  • On the SCSI controller to which the shared RDMs will be added, "SCSI Bus Sharing" needs to be set to "Physical" (to cluster across any ESXi servers)
  • The "Compatibility Mode" of the shared RDM needs to be set to "Virtual" for Virtual Compatibility (rdm-v)

In this case, I used the same 2 VMs, "rdmrac1" and "rdmrac2":

  • added a new SCSI controller of type Paravirtual (SCSI 2)
  • added the shared RDM in virtual compatibility mode (a new device with WWN 60:06:01:60:78:70:42:00:E3:98:8B:59:4D:4A:BC:F9) at the same SCSI position (SCSI 2:0) for both VMs

The steps for adding the rdm-v's are the same as shown above for the rdm-p's.

VM "rdmrac1" showing the SCSI 2 Paravirtual controller with the shared RDM in Virtual compatibility mode at the SCSI 2:0 position.

VM "rdmrac2" showing the SCSI 2 Paravirtual controller with the shared RDM in Virtual compatibility mode at the SCSI 2:0 position.

The same vMotion with ping test was done and no issues were observed.

Key points to keep in mind:

  • vMotion of shared RDMs is possible in vSphere 6.0 and above as long as the VMs are in "Hardware 11" compatibility mode
  • Best practices need to be followed when configuring the Oracle RAC private interconnect and the VMware vMotion network; these can be found in the "Oracle Databases on VMware – Best Practices Guide":

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-oracle-databases-on-vmware-best-practices-guide.pdf

All Oracle on vSphere white papers, including Oracle licensing on vSphere/vSAN, Oracle best practices, RAC deployment guides, and the workload characterization guide, can be found at the URL below:

Oracle on VMware Collateral – One Stop Shop
https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html

The post To be "RDM for Oracle RAC", or not to be, that is the question appeared first on Virtualize Business Critical Applications.

Read more..

VMware vSphere delivers greater out-of-the-box VM density than Red Hat Virtualization

vSphere supported 160% as many VMs with out-of-the-box settings.

A third-party study shows that VMware® vSphere® 6.5 supports more virtual machines with out-of-the-box settings than Red Hat Virtualization 4.1.

Higher density, lower CapEx

The higher the VM density, the lower the per-VM capital costs. This is a key advantage VMware vSphere provides to customers. Using advanced memory management, VMware vSphere easily assigned more virtual memory to VMs than the total physical memory available. Red Hat Virtualization had trouble with this routine operation.

The study also showed that operators can rely on vSphere memory management to keep critical applications running even when hardware fails unexpectedly. Red Hat KVM struggled to keep VMs powered on and running. VMware vSphere worked without hesitation and without any admin tuning.

How we tested

Principled Technologies (PT), an independent testing facility, used hands-on testing to investigate virtual machine density. PT compared VMware vSphere 6.5 and Red Hat Virtualization (RHV) 4.1 running Microsoft® SQL Server® 2016 VMs on a Lenovo™ System x3650 M5 rack server. PT first determined how many VMs each hypervisor could power on and run under an online transaction processing workload. Then they increased the number of VMs to test the range each virtualization platform could support. PT used default hypervisor memory management settings. Using these out-of-the-box settings, vSphere was able to power on and run 160% more VMs than either it or RHV could run without memory overcommitment.

Additional testing showed that during a simulated hardware failure, vSphere also kept VMs available without administrator intervention. RHV required an admin to enable Memory Optimization and perform manual tuning to overcommit memory. Even then, RHV could power on and run only 107% of its baseline number of VMs and could not deliver high availability.

This study offers further confirmation that VMware vSphere continues to be the most reliable, trusted and cost-effective virtualization technology on the market. You can read the entire report here.

The post VMware vSphere delivers greater out-of-the-box VM density than Red Hat Virtualization appeared first on Virtual Reality.

Read more..

Now Available: vSphere 6.5 Update 1

VMware has announced the general availability of vSphere 6.5 Update 1, the first major update to the well-received vSphere 6.5 that was released in November of 2016. With this update release, VMware builds upon the already robust, industry-leading virtualization platform and further improves the IT experience for its customers.

vSphere 6.5 focuses on 4 areas of innovation directly targeted at the challenges customers face as they digitally transform their businesses:

  • Simplified customer experience – Re-architected vCenter Server Appliance, streamlined HTML5-based GUI, and simple REST-based APIs for automation.
  • Comprehensive Built-in Security – Policy-driven security at scale to secure data, infrastructure, and access.
  • Universal App Platform – A single platform to support any application, anywhere.
  • Proactive Data Center Management – Predictive analytics to address potential issues before they become serious problems.

Additional support and enhancements found in vSphere 6.5 Update 1 include:

  • vSphere Client Now Supports 90% of General Workflows
  • vCenter Server Foundation Now Supports 4 Hosts
  • vSphere Support and Interoperability Across Ecosystems
  • vSphere 6.5 General Support Has Been Extended
  • Upgrade from vSphere 6.0 Update 3 Now Supported

Visit the vSphere blog to learn more about the new features included in vSphere 6.5 Update 1.

A recap of key capabilities offered in vSphere 6.5 can be found here.

The post Now Available: vSphere 6.5 Update 1 appeared first on VMware Tech Alliances (TAP) Blog.

Read more..

Streamlining Oracle on SDDC – VMworld 2017

Interested in finding out how to seamlessly streamline your Business Critical Applications on the VMware Software-Defined Datacenter (SDDC)?

Come attend our session at VMworld 2017 Las Vegas on Thursday, Aug 31, 1:30 p.m. - 2:30 p.m., where Amanda Blevins and Sudhir Balasubramanian will talk about the end-to-end life cycle of an application on the VMware SDDC.

This includes provisioning, management, monitoring, troubleshooting, and cost transparency with the vRealize Suite. The session will also include best practices for running Oracle databases on the SDDC including sizing and performance tuning. Business continuity requirements and procedures will be addressed in the context of the SDDC. It is a formidable task to ensure the smooth operation of critical applications running on Oracle, and the SDDC simplifies and standardizes the approach across all datacenters and systems.

The post Streamlining Oracle on SDDC - VMworld 2017 appeared first on Virtualize Business Critical Applications.

Read more..

Oracle on vSAN HCI – VMworld 2017

Interested in finding out how the VMware vSAN HCI solution provides the high availability, workload balancing, seamless site maintenance, stability, resilience, performance, and cost-effective hardware required to meet critical business SLAs for running mission-critical workloads?

Come attend our session at VMworld 2017 Las Vegas on Wednesday, Aug 30, 2:30 p.m. - 3:30 p.m., where Sudhir Balasubramanian and Palanivenkatesan Murugan will talk about the VMware vSAN HCI solution for mission-critical Oracle workloads.

This session will showcase the deployment of Oracle clustered and non-clustered databases, along with running I/O-intensive workloads on vSAN, and will also cover seamlessly running database day 2 operations such as backup and recovery, database cloning, data refreshes, and database patching using vSAN capabilities.

The post Oracle on vSAN HCI - VMworld 2017 appeared first on Virtualize Business Critical Applications.

Read more..

Three Extreme Performance Talks from the Office of the CTO at VMworld USA

The Office of the CTO will be presenting three talks in the unofficial “Extreme Performance” series at the upcoming VMworld 2017 conference in Las Vegas. In addition, one of these talks will be delivered at VMworld Europe in Barcelona. Each of these talks focuses on important aspects of pushing the envelope to achieve high performance […]

The post Three Extreme Performance Talks from the Office of the CTO at VMworld USA appeared first on VMware | OCTO Blog.

Read more..
