Cloud Operations

IT Teams Need To Finally Implement Self-Service Automation

Seven things, including Self-Service Automation, that IT teams must do to retain relevance with end users.

I was talking with an industry analyst (think Gartner, Forrester, IDC) the other day about a broad range of trends impacting IT. Somehow we got onto a discussion of IT teams losing relevance with their line-of-business customers. I can’t recall the conversation exactly, but it went something like this: “David, I talk with different IT teams every week and many of them ask me the same question: ‘What can we as IT do to be more relevant to our end users [line of business]?’”

Jim (not his real name) told me that the first question he asks these teams in response is “Are you offering your customers self-service?” The answer he hears back most often is “No, we haven’t gotten to that yet.” Jim then advises these teams to A) leverage an automated approach to service delivery to speed up resource delivery (if they are not already doing so); and B) implement self-service that makes it drop-dead easy for end users to get the services they want.

If you think about it, not implementing self-service is denying the reality that line of business partners have choices beyond enterprise IT. It also fails to recognize that increasingly our expectations of how things should work, at work, are shaped by our personal and consumer experiences. Self-service and the near instant gratification that comes from it just makes more sense today than submitting tickets and waiting weeks for resources to be available for your next critical project.

My Top “X” List for IT

This exchange got me thinking about the big-ticket items that most IT teams must tackle to be more relevant to their end users. If the #1 thing that IT teams must do to retain or regain relevance is embrace self-service, what does a top-ten list look like? Sorry to disappoint, but I don’t have a top-ten list. There are, however, some things that I feel stand apart from the rest of the pack when it comes to looking at how IT operates. So, in that spirit, here is my list of the top seven things IT must do to remain relevant.

1. Implement Self Service for Resource Requests
2. Market IT Services to your End Users
3. Enable Infrastructure as Code
4. Become an IT Developer
5. Begin to Think about Multi-Cloud Networking
6. Go Beyond Infrastructure and Deliver Complete Stacks
7. Help App Dev Teams Move Containers to Production

There are undoubtedly other things that IT teams can do that would increase their relevance to line-of-business (LOB) partners. Having said that, I do think this is a pretty good list to start with. There’s too much here to cover in a single blog, so I’ll elaborate on each of these in this blog and several others that will follow. Hopefully, along the way I will provide enough insight on each to give you a good idea of what it is that IT must do, along with some additional thoughts on how to get it done.

Starting with Self Service

According to Wikipedia, and depending on how you look at it, Amazon Web Services has been around since 2002 or 2006. Early adopters flocked to it for two reasons, in my opinion. The first was the ability to get infrastructure fast. The second was the ability to get those resources without having to file one or more tickets with the help desk.

Today, giving end users the ability to get resources fast is simply a matter of automation. Many organizations have adopted automation to dramatically speed up the provisioning of infrastructure resources. Application-level resources are a different matter, but we’ll cover that elsewhere.

I have first-hand experience talking with many IT teams who used to take four or more weeks to provision resources but now routinely do it in under thirty minutes. Of course, with Amazon you can get those resources in just a few minutes, so thirty minutes is still longer than it would take using AWS. But let’s be honest: how many developers find out about a project and then need to be coding it five minutes later? Thirty minutes is plenty fast for most needs.

While many organizations have adopted, or are in the process of adopting, automation to speed up service delivery, not nearly as many have implemented self-service as part of that process. Many still rely on the request-fulfillment processes that existed before automation was implemented. The most common example is organizations using ServiceNow for requesting resources, which in turn generates a ticket to the platform automation team, which then initiates an automated process to fulfill the request.

Leveraging an existing ticketing process isn’t necessarily a bad approach, and there are some good reasons for doing it. The main one that I am aware of is that this approach means any existing process for determining who has access to specific resources doesn’t need to be re-codified into the automation that supports self-service.

That’s not a bad reason to keep the existing process, but remember that if you are an internal IT team, you’re competing with the public cloud, and on the public cloud self-service means self-service. No tickets and no help desk. So going the extra mile to enable true self-service, where entitlements and other forms of governance are matched between users and resources, might be worth it for your IT team given the world we live and compete in.

Now a few caveats around the idea of self-service. Different end users have different needs. Many end users are perfectly happy selecting resources from a pre-populated catalog. VMware vRealize Automation is a great example of an automation platform that supports this model of self-service.

In this model, blueprints can be created to represent everything from a single machine to a complex, multi-tier application, with storage, networking, security, and even monitoring agents all represented in the blueprint. These blueprints then become catalog items that, once selected by an end user, are instantiated in near real time.

Other users might prefer a self-service model that is closer to what they would experience on Amazon. This model is declarative in nature, and resources are requested either through a CLI or through an API (using scripts or another tool) in the form of small building blocks that represent infrastructure elements such as compute, storage, or network. For IT teams looking for such a model to satisfy their end users, VMware Integrated OpenStack (VIO) might be the best choice for a service delivery automation platform.

A hybrid model might be the best choice for others. Here, vRealize Automation is used to offer VM-level resources from a catalog, but it is also used to reserve resources for a VIO-based developer cloud that an App Dev team would like to implement. In this model, vRealize Automation would also be used to provision the components necessary to instantiate a VIO-based developer cloud for that same App Dev team.

Just for completeness, I should point out that vRealize Automation can also support the idea of blueprints as code, where blueprints are created or modified using YAML. These blueprints can then be imported into vRealize Automation and offered to end users through the catalog. The same blueprints can, of course, be exported as YAML as well.
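To make the blueprints-as-code idea concrete, here is a rough sketch of what a simple YAML blueprint might look like. The exact schema varies by vRealize Automation version, so the field names below are illustrative assumptions rather than the authoritative format:

```yaml
# Illustrative only - field names approximate a blueprint-as-code
# definition and may differ in your vRealize Automation version.
name: web-server-blueprint
version: "1.0"
components:
  web-vm:
    type: Cloud.Machine        # a single machine component
    properties:
      image: ubuntu-16.04      # hypothetical image name
      flavor: medium           # hypothetical t-shirt size
      networks:
        - name: app-network    # hypothetical network reference
```

A file like this can live in version control alongside application code, which is the real payoff of the blueprints-as-code approach.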

The Right Self-Service Model for Your End Users

Hopefully you can see that solutions to the self-service problem exist along a continuum. Figuring out what type of self-service model to implement is very much a function of understanding your users. There are different approaches, and you won’t be sure which one makes the most sense unless you are actively engaged in understanding the needs of your users.

Having a deep understanding of what your end users need is also the prerequisite for our next “must-do” item, which is effectively marketing what you offer to your end users. More to come on that in the next installment of this series.

Learn More

• Visit the VMware vRealize product page
• Visit the VMware Integrated OpenStack product page
• Try our Automate IT Hands-on Labs
• Try our Intelligent Operations Hands-on Labs

The post IT Teams Need To Finally Implement Self-Service Automation appeared first on VMware Cloud Management.

Read more..

Getting Insights into Configuration & Compliance with vROps 6.6 Dashboards

In my last post, I discussed the Capacity & Utilization use cases and how vRealize Operations Manager 6.6 can answer those questions with the help of out of the box dashboards. We also looked at how the Getting Started Page can be used to dive into each of these use cases.

In this post we will talk about Configuration & Compliance. The Configuration and Compliance category caters to administrators who are responsible for managing configuration drift within a virtual infrastructure. Since most issues in a virtual infrastructure are the result of inconsistent configurations, dashboards in this category highlight inconsistencies at various levels, such as virtual machines, hosts, clusters, and virtual networks. You can view a list of configuration improvements that help you avoid problems caused by misconfigurations.

Here is how Configuration & Compliance shows up on the Getting Started Page:

 

Key questions these dashboards help you answer are:

 

1- Are the vSphere clusters consistently configured for high availability and optimal performance?

2- Are the ESXi hosts consistently configured and available to use?

3- Are the Virtual Machines sized and configured as per recommended best practices?

4- Are virtual switches configured optimally?

5- Is the environment configured in accordance with the vSphere Hardening Guide?

 

Let us look at each of these dashboards. I will provide a summary of what each dashboard can do for you, along with a quick view of the dashboard:

Cluster Configuration

The Cluster Configuration dashboard provides a quick overview of your vSphere cluster configurations. It highlights the areas that are important for delivering performance and availability to your virtual machines. The dashboard quickly calls out clusters that are not configured for DRS, HA, or Admission Control, helping you avoid resource bottlenecks or availability issues in case of a host failure.

The heatmap on this dashboard quickly identifies hosts where vMotion is not enabled, which would prevent VMs from moving to or from those hosts. This could cause performance issues for the VMs living on such a host if it gets too busy. The dashboard also provides a quick view of how consistently your clusters are sized and whether the hosts in each of those clusters are consistently configured.

The Cluster Properties view in this dashboard allows you to easily report on all these parameters by simply exporting the data and sharing it with the relevant stakeholders within your organization.

To see this dashboard in action, click here.

 

Host Configuration

The Host Configuration dashboard provides a quick overview of your ESXi host configurations and captures inconsistencies so you can take corrective action. Along with configurations, the dashboard measures the ESXi hosts against vSphere best practices and calls out any deviation that can impact the performance or availability of your virtual infrastructure.

While you can always view this data using the dashboards, the ESXi Configuration view on this page allows you to export the data and share it with the administrators responsible for managing the hosts.

To see this dashboard in action, click here.

 

Network Configuration

The Network Configuration dashboard provides a detailed view of virtual switch configuration and utilization. On selecting a virtual switch, you can see the list of ESXi hosts, DV Port Groups, and virtual machines being served by the selected switch.

You can easily identify any misconfigurations within various network components by reviewing the properties listed in the views within the dashboard. The drill down to the virtual machine levels allows you to track important information such as IP address and MAC address assigned to the virtual machines.

A network administrator can use this dashboard to gain visibility into the virtual infrastructure's network configuration.

To see this dashboard in action, click here.

 

VM Configuration

The Virtual Machine Configuration dashboard focuses on highlighting the key configurations of the virtual machines in your environment. The goal of this dashboard is to help you find inconsistencies of configuration within your virtual machines in order to take quick remediation measures. This helps you safeguard the applications which are hosted on these virtual machines by avoiding potential issues due to misconfigurations.

Some of the basic issues the dashboard focuses on include identifying VMs running older VMware Tools versions, VMs where VMware Tools is not running, and virtual machines running on large disk snapshots. VMs with such symptoms can suffer performance issues, so it is important to ensure that they do not deviate from the defined standards.

This dashboard is complemented by an out-of-the-box report named “Virtual Machine Inventory Summary,” which can be used to report the configurations highlighted on this dashboard for quick remediation.

To see this dashboard in action, click here.

 

vSphere Hardening Compliance

The vSphere Hardening Compliance dashboard measures your environment against the vSphere Hardening Guide and lists the objects which are non-compliant. You can see the trend of High Risk, Medium Risk, and Low Risk violations and the overall compliance score of your virtual infrastructure.
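The post does not say how the overall compliance score is calculated internally. Purely as an illustration of the idea, a score could penalize violations in proportion to their risk level; the weights and function below are assumptions for the example, not vRealize Operations' actual formula:

```python
# Hypothetical scoring sketch - the real vROps compliance score is
# computed by the product; this only illustrates risk-weighted scoring.
RISK_WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def compliance_score(total_checks, violations):
    """Return a 0-100 score, penalizing higher-risk violations more.

    violations is a list of risk levels, one entry per failed check.
    """
    if total_checks == 0:
        return 100.0
    # Worst case: every check fails at high risk
    max_penalty = total_checks * RISK_WEIGHTS["high"]
    penalty = sum(RISK_WEIGHTS[risk] for risk in violations)
    return round(100.0 * (1 - penalty / max_penalty), 1)

# 50 checks, one high-, one medium-, two low-risk violations
print(compliance_score(50, ["high", "medium", "low", "low"]))  # 95.3
```

Tracking a number like this over time is what lets the trend views show whether the environment is drifting toward or away from compliance.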

The dashboard also allows you to drill down into various components to check compliance for your ESXi hosts, Clusters, Port Groups and virtual machines using heatmaps.

Each non-compliant object is listed in the dashboard with recommendations on remediation required to secure your virtual infrastructure.

To see this dashboard in action, click here.

 

I hope this post gives you insight into how each of these dashboards can help you ensure that your environments are configured per VMware best practices and comply with the security standards set by your organization.

 

The post Getting Insights into Configuration & Compliance with vROps 6.6 Dashboards appeared first on VMware Cloud Management.

Read more..

PCI-DSS & HIPAA compliance with vRealize Operations

Hardening and Compliance for vSphere

For some time now vRealize Operations has been able to check the vSphere environment against VMware’s vSphere Hardening Guidelines – vRealize Operations vSphere Hardening.

 

More and more organizations need to meet certain regulatory requirements, such as PCI-DSS and HIPAA. With the recent release of vRealize Operations 6.6, VMware has introduced PCI-DSS and HIPAA compliance for vSphere. This is available to clients with vRealize Operations Advanced edition and higher.


Download and Install the Management Packs for PCI-DSS and HIPAA

Let’s start with where you need to go to get this content. Simply go to VMware’s Marketplace (also known as the VMware Solution Exchange) at https://marketplace.vmware.com. A simple search on PCI-DSS or HIPAA will get you to the vRealize Operations Management Packs.

 

Install the Management Pack(s) you desire. This is done on the ADMINISTRATION page under SOLUTIONS.

 

Enable PCI-DSS and HIPAA compliance for vSphere

Now that the solution management packs are installed, simply make sure they are turned on. This is done in the policy by enabling the alerts. Go to step 6 in the policy and do two searches: the first for PCI DSS and the second for HIPAA.

Change the STATE column from “Inherited Blocked” to “Local Enabled” to enable the alerts (essentially enabling the compliance checking).

 

Leveraging the vSphere Hardening Compliance dashboard you will now be able to see any alerts related to PCI DSS and HIPAA in addition to the already available (if turned on) vSphere compliance alerts.


Object Level View

From here you can also drill into an object and check on its compliance posture!


Reports

After installing these solution Management Packs you will notice that each has installed a compliance report: one for PCI-DSS and one for HIPAA. This is a great way to check on your compliance posture and make sure that you are trending upward over time (getting to PCI and HIPAA compliance doesn’t happen overnight). Here’s a report snippet below.


vRealize Operations Current Standards Coverage

  • vSphere Hardening Guidelines for 5.5
  • vSphere Hardening Guidelines for 6.0
  • PCI DSS 3.2 for vSphere (as of July 2017 – download the management pack)
  • HIPAA for vSphere (as of July 2017 – download the management pack)
  • vSphere Hardening Guidelines for 6.5 (Management Pack is Planned, but I can’t provide any dates - sorry)


Summary

Want to harden your vSphere environment? Do you need to adhere to PCI-DSS or HIPAA regulatory requirements for your vSphere environment? Visit the VMware market place today! https://marketplace.vmware.com


The post PCI-DSS & HIPAA compliance with vRealize Operations appeared first on VMware Cloud Management.

Read more..

Manage Capacity and Utilization with vRealize Operations 6.6 Dashboards

I hope you enjoyed my last post on running production operations with the out-of-the-box dashboards in vRealize Operations Manager 6.6. While that post was focused on providing visibility into your environments, in this post we will drill down into the specific topic of Capacity Management in a cloud environment.

While I have worked with most of the roles within an IT organization, I believe one of the most demanding roles is that of the person managing capacity for a virtual environment. This role requires one to be on their feet at all times to get answers to complex problems that revolve around capacity management. I tend to divide these complex problems into five simple categories:

 

1- How much Capacity do I have?

2- How is it being utilized?

3- How much Capacity is left?

4- When will I run out of Capacity?, and

5- When do I need to trigger the next purchase?

 

While the above questions sound simple, when you apply them to a Software-Defined Datacenter they become extremely complex. The complexity is primarily due to the fact that you are sharing physical resources, via the hypervisor, between multiple operating systems and applications riding on top of virtual machines. While the focus seems to be capacity, another major dimension one needs to take care of is performance. The above-mentioned role is also responsible for ensuring that all the virtual machines running in this environment are being served well. It is essential that the capacity owner strike a balance between performance and capacity, which makes this problem harder to solve.
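Question 4 above ("When will I run out of capacity?") is, at its simplest, a trend extrapolation. As a back-of-the-envelope illustration (not how vRealize Operations computes it internally), a linear fit over historical usage yields a days-remaining estimate:

```python
def days_until_full(usage_history, total_capacity):
    """Estimate days until capacity is exhausted via a simple linear fit.

    usage_history: one sample per day (e.g. GB used).
    Returns None if usage is flat or shrinking.
    """
    n = len(usage_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_history) / n
    # Least-squares slope: average growth per day
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_history))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    if slope <= 0:
        return None
    return (total_capacity - usage_history[-1]) / slope

# 2 GB/day growth, 100 GB total, 60 GB used today -> 20 days left
print(days_until_full([52, 54, 56, 58, 60], 100))
```

Real capacity analytics are considerably more sophisticated (demand vs. allocation models, stress, seasonality), but the underlying question is the same extrapolation.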

With vRealize Operations 6.6 we try to answer these questions with out-of-the-box dashboards. It was important that all this valuable IP be easily accessible through a centralized console which acts as an anchor for users of vRealize Operations Manager. To achieve this, we introduced a “Getting Started” dashboard which steps you through some useful categories and use cases.

 

Today we will have a look at the second category, which is called Capacity and Utilization. Here is how this category shows up on the Getting Started Page:

As mentioned before, the Capacity and Utilization category caters to the teams responsible for tracking the utilization of the provisioned capacity in their virtual infrastructure. The dashboards within this category allow you to make capacity procurement decisions, reduce wastage through reclamation, and track usage trends to avoid performance problems due to capacity shortfalls.

Key questions these dashboards help you answer are:

  • How much capacity do I have, how much is used, and what are the usage trends for a specific vCenter, datacenter, or cluster?
  • How much disk, vCPU, or memory can I reclaim from large VMs in my environment to reduce wastage and improve performance?
  • Which clusters have the highest resource demands?
  • Which hosts are being heavily utilized, and why?
  • Which datastores are running out of disk space, and who are the top consumers?
  • What do the storage capacity and utilization of my vSAN environment look like, along with the savings achieved by enabling deduplication and compression?

 

Let us look at each of these dashboards. I will provide a summary of what each dashboard can do for you, along with a quick view of the dashboard:

 

Capacity Overview

The Capacity Overview Dashboard provides you a summary of the total physical capacity available across all your environments being monitored by vRealize Operations Manager. The dashboard provides a summary of CPU, Memory and Storage Capacity provisioned along with the resource reclamation opportunities available in those environments.

Since capacity decisions are mostly tied to logical resource groups, this dashboard allows you to assess capacity and utilization at each resource-group level, such as vCenter, Datacenter, Custom Datacenter, or vSphere Cluster. You can quickly select an object and view its total capacity and used capacity to understand the current capacity situation. Capacity planning requires visibility into historical trends and future forecasts, so the trend views within the dashboard provide this information to predict how soon you will run out of capacity.

If you plan to report the current capacity situation to others within your organization, you can simply expand the Cluster Capacity Details view on this dashboard and export this as a report for sharing purposes.

To see this dashboard in action, click here.

 

Capacity Reclaimable

The Capacity Reclaimable dashboard provides a quick view of resource reclamation opportunities within your virtual infrastructure. This dashboard is focused on improving the efficiency of your environment by reducing the wastage of resources. While this wastage is usually caused by idle or powered-off virtual machines, another big contributor is oversized virtual machines.

This dashboard allows you to select an environment and quickly view the amount of capacity which can be reclaimed, in the form of reclaimable CPU, memory, and disk space.

You can start with the view which lists all the virtual machines that are running on old snapshots or are powered off. These VMs provide you the opportunity to reclaim storage by deleting the old snapshots or by deleting the unwanted virtual machines. You can take these actions right from this view by using the actions framework available within vRealize Operations Manager.

The dashboard provides recommended best practices around reclaiming CPU and memory from large virtual machines in your environment. Since large and oversized virtual machines can increase contention between VMs, you can use a phased approach of aggressive or conservative reclamation techniques to right-size your virtual machines.
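To illustrate the aggressive-versus-conservative distinction: a conservative approach sizes to observed peak demand plus headroom, while an aggressive one sizes closer to average demand. The thresholds and headroom below are invented for the example, not vRealize Operations' actual policy:

```python
import math

def rightsize_vcpus(demand_samples, mode="conservative"):
    """Suggest a vCPU count from observed per-sample vCPU demand.

    Headroom factor and modes are illustrative assumptions only.
    """
    peak = max(demand_samples)
    avg = sum(demand_samples) / len(demand_samples)
    if mode == "conservative":
        target = peak * 1.2          # size to peak demand + 20% headroom
    else:
        target = avg * 1.2           # aggressive: size near average demand
    return max(1, math.ceil(target))

samples = [2.0, 2.5, 3.0, 8.0, 2.0]  # mostly quiet, one burst to 8 vCPUs
print(rightsize_vcpus(samples, "conservative"))  # 10
print(rightsize_vcpus(samples, "aggressive"))    # 5
```

The gap between the two answers is exactly why a phased approach makes sense: start conservative, watch for contention, then tighten.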

To see this dashboard in action, click here.

 

vSAN Capacity Overview

The vSAN Capacity Overview dashboard provides an overview of vSAN storage capacity along with savings achieved by enabling dedupe and compression across all your vSAN clusters.

The dashboard allows you to answer key questions around capacity management such as total provisioned capacity, current and historical utilization trends and future procurement requirements. You can view things like capacity remaining, time remaining and storage reclamation opportunities to take effective capacity management decisions.

The dashboard also focuses on how vSAN is using the disk capacity by showing you a distribution of utilization amongst vSAN disks. You can view these details either as an aggregate or at individual cluster level.

To see this dashboard in action, click here.

 

Datastore Utilization

The Datastore Utilization dashboard is a quick and easy way to identify storage provisioning and utilization patterns in a virtual infrastructure. It is a best practice to have standard datastore sizes to ensure you can easily manage storage in your virtual environments. The heatmap on this dashboard plots each and every datastore monitored by vRealize Operations Manager and groups them by clusters.

The utilization pattern of these datastores is depicted by color: grey represents an underutilized datastore, red represents a datastore running out of disk space, and green represents an optimally used datastore.
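The grey/green/red banding maps naturally to simple utilization thresholds. The cut-off percentages below are assumptions chosen for the example, not the dashboard's exact values:

```python
def datastore_color(used_pct):
    """Classify datastore utilization the way a heatmap might color it.

    The 20% / 85% thresholds are illustrative assumptions.
    """
    if used_pct < 20:
        return "grey"    # underutilized
    if used_pct > 85:
        return "red"     # running out of disk space
    return "green"       # optimally used

print([datastore_color(p) for p in (10, 50, 95)])  # ['grey', 'green', 'red']
```

Whatever the exact thresholds, the point of the heatmap is that both extremes are problems: red means risk, grey means stranded capacity.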

By selecting a datastore, you can see the past utilization trend and forecasted usage. The view within the dashboard lists all the virtual machines running on the selected datastore and provides you with the opportunity to reclaim storage used by large virtual machine snapshots or powered-off VMs.

You can use the vRealize Operations Manager action framework to quickly reclaim resources by deleting the snapshots or unwanted powered off VMs.

To see this dashboard in action, click here.

 

Cluster Utilization

The Cluster Utilization dashboard allows you to identify the vSphere clusters that are being heavily consumed from a CPU, memory, disk, and network perspective. High or unexpected resource usage on clusters may result in performance bottlenecks. Using this dashboard you can quickly identify the clusters which are struggling to keep up with the virtual machine demand.

On selecting a cluster with high CPU, Memory, Disk or Network demand, the dashboard provides you with the list of ESXi hosts that are participating in the given cluster. If you notice imbalance between how the hosts within the selected clusters are being used, you might have an opportunity to balance the hosts by moving virtual machines within the cluster.

In situations where cluster demand has been chronically high, virtual machines should be moved out of those clusters using Workload Balance to avoid potential performance issues. If such patterns are observed on all the clusters in a given environment, it indicates that new capacity might be required to cater to the increase in demand.

To see this dashboard in action, click here.

 

Heavy Hitter VMs

The Heavy Hitter VMs dashboard helps you identify virtual machines which are consistently consuming high amounts of resources from your virtual infrastructure. In heavily overprovisioned environments, this might create resource bottlenecks resulting in potential performance issues.

With this dashboard you can easily identify the resource utilization trends of each of your vSphere clusters. Along with the utilization trends, you are provided with a list of virtual machines within those clusters ranked by their CPU, memory, disk, and network demands. The views also analyze the workload pattern of these VMs over the past week to identify heavy hitters which might be running a sustained heavy workload (measured over a day) or a bursty workload (measured using peak demand).
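The sustained-versus-bursty distinction above boils down to comparing a VM's average demand against its peak. As a sketch, with hypothetical thresholds (not vRealize Operations' real ones):

```python
def classify_workload(samples, sustained_pct=70, burst_pct=90):
    """Label a VM's demand samples (0-100 %) as sustained heavy, bursty, or normal.

    Thresholds are illustrative assumptions for this example.
    """
    avg = sum(samples) / len(samples)
    peak = max(samples)
    if avg >= sustained_pct:
        return "sustained heavy"     # high demand measured over the whole period
    if peak >= burst_pct:
        return "bursty"              # demand is high only at peaks
    return "normal"

print(classify_workload([75, 80, 78, 82, 77]))   # sustained heavy
print(classify_workload([20, 15, 95, 18, 22]))   # bursty
```

The two categories call for different remedies: sustained heavy hitters may genuinely need more resources or a quieter cluster, while bursty VMs mostly need to be spread out so their peaks don't coincide.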

You can export the list of offenders using these views and take appropriate actions to distribute this demand and reduce potential bottlenecks.

To see this dashboard in action, click here.

 

Host Utilization

The Host Utilization dashboard allows you to identify the hosts that are being heavily consumed from a CPU, memory, disk, and network perspective. High or unexpected resource usage on hosts may result in performance bottlenecks. Using this dashboard you can quickly identify the hosts which are struggling to keep up with the virtual machine demand. The dashboard also provides you with the list of top 10 virtual machines to easily identify the source of this unexpected demand and take appropriate actions.

Since the demand for resources fluctuates over time, the dashboard allows you to look at demand patterns over the last 24 hours to identify hosts which might have a chronic history of high demand. In such cases, virtual machines should be moved out of these hosts to avoid potential performance issues. If such patterns are observed on all the hosts of a given cluster, it indicates that new capacity might be required to cater to the increase in demand.

To see this dashboard in action, click here.

 

VM Utilization

The VM Utilization dashboard helps the VI administrator capture the utilization trends of any virtual machine in the environment. The primary use case is to list the key properties of a virtual machine and its resource utilization trends for a specific time period, and share them with the VM/application owners.

The VM/application owners often want to look at resource utilization trends for specific time periods when they are expecting high load on applications; batch jobs, backup schedules, and load testing are a few examples. The application owners want to ensure that VMs are not consuming 100% of the provisioned resources during these periods, as that could lead to resource contention within applications, causing performance issues.

To see this dashboard in action, click here.

I hope this post gives you insight into how each of these dashboards can help you manage capacity and performance and ensure that you have the answers to those difficult questions at your fingertips. Stay tuned for more information on the other categories.

 

The post Manage Capacity and Utilization with vRealize Operations 6.6 Dashboards appeared first on VMware Cloud Management.

Read more..

Introducing “Operations” Dashboards in vRealize Operations 6.6

Now that you have had a sneak preview of the launch of vRealize Operations 6.6, it is time to unwrap the goodies and take you through the new features in detail. One of my favorite areas of vRealize Operations Manager is dashboards. A dashboard, for me, is like an empty canvas which allows you to paint a picture of what is most important to you when it comes to managing day-to-day IT operations. Whether you are a help desk engineer or a CIO, to be successful in your role you need quick access to meaningful information. While there are numerous tools which will provide you data, the art of filtering that data down into information is what matters when it comes to decision making.

Dashboards, being an empty slate, allow you to do so in a quick and efficient manner. This enhanced capability allowed us to create multiple out-of-the-box categories matching the various personas of users in an IT organization. The result is a set of out-of-the-box dashboards which will give you a jump start into running production operations from day 1. These dashboards are battle-tested in large IT organizations and are now part of vRealize Operations Manager 6.6.

It was important that all this valuable IP be easily accessible through a centralized console which acts as an anchor for users of vRealize Operations Manager. To achieve this, we introduced a “Getting Started” dashboard which steps you through some useful categories and use cases.

 

Today we will have a look at the first category, which is called Operations. Here is how Operations shows up on the Getting Started Page:

 

The Operations category is most suitable for roles within an organization that require a summary of important data points to make quick decisions. This could be a member of a NOC team who wants to quickly identify issues and take action, or executives who want a quick overview of their environments to keep track of important KPIs.

 

Key questions these dashboards help you answer are:

  • What does the infrastructure inventory look like?
  • What is the alert volume trend in the environment?
  • Are virtual machines being served well?
  • Are there hot-spots in the datacenter I need to worry about?
  • What does the vSAN environment look like, and are there optimization opportunities from migrating VMs to vSAN?

 

Let us look at each of these dashboards; I will provide a summary of what each dashboard can do for you, along with a quick view of it.

Datastore Usage Overview

The Datastore Usage dashboard is suitable for a NOC environment. It provides a quick glimpse of all the virtual machines in your environment using a heatmap, where each virtual machine is represented by a box. Using this dashboard, a NOC administrator can quickly identify virtual machines generating high IOPS, since the boxes are sized by the number of IOPS each virtual machine generates.

Along with the storage demand, the color of each box represents the latency that virtual machine is experiencing from the underlying storage. A NOC administrator can take the next steps in the investigation to find the root cause of the latency and resolve it to avoid potential performance issues.

To see this dashboard in action, click here.
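To make the sizing-and-coloring idea concrete, here is a minimal Python sketch of how a heatmap cell could be derived from per-VM metrics. The VM names, field names, and latency thresholds are my own illustrative assumptions, not values taken from the product:

```python
# Hypothetical sketch of a datastore-usage heatmap cell: box size comes
# from IOPS, box color from storage latency. Thresholds are illustrative.

def heatmap_cell(vm):
    """Size the box by IOPS; color it by latency (green/yellow/red)."""
    latency = vm["latency_ms"]
    if latency < 10:
        color = "green"      # healthy
    elif latency < 25:
        color = "yellow"     # worth watching
    else:
        color = "red"        # investigate for potential performance issues
    return {"name": vm["name"], "size": vm["iops"], "color": color}

# Sample (made-up) VM metrics
vms = [
    {"name": "web-01", "iops": 4200, "latency_ms": 32},
    {"name": "db-01", "iops": 900, "latency_ms": 6},
]
cells = [heatmap_cell(vm) for vm in vms]
```

With this data, `web-01` would render as a large red box (high IOPS, high latency) and `db-01` as a small green one.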

 

Host Usage Overview

The Host Usage dashboard is suitable for a NOC environment. It provides a quick glimpse of all the ESXi hosts in your environment using a heatmap. Using this dashboard, the NOC administrator can easily find resource bottlenecks created by high memory demand, memory consumption, or CPU demand.

Since the hosts in the heatmap are grouped by cluster, you can easily find out whether you have clusters with high CPU or memory load. It can also help you identify ESXi hosts within clusters that are not evenly utilized, so an administrator can trigger activities such as workload balancing or enable DRS to ensure that hotspots are eliminated.

To see this dashboard in action, click here.
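The "unevenly utilized cluster" check described above can be sketched in a few lines; the host data, metric names, and the spread threshold below are hypothetical, not product internals:

```python
# Hypothetical sketch: group hosts by cluster and flag clusters where CPU
# demand varies widely across hosts, suggesting hotspots DRS could even out.
from collections import defaultdict
from statistics import pstdev

def imbalanced_clusters(hosts, spread_threshold=20.0):
    """Return clusters whose per-host CPU demand spread exceeds the threshold."""
    by_cluster = defaultdict(list)
    for h in hosts:
        by_cluster[h["cluster"]].append(h["cpu_demand_pct"])
    return [cluster for cluster, demands in by_cluster.items()
            if pstdev(demands) > spread_threshold]

# Sample (made-up) host metrics: "prod" is lopsided, "test" is balanced
hosts = [
    {"cluster": "prod", "cpu_demand_pct": 95},
    {"cluster": "prod", "cpu_demand_pct": 20},
    {"cluster": "test", "cpu_demand_pct": 50},
    {"cluster": "test", "cpu_demand_pct": 55},
]
flagged = imbalanced_clusters(hosts)
```

Here only "prod" gets flagged: one host at 95% demand next to one at 20% is exactly the kind of hotspot the heatmap makes visible at a glance.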

 

Operations Overview

The Operations Overview dashboard provides a high-level view of the objects that make up your virtual environment. It gives you an aggregate view of virtual machine growth trends across the datacenters being monitored by vRealize Operations Manager.

The dashboard also lists all your datacenters along with inventory information about how many clusters, hosts, and virtual machines you are running in each. By selecting a particular datacenter, you can zoom into the areas of availability and performance. The dashboard provides a trend of known issues in each datacenter based on the alerts that have been triggered in the past.

Along with the overall health of your environment, the dashboard also allows you to zoom in at the virtual machine level and list the top 15 virtual machines in the selected datacenter that might be contending for resources.

To see this dashboard in action, click here.
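The "top 15 contending VMs" view boils down to a rank-and-slice over a contention metric; this sketch uses made-up VM names and contention numbers purely for illustration:

```python
# Minimal sketch of a "top N contenders" list: sort VMs by a contention
# metric (highest first) and keep the worst N. Data is hypothetical.

def top_contending(vms, n=15):
    """Return the n VMs with the highest resource contention."""
    return sorted(vms, key=lambda v: v["contention_pct"], reverse=True)[:n]

# 30 sample VMs with steadily increasing contention values
vms = [{"name": f"vm-{i:02d}", "contention_pct": i * 1.5} for i in range(30)]
worst = top_contending(vms)
```

Sorting before slicing keeps the logic trivial even as the inventory grows; the dashboard applies the same idea scoped to the selected datacenter.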



Optimize vSAN Deployments

The Optimize vSAN Deployments dashboard is an easy way to devise a migration strategy for moving virtual machines from your existing storage to your newly deployed vSAN storage. The dashboard gives you the ability to select non-vSAN datastores that might be struggling to serve the virtual machine IO demand. By selecting the VMs on a given datastore, you can easily identify the historical IO demand and latency trends of a given virtual machine.

You can then find a suitable vSAN datastore that has the space and the performance characteristics to serve the demand of this VM. With a simple move operation within vRealize Operations Manager, you can move the virtual machine from the existing non-vSAN datastore to the vSAN datastore.

Once the VM is moved, you can continue to watch the utilization patterns to see how the VM is being served by vSAN.

To see this dashboard in action, click here.
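The datastore-selection step above can be sketched as a simple filter over capacity and performance headroom. All field names, datastore names, and thresholds here are assumptions for illustration, not the product's API:

```python
# Hedged sketch of the vSAN target-selection step: keep only vSAN datastores
# with enough free space for the VM and latency/IOPS headroom for its demand.

def suitable_vsan_datastores(datastores, vm_size_gb, vm_iops,
                             latency_limit_ms=10):
    """Return names of vSAN datastores able to host the VM's demand."""
    return [ds["name"] for ds in datastores
            if ds["is_vsan"]
            and ds["free_gb"] >= vm_size_gb
            and ds["avg_latency_ms"] <= latency_limit_ms
            and ds["iops_headroom"] >= vm_iops]

# Sample (made-up) datastore inventory
datastores = [
    {"name": "vsanDatastore-01", "is_vsan": True, "free_gb": 500,
     "avg_latency_ms": 4, "iops_headroom": 10000},
    {"name": "vsanDatastore-02", "is_vsan": True, "free_gb": 80,
     "avg_latency_ms": 3, "iops_headroom": 12000},
    {"name": "nfs-datastore", "is_vsan": False, "free_gb": 900,
     "avg_latency_ms": 30, "iops_headroom": 500},
]
candidates = suitable_vsan_datastores(datastores, vm_size_gb=120, vm_iops=3000)
```

For a 120 GB VM demanding 3,000 IOPS, only `vsanDatastore-01` qualifies: the second vSAN datastore lacks free space, and the NFS datastore is not vSAN at all.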


vSAN Operations Overview

The vSAN Operations Overview dashboard provides an aggregated view of the health and performance of your vSAN clusters. You get a holistic view of your vSAN environment and the components that make it up, and you can also see the growth trend of the virtual machines being served by vSAN.

The goal of this dashboard is to help you understand the utilization and performance patterns for each of your vSAN clusters by simply selecting one from the provided list. vSAN properties such as Hybrid or All-Flash, Dedupe & Compression, or a Stretched Cluster configuration can be easily tracked through this dashboard.

Along with the current state, the dashboard also provides you a historic view of performance, utilization, growth trends and events related to vSAN.

To see this dashboard in action, click here.

 

I hope this post gives you insight into how each of these dashboards can help you run smoother operations and ensures that you have answers to those difficult questions at your fingertips. Stay tuned for more information on the other categories.

 

The post Introducing “Operations” Dashboards in vRealize Operations 6.6 appeared first on VMware Cloud Management.

Read more..

Private vs Public Cloud Costs: Surprising 451 Research Results

Some amazing news is coming out of a new study by 451 Research on Private vs Public Cloud costs. This just-published study found that 41% of IT decision makers feel that their Private Cloud implementation is cheaper than a “like for like” Public Cloud environment. Another 24% of respondents indicated that they were paying no more than a 10% premium for their Private Cloud over the cost of a comparable Public Cloud. A companion study captured the stories of five enterprises achieving TCO for their Private Clouds on par with what could be achieved, from a cost perspective, using the Public Cloud.

 

Exploding a Well Worn Myth Around Private vs Public Cloud Costs

The results of the study fly in the face of what I believe most people would naturally assume. I think that if I were to ask 100 people who have anything to do with IT infrastructure, IT operations, or application development, “Can a Private Cloud be as cost effective as a Public Cloud?”, a clear majority would respond with a resounding “no way”. The prevailing mythology that Public Cloud is always cheaper is very powerful.

While the primary driver for adopting a cloud strategy, whether Private, Public, or Hybrid, generally isn't cost reduction, the ability to achieve the best possible cost outcome without compromising agility objectives is still important to most enterprises. To this point, the 451 Research study found that the primary motivators for a Private Cloud were data protection, asset ownership, and business process integration. However, the study also found that achieving cost efficiencies, while not the top motivator for building a Private Cloud, was still an important factor in deciding on an overall cloud strategy direction.

Believe it or not, it's hard to find much in the way of data on Private Cloud vs Public Cloud costs. Because our customers are always asking us for information on this topic, VMware asked 451 Research, a top analyst research firm, to carry out a study on Private versus Public Cloud costs. We picked 451 Research because of their well-respected work on their Cloud Pricing Index program.

The Cloud Pricing Index program that 451 Research conducts has been around for many years now. Through it, 451 Research continuously looks at the cost of various Public Cloud offerings. As part of that same research, they have also developed guidelines on what they call “golden ratios”. These golden ratios provide guidance on the maximum number of administrators an organization can have, relative to the number of VMs they run, while still being as cost effective as a comparable Public Cloud offering. The 451 team's long history of successfully analyzing Public Cloud costs, and their understanding of what it takes to run a Private Cloud at similar cost efficiencies, gave us a high degree of confidence that they had the right stuff to help answer the question of whether Private Clouds can ever be as cost effective as Public Clouds.
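As a toy illustration of how a golden ratio could be applied in practice: the study's actual ratio figures are not quoted here, so the 1-admin-per-400-VMs number below is purely a placeholder, not a figure from 451 Research.

```python
# Illustrative only: check whether an admin-to-VM staffing level stays
# within a (placeholder) golden ratio of 400 VMs per administrator.

def within_golden_ratio(admins, vms, vms_per_admin=400):
    """True if the team is lean enough to match public-cloud cost efficiency."""
    return admins > 0 and admins * vms_per_admin >= vms

# e.g. 5 admins running 1,800 VMs stay within the 1:400 placeholder ratio,
# while the same team running 2,500 VMs would exceed it
lean = within_golden_ratio(5, 1800)
stretched = within_golden_ratio(5, 2500)
```

The point of the ratio is the direction of the inequality: automation lets a fixed team manage more VMs, pushing per-VM operating cost toward Public Cloud levels.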

Driving Lower Private Cloud Costs Relies On Capacity Planning, Automation and Costing

Last year I wrote a blog titled “Extreme Automation also the Key to Successful Private Clouds.” The main idea behind that blog was that Public Cloud providers are cost effective because they are masters of using automation to reduce the cost of IT operations for the environments they rent out. An extension of that point was that builders of Private Clouds could achieve TCOs comparable to Public Cloud if they embraced automation with the same zeal as Public Cloud providers do.

This most recent 451 Research study validated this idea. The study found that capacity planning (a form of automation) and more general IT process automation technologies were two of the strongest contributors to the outcome of achieving a Private Cloud that had a TCO on a per unit cost basis that was as good or better than that of comparable Public Cloud offerings. Rounding out the top four contributors to successful outcomes were the effective use of cost/budgetary tools (also a form of automation) and the ability to achieve favorable license terms from technology vendors.

The second 451 Research paper adds depth to the findings of the first by capturing key parts of the stories of five companies that have implemented Private Cloud and are achieving solid cost efficiencies as part of their efforts. Their experiences highlight specific benefits achieved through the use of various automation technologies. For instance, one company indicated that it was able to reduce the FTE cost associated with the delivery of IT services by as much as 80% while simultaneously increasing overall staff productivity by as much as 25%.

Your Company Can Achieve Similar Results

As already pointed out, from a technology standpoint the core capabilities cited by 451 Research as necessary to achieve a highly cost-efficient Private Cloud were in the areas of capacity planning, automation, and costing. These three areas are addressed by core capabilities delivered through the VMware vRealize Suite (part of the VMware SDDC).

vRealize Suite is designed to work hand in hand with VMware Cloud Foundation to provide a common approach to building and running an enterprise-grade hybrid cloud. Hybrid clouds based on the VMware Software-Defined Data Center provide high levels of agility, efficiency and control across both traditional and cloud native, container-based applications. VMware-based clouds also give IT organizations the ability to provide choice to their application development teams by supporting multiple models for requesting services and giving developers the freedom to use the tools of their choice.

Our experience as a leading provider of cloud solutions is that most organizations are headed in the direction of Hybrid Cloud. However, given that the majority of the world's workloads still reside in on-premises data centers, it makes sense for organizations to embrace an approach to their Private Cloud that achieves both high levels of agility and cost efficiency. Organizations shouldn't have to sacrifice one for the other. Also, if IT teams are going to be credible with lines of business about their ability to effectively manage a Hybrid Cloud, they will first need to prove that they can master their own Private Cloud environments.

Building and running a Private Cloud that is as cost effective as a Public Cloud is not easy. If it were, all organizations would be doing it. But the fact that 41% of companies surveyed were able to build and run a Private Cloud at cost efficiencies better than Public Cloud alternatives shows that it can be done. This 41% also points the way to the “how” of achieving greater cost efficiencies: a willingness to invest in capacity planning, automation, and costing technologies was fundamental to their success. If you are in the 41% group, congratulations and keep up the good work. If you are not there yet, take heart. It definitely is achievable. In the end, there may be many reasons that you decide to move workloads to the Public Cloud, but cost doesn't have to be one of them.

Learn more

  • 451 Research Paper: Can private cloud be cheaper than public cloud? 41% said yes, and the survey reveals how (Free download)
  • 451 Research Paper: Can private cloud be cheaper than public cloud? Real stories about how companies run their private cloud cheaper (Free download)
  • Visit VMware vRealize product page
  • Try our Automate IT Hands-on Labs
  • Try our Intelligent Operations Hands-on Labs

The post Private vs Public Cloud Costs: Surprising 451 Research Results appeared first on VMware Cloud Management.

Read more..


How to configure Auto-Scaling for Private Cloud

Purpose:

Have you seen the auto-scaling feature provided in public cloud solutions like AWS and Azure and wished you had the same feature in your private cloud environment? Do you have an existing private cloud environment, or are you building a new one, and want to make it auto-scale enabled? This post covers exactly that topic. It explains what auto-scaling is and provides a step-by-step guide on how you can build it using various VMware products.

Introduction:

In recent months, during my interactions with customers, one requirement came up more often than others: auto-scaling. It seems the majority of customers who deploy a Private Cloud require auto-scaling in some form or other. Since, out of the box, vRealize Automation provides “Scale-Out” and “Scale-In” functions (albeit manual), these can be used in conjunction with other products to provide auto-scaling functionality. I had to configure this feature for multiple customers, so I thought of writing a blog post detailing the steps so readers can follow along and do it themselves. Also, auto-scaling is very dynamic in nature; typically the auto-scaling parameter requirements change from environment to environment. Keeping that in mind, I have explained the steps involved so that you can customize them to your needs.

Required prior knowledge:

Though you can simply import the package into vRealize Orchestrator and follow the guide to configure the other products, having knowledge of the following will help you configure it further.

  • Working knowledge of vRealize Orchestrator
  • If you want to customize the workflows, then you need to know a bit of JavaScript
  • For configuration of Webhook Shims, basic knowledge of Linux will help (though it is not strictly required; John Dias did an amazing job providing a step-by-step guide).
  • Familiarity with vRealize Operations Manager will help
  • Working knowledge of vRealize Automation is required
  • If you want to replicate my example of a multi-tier application with application installation at runtime, then you need to know NSX usage and advanced configuration of blueprints in vRealize Automation.

If you are using vRealize Automation 7.2 or prior, then this blog post on NSX integration with vRA will help. The integration method has changed in 7.3 and has been simplified a lot. Check the VMware documentation on how it is done in vRA 7.3.

For how to configure software components, you can check my earlier blog post here.

Acknowledgement:

Before I start writing this blog, I need to say thanks to a few people. Though I demonstrated this feature (with PowerCLI and vCenter) to a customer two years back, it was never a true auto-scaling solution. So, here it goes:

  • First and loudest, thanks to Carsten Schaefer for the com.vmware.pso.cemea.autoscaling package. It contained the main “Scale Up Blueprint Layer based on VM MOID” and “Scale Down Blueprint Layer based on VM MOID” workflows. All my other work is based on these two core workflows; they do the actual task. So thanks a lot, mate, for your hard work and help.
  • Thanks to Vishal Jain, Diwan Chandrabose, Ajay Kalla, and team for the load balancer handling script. Normally, when an alert fires, it is based on a VM. But when network load comes from the load balancer and it fires an alert, we get the load balancer name. The script correlating the load balancer to the corresponding virtual machine was written by this team. They showed how we can use the NSX and vROps integration to handle load balancer parameters. Thanks a lot, guys.
  • Last but not least, Vinith Menon. I was wondering how I would put load on the test website; I was thinking of using JMeter, but that was overkill just to fire HTTP requests at a web page. Your one-liner is absolutely fantastic and a real time saver. Thanks a lot, brother.
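To make the load-balancer-to-VM correlation idea concrete, here is a minimal sketch in Python. The real script (by Vishal Jain and team) talks to the NSX and vCenter APIs; in this sketch the inventory data is passed in as plain dictionaries so only the mapping logic is shown. All names and the data shapes are illustrative assumptions, not the actual script.

```python
# Hypothetical sketch: map an NSX edge load-balancer pool back to the VMs
# behind it, given a vCenter-style IP -> VM-name lookup table.

def pool_member_ips(edge_lb_config, pool_name):
    """Return the member IP addresses of one pool in an edge LB config."""
    for pool in edge_lb_config.get("pools", []):
        if pool["name"] == pool_name:
            return [m["ipAddress"] for m in pool.get("members", [])]
    return []

def vms_behind_pool(edge_lb_config, pool_name, ip_to_vm):
    """Correlate pool member IPs to VM names via an IP -> VM lookup."""
    return [ip_to_vm[ip]
            for ip in pool_member_ips(edge_lb_config, pool_name)
            if ip in ip_to_vm]

# Example inventory (entirely made up):
lb_config = {"pools": [{"name": "web-pool",
                        "members": [{"ipAddress": "10.0.0.11"},
                                    {"ipAddress": "10.0.0.12"}]}]}
ip_map = {"10.0.0.11": "web-01", "10.0.0.12": "web-02"}

print(vms_behind_pool(lb_config, "web-pool", ip_map))  # ['web-01', 'web-02']
```

Once the VM names are known, the alert can be handed to the per-VM scale workflows just like a normal VM-based alert.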

My friend Vinith Menon has also written a blog post on auto-scaling. You can check it here.
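If you want to generate test load without JMeter, something in the spirit of Vinith's one-liner can be sketched in Python using only the standard library. The URL and request count below are placeholders; adjust them for your own test site.

```python
# Rough load generator: fire repeated sequential HTTP GETs at a test URL
# to drive up load on the web tier. Illustrative only, not Vinith's script.
import urllib.request

def generate_load(url, requests=100):
    """Send `requests` sequential GETs to `url`; return how many completed."""
    done = 0
    for _ in range(requests):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()
            done += 1
        except OSError:
            pass  # ignore transient errors while load-testing
    return done

# Example (placeholder URL):
# generate_load("http://test-web-site.example/", requests=500)
```

For a heavier or sustained load you would run several copies in parallel, but even a simple loop like this is enough to trip a CPU or network alert on a small test VM.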

Where to get the package for auto-scaling?

I have created a single vRealize Orchestrator package containing all the workflows, the SNMP policy, and the action items. Download the package from the GitHub repository (https://github.com/sajaldebnath/auto-scaling-vra) and import it into your vRealize Orchestrator server. The remaining details are covered in the rest of this post.

What is included in the package?

The following workflows are included in the package:

  • Scale Down Blueprint Layer based on VM MOID
  • Scale Up Blueprint Layer based on VM MOID
  • Scale Down vRA Deployment based on LB Load - SNMP
  • Scale Up vRA Deployment based on LB Load - SNMP
  • Scale Down vRA Deployment based on CPU-Mem Load - vROps REST Notification
  • Scale Up vRA Deployment based on CPU-Mem Load - vROps REST Notification
  • Scale Down vRA Deployment based on LB Load - vROps REST Notification
  • Scale Up vRA Deployment based on LB Load - vROps REST Notification

The helper workflows are:

  • Count VMs in Layer
  • Get VM Name from vROps REST Notification
  • JSON Invoke a REST operation
  • Submit to vRA

The action items are:

  • getCatalogResourceForIaasVmEntity
  • findObjectById
  • getVirtualMachineProperties

The included SNMP policy is:

  • vROPS SNMP Trap for NSX

Note: the first two workflows are the core workflows (written by Carsten); all the other workflows depend on these two to get the work done. If you are not using Webhook Shims, you do not need to configure the workflows ending with “vROps REST Notification”. Likewise, for SNMP you do not need to configure the “Get VM Name from vROps REST Notification” and “JSON Invoke a REST operation” workflows. Conversely, if you are not going to use SNMP traps, you do not need to configure the SNMP policy.
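To show what the “Get VM Name from vROps REST Notification” helper essentially has to do, here is a small Python sketch that pulls the triggering resource (VM) name out of the JSON body that Webhook Shims forwards from a vROps alert. The payload shape below is an assumption for illustration; inspect the actual notification body in your environment. (The real helper is a vRO JavaScript workflow, not Python.)

```python
# Illustrative parser for a (hypothetical) vROps alert notification body.
import json

def vm_name_from_notification(body):
    """Extract the triggering resource name from an alert notification body."""
    data = json.loads(body)
    # Try a top-level field first, then a nested "alert" object (assumed shapes).
    return data.get("resourceName") or data.get("alert", {}).get("resourceName")

sample = json.dumps({"alertId": "1234",
                     "resourceName": "web-01",
                     "status": "ACTIVE"})
print(vm_name_from_notification(sample))  # web-01
```

Once the VM name is known, the rest of the chain looks it up in vRA and calls the appropriate scale-up or scale-down workflow.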

Pre-Requisites:

Before you can run everything, you need to have the environment ready. I used the following versions:

  • vRealize Automation 7.3
  • vCenter & vSphere 6.5
  • vRealize Operations 6.5
  • vRealize Orchestrator 7.3 (internal to vRA)
  • NSX 6.3
  • Webhook Shims

The workflows should work with other versions as well. You need to have these products installed, configured, and integrated to follow the example end-to-end.

Conclusion:

You can use the steps detailed in the video to configure auto-scaling in your environment. This is an amazing feature, and it would be a real help if you could test it out and let me know the outcome. Any further suggestions are welcome. I hope this helps you as much as it helped me. Please share your feedback, and let me know if I missed something or you need further clarification.

 

The post How to configure Auto-Scaling for Private Cloud appeared first on VMware Cloud Management.


vRealize Operations 6.6: “I’m too sexy for my skin!”

vRealize Operations 6.6 gets sleeker and sexier in its new skin!

On June 13, many of the vRealize Suite components (namely vRealize Operations (vR Ops), vRealize Log Insight (vRLI), and vRealize Business for Cloud (vRBC)) had an updated GA release. With this latest release of vR Ops, a big focus has been on “simplifying the user experience” and “quicker time to value”. We really want to simplify the lives of “Anita” the VI Admin, “Brian” the Infrastructure and Operations Admin, and “Ethan” the Fire Fighter.

The slick new HTML5 UI is based on the Clarity Design System. Upon logging in to vR Ops you will see the following screen. Notice that the left-hand pane has single-click links to commonly used environment overview dashboards, including Workload Balancing, and brings Log Analytics and Business and Cost Insights together in one place: vR Ops.

 

 

See the FULL picture!

While we’re here let us first take a quick peek at bringing together Log Analytics with vR Ops. We commonly refer to this as “360 degree troubleshooting” as you are able to troubleshoot across structured and unstructured data in one place.

 

See the BIG picture!

Secondly, take a look at how Cloud Costing and vR Ops come together. Imagine being able to do things like capacity management or forecasting and being able to see the cost associated with these activities, or looking at reclaiming capacity and being able to associate a dollar figure to the potential cost (and resource) savings.

 

BALANCE your life, yes you “Anita”, “Brian” and “Ethan”!

WOW, how about the enhanced Workload Balancing? Validate and modify DRS settings, rebalance unbalanced Data Centers or Custom Data Centers, and automate!

 

Persona-Based Content

Let’s head over to the DASHBOARD page and start with the “Getting Started” dashboard. This is a persona-based dashboard that lets the user browse five categories of dashboards and very quickly open any of them from there. The categories are: Operations, Capacity & Utilization, Performance Troubleshooting, Workload Balance, and Configuration & Compliance. Also included are vSAN dashboards as well as vRealize Automation (vRA) dashboards. In this release both vSAN and vRA are natively supported, so anyone using them can quickly take advantage of this native support.

 

Resolve Issues Faster

What about alerting? You can now slice and dice alerts any way you want, helping you resolve alerts and fix issues faster.

 

Secure the Software Defined Data Center

So what about securing the Software Defined Data Center (SDDC)? Well, that’s really important! You can now install - out of band - PCI DSS and HIPAA compliance content for the vSphere environment. This helps organizations with regulatory requirements improve their compliance posture.

 

Summary

vRealize Operations 6.6 has made some incredible improvements, inspired by many of you out there who continue to challenge VMware and the Cloud Management Business Unit to do better. Thank you! Simplification, quicker time to value, persona-based dashboards and troubleshooting flows, enhanced fully automatable workload balancing, improved alerting to resolve issues quicker, and better securing of the SDDC are just scratching the surface of what vRealize Operations and vRealize Suite can help you with. I hope you enjoy this release!

 

Download and Try vRealize Operations 6.6 here!

The post vRealize Operations 6.6: “I’m too sexy for my skin!” appeared first on VMware Cloud Management.
