Brock Peterson

VMware vROps What-If Analysis Part 2

Late last year we discussed the VMware vRealize Operations (vROps) What-If Analysis feature: https://www.brockpeterson.com/post/vmware-vrealize-operations-what-if-analysis


As of vROps 8.3, there were six tiles available in the What-If Analysis pane.

vROps 8.4 introduced a new tile called Migration Planning: VMware Cloud.

This gives you the ability to plan and compare workload migrations across the three different VMware Clouds: VMware Cloud (VMC) on AWS, Azure VMware Solution (AVS), and Google Cloud VMware Engine (GCVE). Let's run a scenario: go to the tile and click PLAN MIGRATION.

In this case I've chosen the US East region in VMC on AWS, the i3.metal Instance Type (the hardware your Cluster will be comprised of), and given 10% for both Slack Space and Steady state CPU headroom. VMC on AWS Host types can be found here: https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-98FD3BA9-8A1B-4500-99FB-C40DF6B3DA95.html
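To see why those two percentages matter, here's a minimal sketch of how a percentage reserve shrinks usable host capacity. The i3.metal core count is an illustrative assumption, and exactly how vROps applies Slack Space and CPU headroom internally is an assumption here, not documented behavior.

```python
# Hedged sketch: a percentage reserve (Slack Space for storage, steady-state
# headroom for CPU) holds back part of the raw host capacity.
# The 36-core i3.metal figure below is an assumption for illustration.

def after_reserve(raw, reserve_pct):
    """Capacity remaining once a percentage reserve is held back."""
    return raw * (1 - reserve_pct)

host_cores = 36                                  # assumed i3.metal core count
usable_cores = after_reserve(host_cores, 0.10)   # 10% steady-state CPU headroom
print(usable_cores)
```

The same arithmetic applies to the 10% Slack Space against the cluster's storage capacity.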


General documentation for VMware Cloud Migration can be found here: https://docs.vmware.com/en/vRealize-Operations-Manager/8.4/com.vmware.vcom.core.doc/GUID-4E3C5B1F-99EB-4A8A-B90B-48BC21BE6DA5.html


Once you've chosen your VMware Cloud and specified your Cluster Settings, you configure the APPLICATION Profile, which is to say: are you migrating new workloads, or would you like to run a scenario for the migration of existing VMs? Let's see what the migration of a handful of existing on-prem VMs looks like. Select the Import from existing VM radio button, then click the SELECT VMs box.

In this case, I'm running a scenario for the migration of all on-prem workloads with the string "peterson" in them. Click OK.

We now see a list of the VMs we're migrating to VMC on AWS and have the ability to configure projected growth and/or vSAN if applicable. Click RUN SCENARIO.

The scenario shows the total amount of CPU, Memory, and Disk to be migrated. The What-If Analysis engine has recommended 4 Hosts and given predictions on Total Capacity Usage as well as Total Cost. In this case, running my 8 VMs in VMC on AWS will cost me $12,385 per month for a 3 year subscription. I can adjust that by clicking CHANGE PLAN.

I can also adjust the discount I'm receiving by selecting the EDIT DISCOUNT link. This same process holds true for off-premises migrations to AVS and GCVE.


Now, let's talk about migrating to Public Cloud. Click the PLAN MIGRATION link in the Migration Planning: Public Cloud tile.

We have four Public Cloud options; I've chosen a region in Microsoft Azure. Note that you can add other cloud providers by clicking the ADD CLOUD PROVIDER link.


Next configure the VMs you'd like to migrate or import configurations of existing VMs. We will import the same on-prem VMs we used previously.

Click RUN SCENARIO.

The Private Cloud column is a list of my on-premises VMs and the Microsoft Azure column represents what the What-If Analysis engine is recommending for workload sizing in Azure. Where does it get these sizes? From the corresponding Public Cloud Rate Card, found in Administration - Configuration - Cost Settings.

Let's explore the Microsoft Azure Rate Card by clicking the ellipsis (three dots) next to Microsoft Azure and selecting Download. Once downloaded, let's have a look: it's just a spreadsheet.

You'll notice that there are Instance Names (corresponding to what the What-If Analysis is recommending), vCPU, RAM, OS, Geo, Storage, and more. The What-If Analysis takes your on-premises workload, finds the Geo you want to put it in, then sizes accordingly. Let's look at the first on-prem VM in our list as an example.

The What-If Analysis migration algorithm looks like this:

  1. Find the region - in this case us-central

  2. Find the Public Cloud VM template with the same number of vCPU - in this case 4

  3. Find the Public Cloud VM template with the same amount of Memory - in this case 12GB. If not an exact match the algorithm will take the next size up.

  4. Find the Public Cloud VM template with the same amount of Disk - in this case 200GB. Current Disk Usage (Virtual Machine - Disk Space - Virtual Machine used (GB)) is also considered for scenarios migrating existing workloads.
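The steps above can be sketched in code. This is a hedged reconstruction of the matching logic as described, not vROps source: the rate-card rows below are illustrative, and `pick_template` is a hypothetical helper name.

```python
# Hedged sketch of the What-If Analysis matching steps: filter by region,
# match vCPU exactly, then match memory (taking the next size up when there
# is no exact match). Rate-card rows are assumptions, not real Azure pricing.

RATE_CARD = [
    {"name": "D2 V2", "region": "us-central", "vcpu": 2, "ram_gb": 7},
    {"name": "D3 V2", "region": "us-central", "vcpu": 4, "ram_gb": 14},
    {"name": "D4 V2", "region": "us-central", "vcpu": 8, "ram_gb": 28},
]

def pick_template(region, vcpu, ram_gb):
    # Step 1: restrict to the chosen region.
    candidates = [t for t in RATE_CARD if t["region"] == region]
    # Step 2: exact vCPU match.
    candidates = [t for t in candidates if t["vcpu"] == vcpu]
    # Step 3: exact memory match, else the next size up.
    candidates = [t for t in candidates if t["ram_gb"] >= ram_gb]
    return min(candidates, key=lambda t: t["ram_gb"], default=None)

# Our example VM: 4 vCPU, 12GB Memory in us-central.
print(pick_template("us-central", 4, 12)["name"])
```

For our 4 vCPU / 12GB VM there is no exact 12GB match, so the 14GB D3 V2 is the next size up, which matches what the What-If Analysis recommends below.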

In our example above, filtering through the Azure Rate Card gives us this:

You'll notice it's the D3 V2 template called out by the What-If Analysis. If the amount of Total Storage that comes with the template isn't enough to match the on-prem VM being migrated, the What-If Analysis engine will capture more storage (via the Storage tab on the Rate Card). An example here would be my on-prem VM called vrlcm-bpeterson.

The What-If Analysis is recommending an Azure D2 V2 VM, which is a 2 vCPU, 7GB Memory, 100GB Disk template. However, we need 112GB Disk, so we go to the Storage tab and find the selected region. We then filter out the options with FALSE in the "Use by Recommender" field, as these are not to be used. In this case, we are left with the following.

The amounts are shown in column I: 8GB isn't enough, so the next option is an additional 16GB, which gets us to a total of 116GB Disk. Clicking the information bubble next to the 116GB shown in the What-If Scenario shows you exactly what disk is being recommended.
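That top-up step can be sketched as picking the smallest add-on disk that covers the shortfall. A minimal sketch, assuming the add-on tiers below (real tiers come from the Storage tab rows with "Use by Recommender" set to TRUE), and `top_up` is a hypothetical helper name:

```python
# Hedged sketch of the disk top-up step: choose the smallest single add-on
# disk that covers the gap between the template's bundled disk and what the
# migrating VM needs. Add-on sizes below are illustrative assumptions.

ADDON_SIZES_GB = [8, 16, 32, 64, 128, 256]

def top_up(template_disk_gb, required_gb):
    shortfall = required_gb - template_disk_gb
    if shortfall <= 0:
        return 0  # the template's bundled disk already suffices
    # Smallest add-on that covers the shortfall.
    return min(s for s in ADDON_SIZES_GB if s >= shortfall)

addon = top_up(100, 112)    # D2 V2 ships with 100GB, the VM needs 112GB
print(addon, 100 + addon)   # 12GB shortfall: 8GB is too small, 16GB fits
```

This reproduces the 100GB + 16GB = 116GB result above.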

There are times the What-If Analysis engine will recommend less Disk as well. This is based on the current usage, specifically the metric Disk Space - Virtual Machine used (GB). An example here might look something like this.

While the vCPU and Memory recommendations are clear, the Disk being recommended is far less than what is currently allocated. Why? Well, in short, because far less is being currently used. In this case, how much is being used?

While 1TB of Disk is allocated, only 271GB is being used, and the What-If Analysis engine will take this into account. Let's go to our matrix.

A B8MS Azure VM is being recommended, which comes with 64GB Disk. For more Disk we go to the Storage tab, filter as we did previously, and select the necessary amount of Disk, in this case an additional 256GB (as 128GB isn't enough). This gives us a total of 320GB Disk, which is enough to cover the 271GB currently in use.
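Putting the usage-based case together: the engine sizes against the used figure rather than the allocation, then tops up the template's bundled disk. A hedged sketch under the same assumed add-on tiers as before; `recommend_disk` is a hypothetical helper name, and sizing strictly to the used metric is an inference from the behavior described above:

```python
# Hedged sketch of usage-based disk sizing: when far less disk is used than
# allocated, size against "Virtual Machine used (GB)" (assumed rule), then
# add the smallest add-on disk that covers the remaining shortfall.

ADDON_SIZES_GB = [8, 16, 32, 64, 128, 256, 512]  # illustrative tiers

def recommend_disk(template_disk_gb, used_gb):
    shortfall = used_gb - template_disk_gb
    if shortfall <= 0:
        return template_disk_gb
    addon = min(s for s in ADDON_SIZES_GB if s >= shortfall)
    return template_disk_gb + addon

# B8MS ships with 64GB Disk; the VM has 1TB allocated but only 271GB used.
# Shortfall is 207GB: 128GB is too small, so 256GB is added.
print(recommend_disk(64, 271))
```

This reproduces the 64GB + 256GB = 320GB recommendation, covering the 271GB in use despite the 1TB allocation.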


Thanks to Senior Product Manager Meera Menon for her collaboration on this blog!




