Everything You Wanted to Know About VCF Operations Costing
- Brock Peterson
- Jul 27
- 7 min read
Updated: Jul 29
As soon as you set the Currency in VCF Operations, the Cost Engine starts calculating costs. It discovers your ESXi Hosts, assigns a cost to each, rolls those up into a Cluster cost, then calculates CPU/Memory/Disk base rates, which are used to calculate VM costs. There's a lot that goes into this process, so let's explore it in more detail!
All screenshots are taken from VCF Operations 9. To turn on the Cost Engine, go to Administration - Global Settings - Cost/Price and set your Currency.

We can now change Currency in VCF Operations 9, which wasn't previously possible. You can also set the time of day the Cost Engine runs; it runs once daily at that time. You can always trigger a manual cost calculation via Administration - Control Panel - Cost Calculation if you'd like.
Once set, the Cost Engine will discover your ESXi Host Hardware and assign a cost to each server based on industry standard average MSRP pricing, which you can adjust. Go to Infrastructure Operations - Configurations - Cost Drivers to see what server hardware has been discovered. Mine looks like this.

Expand the caret next to one of them to see details.

Here you can adjust the cost if you'd like, change the purchase date, switch the type from owned to leased, and set the percentage of the price that gets allocated to Compute. In this case I'm looking at Hyper-Converged servers, so this is the percentage of cost allocated to CPU/Memory. For Traditional (non-Hyper-Converged) servers, the split between CPU and Memory is handled internally.
The Cost Engine then sums up the server costs (and anything else you've added via Cost Drivers) and assigns the total to the Cluster those hosts comprise. For example, if I had a Cluster of 5 ESXi Hosts at the cost above, the Cluster cost would be $13,005 x 5 = $65,025.
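A quick sketch of that roll-up, using the hypothetical $13,005 per-host cost from the example above (your own Cost Drivers values will differ):

```python
# Hypothetical per-host cost taken from the example above; the real
# figure comes from your Cost Drivers settings.
HOST_COST = 13_005  # USD per ESXi Host

def cluster_cost(host_costs):
    """Sum the per-host costs (plus any other Cost Drivers) to get the Cluster cost."""
    return sum(host_costs)

# A Cluster of 5 identical hosts:
print(cluster_cost([HOST_COST] * 5))  # 65025
```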
Next, the Cost Engine calculates CPU, Memory, and Storage base rates based on what you have configured in Infrastructure Operations - Configurations - Cluster Cost.
There are two options here, plus a third enabled by setting your Cluster Overcommit Ratios in Policies. Note that one of the first two will always be running, even if you enable Allocation-based costing in your Policy. In summary:
Costing based on Cluster Usable Capacity after HA and Buffer: CPU, Memory, and Disk base rates are calculated from usable capacity after HA/Buffer. CPU is priced per GHz used, Memory per GB used, and Disk per GB used. Unused Cluster capacity isn't assigned; VMs are charged only for what they actually use. This is the default. Adjust Buffer settings in Policies - Capacity - Cluster Compute Resource - Capacity Buffer.
Costing based on Cluster Actual Utilization: CPU, Memory, and Disk base rates are calculated from the actual utilization of the Cluster. CPU is priced per GHz used, Memory per GB used, and Disk per GB used. However, all cost is allocated in this model: if there is only 1 VM running in the Cluster, it assumes the entire cost of the Cluster.
Costing based on Allocation: CPU and Memory base rates are based on Cluster totals and the overcommit ratios set in your Policy's capacity tab. To enable cost based on Allocation, go to your Policy - Capacity Tile - Capacity Settings, unlock the Allocation Model padlock, and set your overcommit ratios. If you don't set overcommit ratios here for CPU, Memory, and Disk, whichever method you selected above will be used.

Let's dig into the details; for reference, the documentation can be found here. There are three ways the Cost Engine calculates base rates (two listed here and one in Policies). The first is based on usable capacity after HA and Buffer. As documented, this is how base rates are determined:
VCF Operations calculates the cost of the Cluster from the cost drivers: it sums up hardware costs and adds anything else you've defined in Cost Drivers. After the Cluster cost is determined, it is split into CPU and Memory costs based on industry-standard cost ratios for the server model (adjustable in Cost Drivers). For Hyper-Converged servers you can define the split between Compute (CPU/Memory) and Storage via Cost Drivers.
The CPU base rate is computed by dividing the CPU cost of the Cluster by the CPU capacity of the Cluster. The CPU base rate is then prorated by dividing it by the expected CPU utilization to arrive at a true base rate for VMs; in this model the expected CPU utilization is 100%. For example, if the CPU cost of the Cluster is $100 and the CPU capacity is 100 GHz, the CPU base rate is $1/GHz. Since the expected CPU utilization in the Cluster is 100%, we divide $1/GHz by 1 to get a normalized base rate of $1/GHz.
The Memory base rate is computed by dividing the Memory cost of the Cluster by the Memory capacity of the Cluster. The Memory base rate is then prorated by dividing it by the expected Memory use percentage to arrive at a true base rate for VMs. For example, if the Memory cost of the Cluster is $100 and the Memory capacity is 100 GB, the Memory base rate is $1/GB. Since the expected Memory utilization in the Cluster is 100%, we divide $1/GB by 1 to get a normalized base rate of $1/GB.
The base rates are monthly rates.
In this model, VM cost is based on what each VM is actually using. There is unallocated cost, meaning the entire cost of a Cluster won't be assigned to a single VM even if that VM is the only one in the Cluster. This is the most frequently used method of costing in VCF Operations and is the default.
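The proration above can be sketched in a few lines. The $100 cost and 100 GHz/GB capacities are the example figures from the text, not real data, and this is only an illustration of the math, not the engine itself:

```python
def base_rate(resource_cost, usable_capacity, expected_utilization=1.0):
    """Monthly base rate = cost / capacity, prorated by expected utilization.

    Under the usable-capacity model the expected utilization is 100%,
    so the proration is a no-op.
    """
    return (resource_cost / usable_capacity) / expected_utilization

# CPU: $100 of Cluster CPU cost over 100 GHz of usable capacity -> $1/GHz
print(base_rate(100, 100))  # 1.0
# Memory: $100 over 100 GB of usable capacity -> $1/GB
print(base_rate(100, 100))  # 1.0
```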
The second way the Cost Engine calculates base rates is Cluster Actual Utilization.
VCF Operations calculates the cost of the Cluster from the cost drivers: it sums up hardware costs and adds anything else you've defined in Cost Drivers. After the Cluster cost is determined, it is split into CPU and Memory costs based on industry-standard cost ratios for the server model (adjustable in Cost Drivers).
The CPU base rate is computed by dividing the CPU cost of the Cluster by the CPU capacity of the Cluster. The CPU base rate is then prorated by dividing it by the expected CPU use percentage to arrive at a true base rate for VMs. For example, if the CPU cost of the Cluster is $100 and the CPU capacity is 100 GHz, the CPU base rate is $1/GHz. Now if the expected CPU utilization in the Cluster is 60%, we divide $1/GHz by 0.6 to get a normalized base rate of $1.67/GHz. Expected CPU utilization is determined using the month-to-date average CPU utilization of the Cluster.
The Memory base rate is computed by dividing the Memory cost of the Cluster by the Memory capacity of the Cluster. The Memory base rate is then prorated by dividing it by the expected Memory use percentage to arrive at a true base rate for VMs. For example, if the Memory cost of the Cluster is $100 and the Memory capacity is 100 GB, the Memory base rate is $1/GB. Now if the expected Memory utilization in the Cluster is 70%, we divide $1/GB by 0.7 to get a normalized base rate of $1.43/GB. Expected Memory utilization is determined using the month-to-date average Memory utilization of the Cluster.
You can either provide the expected CPU and Memory usage rates or you can use the actual CPU and memory usage values.
The base rates are monthly rates.
In this model, there is no unallocated cost, meaning if there is a single VM in a Cluster, that VM will be assigned the entire cost of the Cluster. This method is less frequently used for just that reason: Clusters with fewer VMs tend to skew VM costs higher.
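The same math with non-trivial expected utilization, using the example figures above (a sketch of the arithmetic, assuming the $100 costs and 100 GHz/GB capacities from the text):

```python
def utilization_base_rate(resource_cost, capacity, expected_utilization):
    """Base rate under the actual-utilization model: cost / capacity,
    divided by the month-to-date average utilization of the Cluster.
    """
    return (resource_cost / capacity) / expected_utilization

# $100 CPU cost, 100 GHz capacity, 60% expected utilization -> $1.67/GHz
print(round(utilization_base_rate(100, 100, 0.60), 2))  # 1.67
# $100 Memory cost, 100 GB capacity, 70% expected utilization -> $1.43/GB
print(round(utilization_base_rate(100, 100, 0.70), 2))  # 1.43
```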
The third way the Cost Engine calculates base rates is using Allocation, which is enabled in Policies by setting your overcommit ratios.

Keep in mind that the Cost Engine will always use one of the previous methods in addition to Allocation-based costing (should you choose to enable it). Allocation-based costing works like this:
vCPU base rate = Cluster CPU cost / number of vCPUs in the Cluster = B1
Memory base rate = Cluster Memory cost / total vMemory in the Cluster = B2
The cost computation is then based on your overcommit ratio. For example, if your CPU overcommit ratio is 4:1 and there are a total of 6 CPUs in your Cluster, then your vCPU count would be 24.
The VM cost in this case would be: vCPU allocated x B1 + vRAM allocated x B2 + storage cost + direct cost (as defined in cost drivers).
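A minimal sketch of that formula. The base rates and the $2,400 monthly Cluster CPU cost are hypothetical figures chosen for illustration; only the 4:1 ratio and 6-CPU count come from the example above:

```python
def vm_cost_allocation(vcpus, vram_gb, b1, b2, storage_cost=0.0, direct_cost=0.0):
    """VM cost under the Allocation model:
    vCPUs allocated x B1 + vRAM allocated x B2 + storage cost + direct cost.
    """
    return vcpus * b1 + vram_gb * b2 + storage_cost + direct_cost

cluster_cpu_cost = 2_400.0      # hypothetical monthly Cluster CPU cost
b1 = cluster_cpu_cost / (6 * 4) # 4:1 overcommit on 6 CPUs -> 24 vCPUs -> $100/vCPU
b2 = 10.0                       # hypothetical Memory base rate, $/GB

# A VM with 2 vCPUs and 8 GB vRAM, no storage or direct cost:
print(vm_cost_allocation(2, 8, b1, b2))  # 280.0
```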
Now, when you go to look at your VM Costs in Operations you will see several different Cost metrics.

The Demand-based metrics (those without Effective in their name) are the non-Allocation-based costs. As you can see above, they are the same in this case because I had my CPU overcommit ratio set to 1:1. If overcommit ratios haven't been set in your Policy, the Effective cost metrics will match the others as well.
Now, if I set my CPU overcommit ratio to 5:1, indicating I have 5x as many vCPUs to allocate, my vCPU base rate comes down, and with it the Effective CPU costs.
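To see why, here's a sketch with a hypothetical $3,000 monthly Cluster CPU cost on the 6-CPU Cluster from earlier (the cost figure is an assumption, not from the text):

```python
def vcpu_base_rate(cluster_cpu_cost, physical_cpus, overcommit_ratio):
    """B1 = Cluster CPU cost / vCPU count, where the vCPU count is the
    physical CPU count scaled by the overcommit ratio.
    """
    return cluster_cpu_cost / (physical_cpus * overcommit_ratio)

# Same hypothetical Cluster at 1:1 vs. 5:1 overcommit:
print(vcpu_base_rate(3_000, 6, 1))  # 500.0 per vCPU
print(vcpu_base_rate(3_000, 6, 5))  # 100.0 per vCPU, 5x lower
```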
My recommendation: use either the default "Cluster Usable Capacity after HA and Buffer" or set your Overcommit ratios and use the "effective" cost metrics.
Thanks to Kruti Rao Erraguntala for her review and valuable insight on this blog!