Getting VM Performance Metrics via API

Some time ago I detailed the steps required to Get Performance Stats with the Nutanix API v2.0. In that article, the main point of focus was storage-related metrics, specifically performance numbers for a Nutanix cluster’s storage container.

While that information is useful on its own, it is important to be aware of how it differs from getting performance statistics for our most common entity: a virtual machine. That’s what we’ll talk about today.

A little background

Those who have read previous articles like Nutanix API versions – What are they and what does each one do? will know we have a number of different APIs. How and where you access each API dictates the functionality available to you. For example:

  • Infrastructure-wide reporting for multiple clusters: v3 API accessed only via Prism Central
  • Storage container performance information: v2.0 API accessed only via Prism Element

VM performance metrics, the topic of today’s article, are accessed via the v1 API and only via Prism Element. Let’s get started.

Requesting VM-specific information

We’re not going to mess around too much today, so let’s take a look at the following API request URI.

GET https://{{cluster_ip}}:9440/PrismGateway/services/rest/v1/vms/ \
cd2c7b17-68b9-4670-866f-dd27e2b69804

What does that request do?

  • Sends an API request to the cluster virtual IP address indicated by {{cluster_ip}}.
  • Sends the request via port 9440, the same port used to access the v2.0 and v3 APIs today.
  • Specifically requests the vms API.
  • Specifically requests information about a VM with UUID cd2c7b17-68b9-4670-866f-dd27e2b69804.
  • In our environment the VM with UUID cd2c7b17-68b9-4670-866f-dd27e2b69804 is named Windows2016-Metrics.

In my test environment, replacing {{cluster_ip}} with my cluster IP address of 10.133.16.50, we’ll get the following response. Note all requests in this article have been carried out using Postman.
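
If you’d rather make the same request outside Postman, here is a minimal Python sketch using the requests library. The credentials are hypothetical placeholders for your environment, and certificate verification is disabled only because lab clusters often use self-signed certificates.

import requests
from requests.auth import HTTPBasicAuth

# Placeholder values: replace with those appropriate for your environment
cluster_ip = "10.133.16.50"
vm_uuid = "cd2c7b17-68b9-4670-866f-dd27e2b69804"
username = "admin"     # hypothetical credentials
password = "password"  # hypothetical credentials

# Request the full VM object, including its "stats" section
url = f"https://{cluster_ip}:9440/PrismGateway/services/rest/v1/vms/{vm_uuid}"
response = requests.get(url, auth=HTTPBasicAuth(username, password), verify=False)
response.raise_for_status()
vm = response.json()
print(vm.keys())  # includes the VM name and, importantly, "stats"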

Response from v1 API for VM with UUID cd2c7b17-68b9-4670-866f-dd27e2b69804

Note the two objects indicated by arrows:

  • The VM name, Windows2016-Metrics, as mentioned earlier.
  • The most important object in the response: stats.

If we isolate the stats section alone, you’ll see there’s a decent list of performance-related stats (metrics) available to us.

“stats” section of a “vms” request for a specific VM
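
Continuing the Python sketch above, isolating that section is just a matter of grabbing the stats key from the VM object:

# Isolate the "stats" section of the VM object
stats = vm["stats"]
for metric, value in stats.items():
    print(f"{metric}: {value}")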

Sample Types

When making requests similar to the example above, we have a number of choices. First, in terms of which metrics to return:

  • A request that does not specify any particular metric, returning all available stats. This is the request shown above.
  • A request that specifies exactly which metric we are interested in. For example, hypervisor.cpu_ready_time_ppm.

Second, in terms of how the samples are returned:

  • As a snapshot or point-in-time value, i.e. the metric value at the time of the request.
  • As a series of samples starting from a specified date and time and continuing until the date and time the request was submitted. Example: performance stats for the last 24 hours.
  • As a series of samples starting from a specified date and time and continuing until a specified date and time. Example: performance stats between 0000 and 0100 on September 20th 2019.

As we look at some examples below, please note the use of placeholders such as {{vm_metric}} and {{cluster_ip}}. In a real-world situation these would be replaced with values appropriate for your environment. Please also note the escape character used to split long lines.

Snapshot/point-in-time metrics

The request below returns a specific metric at the time of the request.

GET https://{{cluster_ip}}:9440/PrismGateway/services/rest/v1/vms/ \
{{vm_uuid}}/stats/?metrics={{vm_metric}}
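
As a small continuation of the earlier Python sketch, the same point-in-time request can be made by pointing at the stats endpoint and passing the metric as a query parameter:

# Point-in-time request for a single metric
vm_metric = "hypervisor.cpu_ready_time_ppm"  # example metric from this article
stats_url = f"https://{cluster_ip}:9440/PrismGateway/services/rest/v1/vms/{vm_uuid}/stats/"
response = requests.get(
    stats_url,
    params={"metrics": vm_metric},
    auth=HTTPBasicAuth(username, password),
    verify=False,
)
print(response.json())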

Specifying the sample start time

The request below returns a specific metric but specifies that the results are sampled starting at a specific time and continuing until the time the request was submitted.

GET https://{{cluster_ip}}:9440/PrismGateway/services/rest/v1/vms/ \
{{vm_uuid}}/stats/?metrics={{vm_metric}}&startTimeInUsecs={{startTimeInUsecs}}

Specifying both sample start time and sample end time

The request below returns a specific metric but specifies the results are to start at a specific time and end at a specific time.

GET https://{{cluster_ip}}:9440/PrismGateway/services/rest/v1/vms/ \
{{vm_uuid}}/stats/?metrics={{vm_metric}}&startTimeInUsecs= \
{{startTimeInUsecs}}&endTimeInUsecs={{endTimeInUsecs}}

Start and end times – usecs

In the requests above you’ll notice the use of the following variables:

  • startTimeInUsecs
  • endTimeInUsecs

These are Unix time values, adjusted to usecs (microseconds) elapsed since the Unix Epoch, i.e. 00:00:00 UTC on January 1st, 1970 (minus leap seconds). How you convert a human-readable date & time into microseconds elapsed since the Unix Epoch will vary based on the language/script being used, but there are also a number of free online tools that can help with the conversion. In the past I’ve used the Epoch & Unix Timestamp Converter from freeformatter.com and simply multiplied the seconds-based value by 1,000,000, i.e. appended six zeroes, to represent microseconds.
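
If you’d rather do the conversion in code than in a browser, a short Python sketch will do it. Note this works in UTC; the human-readable times shown later in this article appear to be in the author’s local timezone.

from datetime import datetime, timezone

# 21 September 2019, 00:00:00 UTC expressed as microseconds since the Unix Epoch
start = datetime(2019, 9, 21, 0, 0, 0, tzinfo=timezone.utc)
start_time_in_usecs = int(start.timestamp() * 1_000_000)
print(start_time_in_usecs)  # 1569024000000000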

Specifying sample interval

When requesting metrics that specify the start/end times, we can also specify the sample interval in seconds. The request below returns a specific metric, specifies the results are to start and end at specific times, and also specifies that the results should be sampled every 30 seconds.

GET https://{{cluster_ip}}:9440/PrismGateway/services/rest/v1/vms/ \
{{vm_uuid}}/stats/?metrics={{vm_metric}}&startTimeInUsecs= \
{{startTimeInUsecs}}&endTimeInUsecs={{endTimeInUsecs}}&intervalInSecs=30

Constructing a complete request

Now that we’ve seen the various request types and are familiar with how those requests can be made, let’s look at the construction of a complete request. This request will be similar to those shown above but placeholder variables will be replaced with actual values. This will make it easier for us to look at a real response later.

For our complete request, the following values will be used.

  • {{cluster_ip}} – 10.0.0.1
  • {{vm_uuid}} – cd2c7b17-68b9-4670-866f-dd27e2b69804
  • {{vm_metric}} – hypervisor.cpu_ready_time_ppm
  • {{startTimeInUsecs}} – 1569024000000000 i.e. 21/09/2019, 10:00:00
  • {{endTimeInUsecs}} – 1569024300000000 i.e. 21/09/2019, 10:05:00
  • {{intervalInSecs}} – 30

Please remember to replace these values with those appropriate for your environment.

With those values substituted, the complete request URI is as follows.

GET https://10.0.0.1:9440/PrismGateway/services/rest/v1/vms/ \
cd2c7b17-68b9-4670-866f-dd27e2b69804/stats/? \
metrics=hypervisor.cpu_ready_time_ppm \
&intervalInSecs=30 \
&startTimeInUsecs=1569024000000000 \
&endTimeInUsecs=1569024300000000
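
Putting that together, here is an end-to-end sketch of the same request in Python, continuing from the earlier examples (credentials remain hypothetical placeholders):

# The complete request, with the article's example values substituted in
params = {
    "metrics": "hypervisor.cpu_ready_time_ppm",
    "intervalInSecs": 30,
    "startTimeInUsecs": 1569024000000000,
    "endTimeInUsecs": 1569024300000000,
}
response = requests.get(
    stats_url,  # the /vms/{vm_uuid}/stats/ URL built earlier
    params=params,
    auth=HTTPBasicAuth(username, password),
    verify=False,
)
response.raise_for_status()
data = response.json()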

Specifying multiple metrics

Within a single request, multiple metrics can also be specified as a comma-separated list. For example, the above request can be written to include both hypervisor.cpu_ready_time_ppm and controller.summary_read_source_ssd_bytes_per_sec as:

GET https://10.0.0.1:9440/PrismGateway/services/rest/v1/vms/ \
cd2c7b17-68b9-4670-866f-dd27e2b69804/stats/? \
metrics=hypervisor.cpu_ready_time_ppm,controller.summary_read_source_ssd_bytes_per_sec \
&intervalInSecs=30 \
&startTimeInUsecs=1569024000000000 \
&endTimeInUsecs=1569024300000000

Looking at the response

After submitting the above request, the stats can be accessed via the statsSpecificResponses.values object. Take a look at the sample response below.

Response from our constructed v1 API “stats” request
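
Continuing the sketch, each requested metric comes back as an entry in statsSpecificResponses, with the sampled data points in its values list. The field names below beyond values are assumptions based on typical v1 stats responses, so check them against your own response body.

# Walk the statsSpecificResponses structure
for metric_response in data["statsSpecificResponses"]:
    print(metric_response["metric"])  # assumed field name for the metric
    for value in metric_response["values"]:
        print(value)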

Processing the response

Here’s what we have accomplished so far.

  • Constructed an API v1 GET request to get VM-specific performance metrics.
  • Specified which metric we are interested in, i.e. hypervisor.cpu_ready_time_ppm.
  • Specified that the performance metrics are to be sampled every 30 seconds.
  • Specified that the performance metrics are to be sampled from 10:00:00 until 10:05:00 on September 21st 2019.

How you choose to process or work with those results is up to you. Examples could include:

  • A custom dashboard showing only specific performance metrics.
  • A report that generates a custom HTML page for use on an internal web server.
  • A report that produces machine-readable information for use with a third-party integration.
  • Creation of a CSV-style report for use in spreadsheet applications (a minimal sketch follows this list).
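
As an example of that last bullet, here is a minimal sketch that turns the sampled values into a CSV file. It continues from the earlier request sketch; the startTimeInUsecs and intervalInSecs fields inside each statsSpecificResponses entry are assumptions based on the request parameters being echoed back, so verify them in your own response.

import csv
from datetime import datetime, timezone

# Write one row per sample: timestamp, metric name, value
with open("vm_stats.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(["timestamp_utc", "metric", "value"])
    for metric_response in data["statsSpecificResponses"]:
        start_usecs = metric_response["startTimeInUsecs"]   # assumed field name
        step_usecs = metric_response["intervalInSecs"] * 1_000_000  # assumed field name
        for i, value in enumerate(metric_response["values"]):
            sample_time = datetime.fromtimestamp(
                (start_usecs + i * step_usecs) / 1_000_000, tz=timezone.utc
            )
            writer.writerow([sample_time.isoformat(), metric_response["metric"], value])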

How can you try this for real?

We understand the importance of being able to get VM and related information in code. After all, it is probably one of the most common uses of an API read request. What is happening right now? What happened before?

The Nutanix Developer Portal, aside from blog articles and code samples, makes available a list of complete hands-on labs. A number of these labs are specifically written with custom dashboards and reports in mind. For example:

  • Python 3 reporting lab: Create a very detailed Python 3 script that generates a Prism Central environment report in HTML format.
  • PHP Dashboard: Use the Nutanix REST APIs to build a custom dashboard based on Laravel PHP.
  • Python Flask Dashboard: Use the Nutanix REST APIs to build a custom dashboard based on Python Flask.

In addition to these specific report-based labs, the labs page provides a Dev Environment Setup Lab that walks you through preparing your laptop/workstation for the Nutanix Developer labs.

Wrapping up

Thanks for taking the time to read this slightly more detailed article.

Have a great day! 🙂
