
Webhook Setup and Implementation

Published June 15, 2017

There’s no question that in today’s growing digital landscape, APIs play an indispensable role in allowing different tools, platforms, and devices to integrate with one another as seamlessly as possible. However, when DevOps and SRE teams are charged with monitoring the performance of the many digital properties under their company’s control, traditional API offerings are sometimes not enough to meet their needs.

With traditional REST APIs, such as the one available in Catchpoint’s standard API suite, users can retrieve, modify, and analyze the performance data collected by the Catchpoint platform. However, as with most REST-based APIs, large enterprises such as Google can run into system limits that cap the number of requests they can make in a given period of time.

Therefore, to satisfy the needs of clients who must collect large amounts of data in real time, Catchpoint offers the Test Data Webhook, which functions as a push API to automatically deliver performance data to a designated endpoint without having to make a specific API call.

With the webhook in place, users can collect, store, visualize, and analyze their performance data as it arrives, either on their own virtual private server or through ready-made integrations with cloud-based storage and visualization tools. It also means the data can be retained for longer than the three years the Catchpoint platform allows, and without any system limits, which is a major advantage when monitoring the performance of vast digital architectures.

Setting up a Test Data Webhook is a simple process within the Catchpoint platform: the user simply enters the URL of an endpoint they have configured. Once enabled, the data can be pushed out either as raw JSON or XML, to be put through an ETL (Extract, Transform, Load) process on the receiving end, or as a pre-formatted template, which removes the need for an ETL setup on the other end. Simple template macros can be used to format the data, e.g. by test name, test type, or specific metrics such as webpage response or document complete.
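
For teams that prefer to handle the raw JSON push themselves, a minimal receiving endpoint might look like the sketch below (Python with Flask). The payload field names used here are illustrative assumptions, not Catchpoint’s documented schema.

```python
# A minimal sketch of an endpoint that receives Test Data Webhook pushes.
# Field names such as "TestId" and "Summary" are assumptions for illustration,
# not the documented payload schema.
from flask import Flask, request

app = Flask(__name__)

def store(record: dict) -> None:
    # Placeholder: swap in a database insert or message-queue publish.
    print(record)

@app.route("/catchpoint-webhook", methods=["POST"])
def receive_test_data():
    payload = request.get_json(force=True)
    # "Transform" step of a simple ETL: keep only the fields we care about.
    summary = payload.get("Summary", {})
    record = {
        "test_id": payload.get("TestId"),
        "timestamp": summary.get("Timestamp"),
        "response_ms": summary.get("Timing", {}).get("Response"),
    }
    store(record)  # "Load" step: hand off to the storage backend
    return "", 200  # acknowledge quickly so the delivery is not treated as failed

if __name__ == "__main__":
    app.run(port=8080)
```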

Whether a user chooses the template offering or simply extracts the raw data, they can use any number of storage and visualization tools for their data analysis. For storage, that can mean either a private server or cloud-based tools like Google’s BigTable or BigQuery, Oracle Data Cloud, Microsoft Azure, AWS, or Heroku. To visualize the data, some of the most common tools are Google Data Studio, Grafana, iDashboards, Power BI, and Geckoboard, all of which can query the databases where the data is stored and build advanced visualizations on top of it.
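
As one concrete example of the storage path, rows shaped like the receiver’s output above could be streamed into BigQuery, where tools such as Data Studio, Grafana, or Power BI can then query them. The project, dataset, and table names below are hypothetical; this is a sketch of the approach, not a required setup.

```python
# A minimal sketch of streaming transformed webhook records into BigQuery.
# The fully-qualified table name and row schema are placeholders, assuming a
# table whose columns match the record fields produced by the receiver above.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials
TABLE_ID = "my-project.catchpoint.test_runs"  # hypothetical project.dataset.table

def load_records(records: list[dict]) -> None:
    errors = client.insert_rows_json(TABLE_ID, records)
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")

# Example usage with one row shaped like the receiver's output:
load_records([{"test_id": 12345, "timestamp": "2017-06-15T12:00:00Z", "response_ms": 312}])
```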

Google’s SRE team uses the Test Data Webhook to extract their performance data in raw JSON format and then performs the ETL process within their own Google Cloud platform, using BigTable to store the data and Data Studio to visualize it. Because this process happens in real time, they can detect performance issues across the multiple digital properties under their control much faster than if they were relying on a REST API.
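
A load step for that kind of real-time pipeline might resemble the sketch below, which writes one record per row into Cloud Bigtable. The row-key scheme, column family, and table names are assumptions made for illustration, not a description of Google’s actual implementation.

```python
# A minimal sketch of the "load" step into Cloud Bigtable, assuming an existing
# instance and table, a "metrics" column family, and a row-key scheme of
# "<test_id>#<timestamp>". This illustrates the approach, not Google's pipeline.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")            # assumed project ID
table = client.instance("perf-data").table("test_runs")   # assumed instance/table

def load_row(record: dict) -> None:
    row_key = f"{record['test_id']}#{record['timestamp']}".encode()
    row = table.direct_row(row_key)
    for field, value in record.items():
        # One column per field under the assumed "metrics" column family.
        row.set_cell("metrics", field, str(value))
    row.commit()

load_row({"test_id": 12345, "timestamp": "2017-06-15T12:00:00Z", "response_ms": 312})
```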

Such was the case when they detected network-level query latency issues in the Google Public DNS service, specifically by identifying the ASNs experiencing the most latency. From there, the SRE team could drill down directly to where the problem was, rather than going back and forth between the ISP that had reported the problem, its customers, and the SRE support team. Ultimately, they were able to detect and fix the problem in just a few minutes, when it ordinarily could have taken close to an hour.


