Blog Post

6 Key Factors to Consider in Your APM Strategy

Published March 15, 2016

Originally featured as a slideshow presentation on ITBusinessEdge.com.

The dynamic nature of IT operations demands agility from its professionals. If your systems don’t keep pace with the industry, your business is bound to fall behind.

This is especially true for application performance monitoring (APM). Your business is only as strong as your application performance, and that performance is determined largely by your monitoring strategy.

So, what do you need to know about monitoring as you take on the new era of digital business?

Here are six key points you need to consider:

Get As Close to Your Users As Possible

The performance and availability of websites, mobile sites, and applications tend to degrade the further the user is from your data center. For this reason, getting the most accurate view of real-world experiences depends on measuring performance and availability as geographically close to users as possible. In other words, you can’t monitor applications from your data center in North America and assume that users in China, Germany, or other far-away regions are having a great experience. Web complexity is largely to blame – the further a user is from the data center, the more elements (CDNs, regional and local ISPs, caching services, and more) can impact the user’s last-mile experience.

Don’t Leave Your Internal Users Out

While customers are the focus of digital transformation, don’t forget the importance of ensuring high-performing, highly available applications for your employees, particularly in remote offices far from the data center. Many of these applications ultimately serve a customer-facing purpose (for instance, a bank teller’s app in a remote branch), and poor performance can hurt your brand’s reputation and diminish worker productivity just as much as a poorly performing customer-facing web app.

Reactive Monitoring Is Dead

Traditional approaches to APM have always emphasized detecting and diagnosing problems. Today, this is no longer good enough. By the time a performance issue has occurred, it’s too late – customers are already taking to social networks to vent their frustration. In addition, increased IT complexity makes it harder than ever to pinpoint the source of problems. As you evolve your APM strategy, consider combining a wealth of historical data with advanced analytics, enabling you to proactively identify growing hot spots and prevent problems from happening in the first place. For example, does a particular application slow down on a certain day, at a certain time of year, or in a particular geography? These analytics must also be able to precisely identify the source of problems. Without this capability, you’ll find yourself drowning in data with no real actionable insights.
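To make the idea concrete, here is a minimal sketch of baseline-driven hot-spot detection: bucket historical response times by (weekday, hour) and flag new samples that deviate far from that slot’s norm. The bucketing scheme, the three-sigma rule, and the sample data are illustrative assumptions, not a description of any particular APM product:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Group historical response times (ms) by (weekday, hour) and
    compute a mean/stdev baseline for each time slot."""
    buckets = {}
    for weekday, hour, latency in history:
        buckets.setdefault((weekday, hour), []).append(latency)
    return {
        key: (mean(vals), stdev(vals) if len(vals) > 1 else 0.0)
        for key, vals in buckets.items()
    }

def is_hot_spot(baseline, weekday, hour, latency, sigmas=3.0):
    """Flag a measurement that exceeds the slot's historical mean by
    more than `sigmas` standard deviations."""
    mu, sd = baseline.get((weekday, hour), (None, None))
    if mu is None:
        return False  # no history for this slot; cannot judge
    return latency > mu + sigmas * max(sd, 1.0)  # floor sd to tame zero-variance slots

# Hypothetical history: Mondays at 09:00 normally run ~200 ms.
history = [("Mon", 9, 195), ("Mon", 9, 205), ("Mon", 9, 200), ("Mon", 9, 198)]
baseline = build_baseline(history)
print(is_hot_spot(baseline, "Mon", 9, 600))  # a 600 ms sample stands out
```

In a real deployment the buckets could also be keyed by geography, matching the “particular geography” question above.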

IT Operational Excellence

IT operational excellence used to mean delivering “good enough” application performance and reliability with the fewest possible resources, pushed to maximum utilization. In the new, customer-centric digital business paradigm, IT operational excellence needs to be redefined: optimizing IT to support the best possible customer experience. For example, in a virtualized environment there may be a certain level of CPU utilization (less than 100 percent) at which application performance begins to suffer. In that case, 100 percent utilization is not the ideal. The advanced analytics capabilities described previously can help organizations discover these thresholds. IT operational excellence – in the context of supporting the best possible customer experience – is a significant benefit of an evolved APM strategy.
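The threshold discovery described above can be sketched as a simple scan over paired (CPU utilization, response time) samples. The 10-point buckets, the 1.5× slowdown criterion, and the sample data are all hypothetical choices for illustration:

```python
from statistics import median

def find_degradation_threshold(samples, slowdown_factor=1.5):
    """Given (cpu_utilization_pct, response_time_ms) samples, return the
    lowest utilization bucket at which median response time exceeds the
    low-load (<50%) median by `slowdown_factor`; None if it never does."""
    baseline = median(rt for cpu, rt in samples if cpu < 50)
    for bucket_start in range(50, 100, 10):
        in_bucket = [rt for cpu, rt in samples
                     if bucket_start <= cpu < bucket_start + 10]
        if in_bucket and median(in_bucket) > slowdown_factor * baseline:
            return bucket_start
    return None

# Hypothetical samples: performance holds until ~80% CPU, then degrades.
samples = [(30, 100), (45, 105), (60, 110), (70, 120), (82, 180), (91, 400)]
print(find_degradation_threshold(samples))  # 80
```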

Pay Attention to Third-Party Services

While third-party services are intended to deliver richer, more satisfying customer experiences that help drive conversions, they can wreak havoc on your business if not properly managed. Some third-party services like marketing analytics are mandatory, but others may not be. There’s no point in offering customers “nice to have” functionality if it prevents them from accessing the web page in the first place. So another key component of modern APM is to parse out various third-party services in real time, see how they’re impacting customers and modify or quickly remove them if necessary. The advanced analytics mentioned earlier can also help you drill down and see when a third-party service is a root cause of a performance issue.
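Parsing out third-party impact can be approximated from resource-timing data. In this sketch the domain names and the 500 ms budget are made up for illustration; in practice the (url, duration) entries would come from the browser’s Resource Timing API or a HAR capture:

```python
from collections import defaultdict
from urllib.parse import urlparse

FIRST_PARTY = "www.example.com"  # assumption: your own domain

def third_party_impact(resources, budget_ms=500):
    """Sum load time per third-party host from (url, duration_ms)
    entries and return the hosts that exceed a latency budget."""
    totals = defaultdict(float)
    for url, duration in resources:
        host = urlparse(url).netloc
        if host != FIRST_PARTY:
            totals[host] += duration
    return {host: t for host, t in totals.items() if t > budget_ms}

# Hypothetical page load: one first-party script, two third-party tags.
resources = [
    ("https://www.example.com/app.js", 120),
    ("https://cdn.analytics-vendor.test/tag.js", 650),
    ("https://widgets.social.test/embed.js", 300),
]
print(third_party_impact(resources))  # {'cdn.analytics-vendor.test': 650.0}
```

A host that blows its budget becomes a candidate for the “modify or quickly remove” decision described above.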

Comprehensive User Monitoring

Synthetic monitoring, which measures website availability and performance by generating synthetic user traffic from cloud resources in various geographies, can provide a measure of peace of mind. You know your website, mobile site, and applications are available, and you can understand load times for users across a wide range of geographies. However, synthetic monitoring does not tell the whole story, because it does not show what actual users are doing – and what they’re experiencing – within the site or application, especially for infrequent actions or paths.
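A synthetic probe is, at its core, a timed fetch from a remote vantage point. This minimal sketch uses only the Python standard library; the injectable `fetch` parameter is an assumption added so the probe can be exercised without network access:

```python
import time
from urllib.request import urlopen

def synthetic_check(url, timeout=10, fetch=urlopen):
    """One synthetic probe: fetch the URL and record availability and
    load time. Running this from cloud nodes in several regions
    approximates the geographic coverage described in the text."""
    start = time.monotonic()
    available = False
    try:
        with fetch(url, timeout=timeout) as resp:
            available = 200 <= resp.status < 400
    except Exception:
        pass  # DNS failure, timeout, connection reset: site is unavailable
    load_time_ms = (time.monotonic() - start) * 1000
    return {"url": url, "available": available, "load_time_ms": load_time_ms}

# Usage (hits the network): synthetic_check("https://www.example.com/")
```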

Real-user monitoring can supplement this view by helping you understand your customers’ most common landing pages and conversion paths, and what parts of your site must be prioritized for optimization. However, it can be a mistake to rely on real-user monitoring alone, as it doesn’t provide the most comprehensive, accurate picture of web page and application response time.

Combining synthetic and real-user monitoring is the best way to truly understand availability and performance, and to identify areas for optimization – both in terms of user geographies and website real estate.
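One simple way to combine the two data sources is to compare synthetic and real-user load times per region and flag where they diverge. The region names, the numbers, and the 2× gap factor below are illustrative assumptions:

```python
def reconcile(synthetic_ms, rum_ms, gap_factor=2.0):
    """Compare per-region median load times from synthetic probes and
    real-user monitoring; flag regions where real users are much slower
    than synthetic tests suggest (last-mile or device issues that
    synthetic traffic from a cloud node can miss)."""
    flagged = {}
    for region, rum in rum_ms.items():
        syn = synthetic_ms.get(region)
        if syn is not None and rum > gap_factor * syn:
            flagged[region] = (syn, rum)
    return flagged

# Hypothetical medians (ms): synthetic looks fine everywhere, but real
# users in eu-west are far slower than the probes indicate.
synthetic_ms = {"us-east": 180, "eu-west": 220, "ap-south": 260}
rum_ms = {"us-east": 210, "eu-west": 600, "ap-south": 280}
print(reconcile(synthetic_ms, rum_ms))  # {'eu-west': (220, 600)}
```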

