Top 10 End-User Experience Monitoring Trends

Published December 26, 2017

It was a big year for end-user experience monitoring in 2017, with more businesses baselining, detecting, identifying, escalating, and fixing performance issues that can disrupt their customers’ or end users’ experience. With the rise of high-profile performance issues such as Amazon’s $150 million S3 issue or Macy’s Black Friday and Cyber Monday “mini-outages,” the negative impact goes beyond revenue to damaged reputation and loyalty.

According to Forrester, 40% of consumers have a high willingness and ability to shift spend, with an additional 25% building that mindset. Customers can reward or punish companies based on a single experience – a single moment in time. It’s no wonder that, according to one study in Forbes, 75% of companies said their number one objective was to improve customer experience.

Here’s my take on what this means for end-user experience monitoring in 2018:

1. Reset of the Customer-Centric CIO

The difference between the customer-centric CIO of 2007 and the customer-centric CIO of today can be summed up in one word: digitalization. Cloud and mobility are accelerating business disruption at an unprecedented pace, where 70% of the top 10 global companies are new. Companies like Amazon, Southwest Air, Apple, Disney, TD Bank, and others are fanatically focusing on customers and raising the stakes: according to Walker Information’s Customers 2020 report, customer experience will overtake price and product as the key brand differentiator by 2020.

In 2018, CIOs will embed customer-centricity into their organization’s DNA, dramatically shifting IT’s mindset from internal management to delivering amazing customer experiences. CIOs will (and must) become change leaders, building customer-centric IT capabilities to attain far broader business objectives such as customer experience and revenue. This means both IT innovation and renovation initiatives will be geared towards and measured against these outcomes, expanding end-user experience monitoring into the “central nervous system” of digital performance management.

2. Rise of Modern Synthetic Monitoring Technology

Synthetic monitoring technology is as old as the World Wide Web. However, its role in end-user experience monitoring has never been more important. According to a Research and Markets study, the enterprise synthetic application monitoring tool market is expected to grow to $2.1 billion by 2021, or over 18% annually, significantly faster than code-level or infrastructure monitoring technology.

In 2018, the critical need to simulate users’ interactions with increasingly complex digital services running on increasingly dynamic, distributed, and heterogeneous environments is spurring the rise of modern synthetic monitoring technology. But the reasons for this comeback are more about what it can do for end-user experience monitoring than its “website monitoring” ancestor. Modern synthetic monitoring technology can (see the sketch after this list):

  1. Proactively identify performance issues before customers or users are impacted
  2. Test almost any element traversing the internet, including third-party services and network protocols
  3. Analyze multiple factors affecting speed, availability, and reliability in real time at painstaking granularity, and automatically guide troubleshooting and diagnosis
  4. Eliminate the “noise” and false alerts associated with older synthetic technology
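As a rough illustration of the first three capabilities, here is a minimal synthetic-check sketch in Python. The target URL, paths, and performance budget are assumptions for the example, not anything a particular product prescribes; a real synthetic agent would run a script like this on a schedule from many locations and networks.

```python
# Minimal synthetic-check sketch: script a simple user journey,
# time each step, and flag slow or failed responses proactively.
# The URL, paths, and threshold below are illustrative assumptions.
import time
import requests

BASE_URL = "https://www.example.com"   # hypothetical target site
SLOW_THRESHOLD_SECONDS = 2.0           # assumed performance budget per step

def run_check() -> dict:
    """Simulate a short user journey: load the home page, then run a search."""
    results = {}
    for name, path in [("home", "/"), ("search", "/search?q=shoes")]:
        start = time.monotonic()
        resp = requests.get(BASE_URL + path, timeout=10)
        elapsed = time.monotonic() - start
        results[name] = {
            "status": resp.status_code,
            "seconds": round(elapsed, 3),
            "ok": resp.ok and elapsed < SLOW_THRESHOLD_SECONDS,
        }
    return results

if __name__ == "__main__":
    for step, data in run_check().items():
        print(step, data)
```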

Further reading: 5 Reasons Synthetic Monitoring is More Important than Ever

3. SaaS Monitoring Gains Big Traction

SaaS adoption is becoming a business imperative: “if you aren’t using SaaS broadly, your business risks falling behind,” reads the title of a recent Forrester report. In fact, a recent study conducted by IDG Research shows that 90% of all organizations today either have apps running in the cloud or are planning to use cloud apps within 12 months. But moving to SaaS does not relieve IT from delivering business value. Just because you didn’t monitor your hosted Exchange before does not mean you don’t need to monitor Office 365 now. While you’re no longer on the hook for code maintenance, who do your users call when your Office 365 service is hindering business productivity in your San Francisco office, or your Salesforce service experiences mini-outages in some of your call centers? Adding to this storm, over 90% of the delivery paths of SaaS services are beyond your firewall and outside of your control.

For 2018: business demand to monitor users’ experience of SaaS applications will become an end-user experience monitoring imperative. Monitoring your SaaS providers’ speed, availability, and reliability in your physical locations, or wherever employees are located, will be the new normal. This includes telemetry and analytics to drill down and troubleshoot the many moving parts that can degrade SaaS performance and availability, including end-to-end path visibility that starts with the user, travels through the network, and ends at the application. As an aside, this is where modern synthetic monitoring technology has a huge advantage. While APM and other traditional monitoring technologies can only monitor systems within your infrastructure, modern synthetic monitoring technology can see how your SaaS apps are performing through your users’ lens, almost always before there is a widespread impact.
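To make the “monitor from your physical locations” idea concrete, here is a minimal sketch of an office-based SaaS probe, assuming one small runner deployed per location; the endpoints, the use of a hostname as a location tag, and the timeout are all illustrative.

```python
# Sketch of a per-location SaaS availability/latency probe.
# Endpoints, the location tag, and the timeout are illustrative assumptions;
# deploy one runner per office so results reflect what employees there see.
import socket
import time
import requests

LOCATION = socket.gethostname()      # stand-in for a real location label
SAAS_ENDPOINTS = {                   # assumed public login/landing URLs
    "office365": "https://outlook.office365.com",
    "salesforce": "https://login.salesforce.com",
}

def probe(url: str) -> dict:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=10)
        return {"reachable": True, "status": resp.status_code,
                "seconds": round(time.monotonic() - start, 3)}
    except requests.RequestException as exc:
        return {"reachable": False, "error": type(exc).__name__,
                "seconds": round(time.monotonic() - start, 3)}

if __name__ == "__main__":
    for name, url in SAAS_ENDPOINTS.items():
        print(LOCATION, name, probe(url))
```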

Further reading:  State of SaaS Report

4. RUM and Synthetic Unite!

The problem is that the dominant discussion about these two technologies has been largely binary: either synthetic monitoring or RUM, synthetic monitoring vs. RUM. But according to Gartner’s Innovation Insight for Digital Experience Monitoring report, “Traditionally, the various end-user experience monitoring data ingestion mechanisms have been deployed separately from one another and sometimes heated arguments have been had as to which mechanism is the most optimal. The truth is that each ingestion mechanism has something to contribute to the observation and understanding of how users, customers, and others interact with an enterprise application portfolio.”

The combination of synthetic monitoring and RUM will gain traction in 2018 as businesses learn how the two complement each other in their end-user experience monitoring strategy. Most notably, synthetic monitoring allows you to simulate and test real-world interactions of your users, helping you preemptively drill down into causal factors instead of waiting for shopping carts to be abandoned. At the same time, RUM lets you see how your website responds to actual users, helping you validate and/or tune your synthetic test results and telemetry.
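One simple way to operationalize that complement is to compare what real users report against the synthetic baseline; the sketch below uses made-up RUM samples, a made-up baseline, and an arbitrary 25% tolerance purely for illustration.

```python
# Illustrative comparison of RUM page-load samples against a synthetic baseline.
# The sample values, baseline, and 25% tolerance are assumptions for the example.
from statistics import median

synthetic_baseline_ms = 1200                       # median load time from synthetic tests
rum_page_loads_ms = [950, 1100, 1400, 2300, 1250,  # load times reported by real users
                     1800, 1020, 990, 3100, 1150]

rum_sorted = sorted(rum_page_loads_ms)
p50 = median(rum_sorted)
p95 = rum_sorted[int(0.95 * (len(rum_sorted) - 1))]  # simple nearest-rank p95

if p50 > synthetic_baseline_ms * 1.25:
    print(f"RUM median {p50:.0f} ms diverges from the synthetic baseline "
          f"({synthetic_baseline_ms} ms): revisit test scripts or real-user paths")
else:
    print(f"RUM median {p50:.0f} ms is consistent with the baseline; p95 is {p95} ms")
```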

Further reading: Observe All That Matters

5. Monitoring the API Economy

Forbes called 2017 the Year of the API Economy, with shifts occurring in how APIs are consumed, integrated into platforms, and enriched with greater potential to provide contextual intelligence for customers. According to API University’s directory, there are over 14,000 public APIs that can be used to deliver new workflows, products, and business models such as omnichannel selling. As APIs gain traction as the “enabler of high-speed digital business innovation and renovation,” the complexity of orchestrating them in real time with third-party data aggregators, mobile service providers, social media, and so on increases.

With the rise of modern end-user experience monitoring and the demands of digital business, modern API monitoring will multiply in 2018. While frameworks like Flask and Express enable developing APIs in minutes, monitoring third-party web services is another matter. Triggering alerts on API performance degradation and SLA breaches will become table stakes in monitoring the user’s experience; businesses will bulk up on API monitoring capabilities that give them the granular analytics to determine which web service (yours or the third party’s) is causing the performance issues before they degrade user experience and hurt business outcomes.
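A bare-bones version of that alerting logic might look like the sketch below; the endpoint, latency budget, and alert action are assumptions, and a real deployment would run this on a schedule from multiple vantage points.

```python
# Bare-bones API health check with latency-budget and error alerting.
# The URL, budget, and alert action are illustrative assumptions.
import time
import requests

API_URL = "https://api.example.com/v1/orders"   # hypothetical third-party endpoint
LATENCY_BUDGET_MS = 500                         # assumed SLA-style latency target

def alert(message: str) -> None:
    # In practice this would page on-call or open an incident;
    # printing keeps the sketch self-contained.
    print("ALERT:", message)

def check_api() -> None:
    start = time.monotonic()
    try:
        resp = requests.get(API_URL, timeout=5)
        latency_ms = (time.monotonic() - start) * 1000
        if resp.status_code >= 500:
            alert(f"API returned {resp.status_code}")
        elif latency_ms > LATENCY_BUDGET_MS:
            alert(f"latency {latency_ms:.0f} ms exceeds the {LATENCY_BUDGET_MS} ms budget")
    except requests.RequestException as exc:
        alert(f"API unreachable: {exc}")

if __name__ == "__main__":
    check_api()
```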

Further reading: Web Performance 101: Monitoring APIs

6. SLA Management Comes of Age

Digitalization, cloud, and mobility (and IoT is around the corner) are bringing a torrent of third-party service dependencies such as SaaS, DNS, CDN, and even APIs, shifting the nature of end-user experience monitoring from managing monolithic applications to governing these services in the context of the end-user experience. For example, when your application is experiencing micro-outages, do you have the monitoring telemetry to identify the problem as a specific third-party service and NOT your web page? How do you effectively measure the performance of third-party services and implement those measurements in SLAs? Do you have the ability to promptly alert and accurately report on SLA compliance?

The cost of an external or internal SLA breach includes lost revenue, lost productivity, and legal penalties; add any loss of brand goodwill and return customers, and total costs can easily reach into the millions. For 2018, demand for modern end-user experience monitoring tooling that includes comprehensive third-party provider instrumentation will rise, but that is only the first step in tackling modern SLA management. IT ops leaders will also build the requisite skills and processes to hold third-party providers accountable when they breach your service level agreements (SLAs).
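The measurement side can start from arithmetic as simple as the sketch below, computed over check results collected for the billing period; the 99.9% target, the one-check-per-minute cadence, and the failure count are made up for illustration.

```python
# Sketch of SLA availability arithmetic over a month of check results.
# The 99.9% target, the check cadence, and the failure count are illustrative.
SLA_TARGET = 99.9        # percent availability promised in the agreement

total_checks = 43_200    # e.g., one check per minute for 30 days
failed_checks = 65       # checks that errored or timed out

availability = 100 * (total_checks - failed_checks) / total_checks
downtime_minutes = failed_checks   # at one check per minute

print(f"Measured availability: {availability:.3f}% "
      f"(~{downtime_minutes} minutes of downtime)")
if availability < SLA_TARGET:
    print(f"SLA breach: below the {SLA_TARGET}% target; gather evidence for the provider")
```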

Further reading: A Practical Guide to SLAs; 3 Tips For Informed SLA Management

7. EUEM Overshadows APM

APM was supposed to be the darling of end-user experience monitoring until everything started moving to the cloud (SaaS, APIs, network paths, microservices, etc.). APM solutions are designed to monitor application services where they have direct access to the code; they are not built to monitor services where the majority of the service infrastructure lies outside the IT periphery. And according to Forbes, “While current monitoring tools typically rely on application performance monitoring (APM), these metrics aren’t dynamic or granular enough to provide line-of-business value…to understand software issues experienced by end users.”

The adoption of modern end-user or digital experience monitoring (DEM) tools will surge in 2018 as more digital services move outside firewalls and APM becomes less tenable in terms of cost. Modern end-user experience monitoring solutions are specifically designed to monitor performance and availability from the user’s perspective, filling the growing gap (due to the cloud) left by traditional application discovery, tracing, and diagnostics. Through active and passive monitoring of the service infrastructure outside the application code, businesses can “deep dive” into troubleshooting and root-cause analysis with granular visibility into issues that impact user experience and business outcomes.

Further reading: [Closing the End-User Experience Gap in APM](https://pages.catchpoint.com/rs/005-RHC-551/images/Catchpoint Systems - Gartner - Closing the End User Experience Gap in APM.pdf); Closing Costly Visibility Gaps in Application Performance Management

8. The “New Network” Monitoring

Relying on the Internet and cloud to deliver applications significantly changes user traffic patterns. Traditional network traffic, which was generated by an end user accessing a centralized data center, is now generated by an extremely diverse set of network protocols going to and from a diverse set of locations where data is accessed. And poor-performing network protocols or DNS can provoke widespread end-user experience dissatisfaction and negatively impact business outcomes. Unfortunately, according to Gartner, “the vast majority of (traditional packet and flow) network monitoring technologies deployed today leave significant visibility gaps.”

For 2018, end-user experience monitoring strategies that include monitoring cloud-centric network elements will gain solid traction, especially for public cloud or SaaS environments and the monitoring of non-office-based user traffic. As more business moves to the cloud and user mobility increases, businesses will augment their user or digital experience monitoring instrumentation to effectively troubleshoot new network elements such as route health, Border Gateway Protocol (BGP), TCP connectivity, DNS resolution, IPv4, IPv6, and Network Time Protocol (NTP).
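For a sense of what this telemetry looks like at its simplest, the sketch below times DNS resolution and TCP connection setup for a hypothetical SaaS hostname; real network-layer monitoring adds BGP, path tracing, IPv6, NTP checks, and much more.

```python
# Minimal DNS-resolution and TCP-connect timing sketch.
# The hostname is an illustrative assumption; real tooling covers far more
# network elements (BGP, route health, IPv6, NTP, and so on).
import socket
import time

HOSTNAME = "login.salesforce.com"   # assumed SaaS hostname
PORT = 443

def time_dns(hostname: str):
    """Return (resolved IPv4 address, resolution time in ms)."""
    start = time.monotonic()
    ip = socket.gethostbyname(hostname)
    return ip, (time.monotonic() - start) * 1000

def time_tcp_connect(ip: str, port: int) -> float:
    """Return TCP connection-setup time in ms."""
    start = time.monotonic()
    with socket.create_connection((ip, port), timeout=5):
        pass
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    ip, dns_ms = time_dns(HOSTNAME)
    connect_ms = time_tcp_connect(ip, PORT)
    print(f"DNS {HOSTNAME} -> {ip}: {dns_ms:.1f} ms; TCP connect: {connect_ms:.1f} ms")
```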

Further reading: Is It Time to Rethink Your Network Monitoring Strategy?; Troubleshooting Network Protocols in a Complex Digital Environment; The Network’s Impact on End-User Experience

9. AI: It’s the Data, S&^$!

This is not just another tabloid opinion on how AI (Artificial Intelligence) is redefining monitoring. Yes, the use of AI is gaining traction in nearly every area of IT operations, where Gartner forecasts that by 2019, “25% of global enterprises will have strategically implemented an AIOps platform supporting two or more major IT operations functions.” AIOps platforms such as Sumo Logic and Splunk use AI to discover patterns in very large data sets from log files, service desk systems, and, increasingly, various monitoring tools.

By contrast, the use of AI for end-user experience monitoring in 2018 is more about the quality of the data ingested, since modern synthetic monitoring is itself a predictive technology, using “robots” to simulate users’ interactions (including the location and network from which they access your services) to identify potential issues before your users are widely disrupted. Bad and/or noisy data (often found in legacy “web monitoring” systems) means a deluge of false positives, false negatives, and endless war room hours and finger-pointing. Add the increasing complexity of digital services running on increasingly dynamic, distributed, and heterogeneous environments, and it’s easy to see why AI for end-user experience monitoring is more about the hard stuff: the data.
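One concrete place the data problem shows up is alert noise. A common mitigation, sketched below with made-up location results and an assumed quorum, is to alert only when a check fails from multiple locations rather than on any single failure.

```python
# Sketch of a simple noise-reduction rule: page someone only when a check
# fails from multiple locations in the same run, not on a single noisy node.
# Location names, results, and the quorum value are illustrative assumptions.
run_results = {
    "new_york":      {"ok": False},   # one flaky vantage point
    "london":        {"ok": True},
    "tokyo":         {"ok": True},
    "san_francisco": {"ok": True},
}

MIN_FAILING_LOCATIONS = 2   # assumed quorum before raising an alert

failing = [loc for loc, result in run_results.items() if not result["ok"]]
if len(failing) >= MIN_FAILING_LOCATIONS:
    print("ALERT: widespread failure from", ", ".join(failing))
else:
    print("No alert: failures limited to", ", ".join(failing) or "none")
```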

Further reading: Actionable Insights with Guided Intelligence; Reducing MTTR

10. The Amazon Effect

The new normal for customer experience is digital anything, instant everything. The Amazon effect is causing customers (and increasingly employees) to expect the same experience regardless of what they buy, even healthcare. According to Ingrid Lindberg, president of loyalty marketing and customer experience consultancy at Kobie Marketing and former chief experience officer at Cigna (CI), “Consumers are not comparing their experience between health care providers or insurance companies. Instead, they’re measuring customer experience everywhere they go. In effect, the experience at CVS and Aetna is being compared to that of Zappos, Marriott (MAR) and Nordstrom (JWN).”

The Amazon Effect has everything to do with modern end-user experience monitoring in 2018 as IT ops fundamentally shifts from an inward mindset to maniacally focusing on delivering successful customer (or, in the case of internal services, employee) experiences. Customer-centric CIOs will dispel the “I don’t need:

  • modern end-user experience monitoring. I have APM and infrastructure monitoring.”
  • synthetic monitoring. I have RUM. I don’t need RUM. I have synthetic monitoring.”
  • to monitor my user’s experience of my SaaS providers. I have a guaranteed SLA from them.”
  • to monitor the new network. I use packet and flow monitoring.”

and so on. And if there is still doubt about the importance of modern end-user experience monitoring, Gartner has some sobering insights for 2020: 50% of CEOs say their industries will be digitally transformed; more than 50% of enterprises will replace core IT operations management tools entirely; and 30% of global enterprises will have strategically implemented end-user or digital experience monitoring technologies (up from fewer than 5% today).
