
Are Algorithms Better Than Humans?

The rise of big data and a desire to measure everything have led to an environment suited for the use of algorithms. But who should make the final decision?

We make decisions on a daily basis; sometimes those decisions are good, and sometimes they aren’t. Decisions can be hasty and impulsive, or made after vast amounts of data have been analyzed. This is part of being human, and hopefully, we learn from the decisions we make.

We often look to others, and more recently to computers and their algorithms, to help us make decisions. When you are trying to find a book to read, Goodreads can provide suggestions based on what you have read previously and on currently trending titles. Netflix helps you decide what to watch on TV by providing recommendations based on your viewing history. There was even a recently trending tweet about two MIT grads who built an algorithm to pair you with the perfect wine. Considering the sheer volume of decisions we humans have to make each day, sometimes it's best to leave the data analysis and research up to someone, or something, else.

In Thinking, Fast and Slow, Daniel Kahneman describes our two systems of thought:

System 1 operates automatically and quickly, with little or no effort. Examples include computing 1+1, understanding simple sentences, and shifting attention to a sudden sound.

System 2 is reserved for activities that demand more attention and often require complex computations. Examples include filling out tax forms, comparing two items to determine the best deal, and searching for somebody in a crowd.

Most of what we think and do takes place in System 1; System 2 takes over when things get difficult. This is where algorithms can help: it doesn't make sense to use an algorithm for simple tasks that require very little effort. But the question remains: when should an algorithm be used instead of human analysis?

An algorithm is a set of rules to be followed when solving a problem. Algorithms take in data and perform calculations to find an answer. The calculations can be simple or complex, but they should deliver the correct answer efficiently. What good is an algorithm if it takes longer than a human would to analyze the data, or if it provides incorrect information?
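
To make that definition concrete, here is a minimal sketch in Python (the example is ours, not from any product mentioned in this post): binary search follows a fixed set of rules and, on sorted data, finds an answer in a handful of steps where a naive scan might have to check every item.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2      # rule: always check the middle item
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1            # rule: discard the lower half
        else:
            high = mid - 1           # rule: discard the upper half
    return -1

# One million items: at most ~20 comparisons instead of up to 1,000,000.
print(binary_search(list(range(1_000_000)), 742_519))
```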

The rise of big data and a desire to measure everything have led to an environment perfectly suited for the use of algorithms. The data we collect is meaningless by itself; we need to derive insights and drive action from it. According to Peter Sondergaard of Gartner, "Data is inherently dumb. Algorithms are where the real value lies. Algorithms define action." Knowing when to use an algorithm and when to rely on humans is critical.

Algorithms should be used when they benefit both the organization and the individual. Using algorithms to perform basic tasks that our System 1 thinking handles instantly doesn't make sense; using them to augment activities reserved for System 2 provides value to both the individual and the organization, because the number of steps needed to perform an action is reduced. Cutting the time required to find an answer and take action is a win/win.

When analyzing data, there are times when an algorithm is useful and times when it is best to rely on human insight and analysis. Algorithms can't be trusted to always make the right decision; sometimes human analysis is needed to confirm the results and see things a computer doesn't. Do you remember the $23 million book about flies being sold on Amazon? Two competing sellers were each using an algorithm to automatically set the price of the book based on the other's price. This set off an ever-escalating price spiral until somebody noticed and the price returned to normal.
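
The feedback loop is easy to reproduce. Analyses of the incident reported that one seller priced at roughly 0.9983 times its competitor while the other priced at roughly 1.270589 times its competitor; the starting price and the once-a-day repricing cadence below are our assumptions for illustration, but the exponential runaway is the point:

```python
# Multipliers reported in analyses of the incident; the starting price
# and daily cadence are assumptions for illustration.
A_FACTOR = 0.9983     # seller A undercuts seller B very slightly
B_FACTOR = 1.270589   # seller B prices well above seller A

price_a = price_b = 35.00
for day in range(1, 100):
    price_a = round(A_FACTOR * price_b, 2)   # A reprices against B
    price_b = round(B_FACTOR * price_a, 2)   # B reprices against A
    if price_b > 23_000_000:                 # past the infamous peak
        print(f"Day {day}: ${price_b:,.2f} -- runaway feedback loop")
        break
```

Each round of repricing multiplies the price by about 1.27, so it only takes a couple of months of unattended repricing to climb from $35 to $23 million. Neither rule is unreasonable on its own; the failure emerges from the interaction, which is exactly what a human reviewer would have caught.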

Identifying where algorithms are useful and where analysis is better left to humans is critical as more organizations look to automate tasks. Organizations rely on web performance data to make decisions on a daily basis. In some instances algorithms help, sometimes they are flawed, and sometimes you need human insight to make the call. Below are three scenarios highlighting the good, the not so good, and the not ideal.

The good: Breaking data down into meaningful categories can take hours, if not days, when all the information from disparate systems is analyzed. The value isn't in categorizing the performance and event data but in the insights that can be gleaned from it. Automatically identifying outliers and anomalies and grouping similar data, as sketched below, can dramatically reduce the time humans need to spend on analysis.
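
As an illustration (the data and function name are hypothetical), a few lines of Python using the robust median-absolute-deviation statistic can surface the kind of outlier a human would otherwise have to hunt for by eye:

```python
import statistics

def flag_anomalies(samples, threshold=3.5):
    """Flag points far from the median, using the robust MAD statistic
    (median absolute deviation) so one extreme value cannot hide itself
    by inflating the mean and standard deviation."""
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    if mad == 0:
        return []
    # 0.6745 rescales MAD so the score is comparable to a z-score.
    return [x for x in samples if 0.6745 * abs(x - med) / mad > threshold]

# Hypothetical page-load times in milliseconds; 9800 is the anomaly.
load_times = [410, 395, 430, 405, 9800, 420, 415, 400, 390, 425]
print(flag_anomalies(load_times))  # -> [9800]
```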

The not so good: Nobody likes to receive an alert that something is broken. What's worse is investigating an alert only to find there isn't really a problem. Using the wrong alert logic, or lacking reliable data to give alerts sufficient context, can result in hours of lost time. It doesn't take long for alert fatigue to set in when teams chase false positives while real issues slip through as false negatives.
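
One common way to reduce false positives, sketched below with hypothetical numbers and names, is to require several consecutive threshold breaches before firing an alert, so a single noisy sample doesn't page anyone at 3 a.m.:

```python
def should_alert(samples, threshold_ms, min_consecutive=3):
    """Alert only after min_consecutive breaches in a row, so one
    noisy measurement is not mistaken for an outage."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold_ms else 0
        if streak >= min_consecutive:
            return True
    return False

noisy_blip   = [420, 430, 2900, 410, 425, 415]      # one bad sample
real_problem = [420, 2900, 3100, 3300, 3200, 3400]  # sustained breach

print(should_alert(noisy_blip, threshold_ms=1000))    # False
print(should_alert(real_problem, threshold_ms=1000))  # True
```

The trade-off is latency: requiring more consecutive breaches suppresses noise but delays notification of a real incident, which is why the thresholds themselves still need human judgment.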

The not ideal: When building a new feature, it is important to consider how the user experience may be affected from both a performance and a functionality perspective. Data can determine whether the feature will improve or hurt page load times, but just because something improves page load times doesn't mean it is good for the business. MSN shared a number of performance case studies on the effect various changes had on performance metrics. Delaying the load of ads improved performance metrics and increased the number of page views. The downside was that it drastically hurt the ad business; a net loss is bad for business, so the ads were not delayed. Weighing multiple factors and examining the gray areas required humans to look at the bigger picture.

Algorithms are only as good as the data they receive: if the data is flawed, the insights and information extracted from it will be flawed too. Humans and computers both make mistakes. According to a recent Bloomberg article, "algorithms can be as flawed as the humans they replace, and the more data they use, the more opportunities arise for those flaws to emerge." Decisions need to be based on clean, meaningful data that is analyzed by both algorithms and humans.

The more insights obtained from the data, the greater the opportunity to meet business goals. Using algorithms to automate certain tasks makes sense, but not all tasks can be automated. The advances algorithms have enabled can be amazing and deliver tremendous value to businesses and individuals, but human oversight is necessary to ensure that algorithms don't run amok.
