I’ve been asked repeatedly over the last twenty years, “what’s a good click-through rate, bounce rate, time on site, etc.?” and more recently, “what’s a good IVT (invalid traffic) rate?” My answer has always been that industry-wide digital advertising benchmarks are meaningless because they are not actionable, even though it is very tempting to compare yourself to them. Let me explain why.
Comparing Apples to Oranges
Let’s take simple things like bounce rates, time on site, and number of pages per session; all of these are readily available in Google Analytics. A “good” bounce rate, time on site, or number of pages per session is entirely different depending on the nature of the site or the industry vertical to which it belongs. For example, on a medical journal site, doctors come to read one article to get the answer they need at that moment, and then they leave. The number of pages for that session may be one and the time on site relatively low, because they found their answer and left. On a travel booking site, the user may search for hotels and then click through to check out a few of the search results. Their time on site might be much higher, and the number of pages much higher too: they were doing research and trying to book a hotel, so they looked at dozens of pages. The benchmarks for a medical journal site are entirely different from those of a travel booking site. Any comparison would be apples to oranges. And this is why industry-wide benchmarks are meaningless and not applicable to your specific situation.
Use Your Own Organic Traffic As Benchmark
Instead of comparing your analytics to industry-wide benchmarks, look at your own organic traffic. I recommend using its characteristics as the benchmarks for judging the efficacy of your paid digital marketing campaigns. Users who searched for something on Google and then clicked through on an organic search result probably wanted to be there. So the time on site, bounce rate, and pages per session of those visitors serve as good benchmarks for what real, human visitors look like and do on your site. Do your paid marketing campaigns over-index, match, or under-index these benchmarks? If the paid campaigns cannot deliver users at least comparable to your own organic traffic, it may be wise to spend less on those campaigns. But if your paid campaigns deliver users who bounce less, stay longer, and look at more pages than your organic benchmarks, then add budget to those campaigns so you get more of the right kind of user to your site.
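To make the comparison concrete, here is a minimal sketch of that indexing exercise. All numbers and field names are made up for illustration; plug in whatever your own analytics reports.

```python
# Hypothetical sketch: indexing a paid campaign's metrics against your
# own organic-traffic benchmarks. All values are illustrative.

# Organic-traffic benchmarks pulled from your analytics
organic = {"bounce_rate": 0.45, "avg_session_sec": 180, "pages_per_session": 3.2}

# Metrics for one paid campaign
paid = {"bounce_rate": 0.62, "avg_session_sec": 95, "pages_per_session": 1.4}

def index_vs_organic(paid_metrics, organic_metrics):
    """Return the paid/organic ratio per metric. For bounce rate,
    lower is better, so that ratio is inverted; an index above 1.0
    means the campaign over-indexes versus your organic benchmark."""
    indices = {}
    for metric, benchmark in organic_metrics.items():
        ratio = paid_metrics[metric] / benchmark
        if metric == "bounce_rate":
            ratio = 1 / ratio  # invert: lower bounce rate is better
        indices[metric] = round(ratio, 2)
    return indices

for metric, idx in index_vs_organic(paid, organic).items():
    verdict = "over-indexes" if idx > 1.0 else "under-indexes"
    print(f"{metric}: {idx} ({verdict})")
```

With these illustrative numbers every index comes out below 1.0, i.e. the campaign under-indexes on all three metrics, which is the "spend less" signal described above.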
What’s A Good Bot Benchmark? Zero, If Possible
When it comes to IVT and fraud, a good benchmark is zero fraud and bots. Of course, that’s not always possible. So it is important to measure for bots and take steps to reduce the invalid traffic. You can only do this if you know how much of your traffic is bots and which bots they are. Google Analytics does not tell you this. FouAnalytics does. Bots are designated as follows: 1) dark red means malicious bots, 2) orange means declared bots – ones that honestly state their name in the user agent/browser name, and 3) yellow means search engine bots and crawlers.
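For illustration only – FouAnalytics’ actual detection logic is not public – here is a sketch of the easiest slice of that classification: spotting declared bots and search crawlers from the user-agent string. The token lists are hypothetical and incomplete; the key point is that malicious (dark red) bots spoof real browser strings, so they cannot be caught this way and require behavioral detection instead.

```python
# Illustrative sketch only: real detection logic is far more involved.
# These token lists are hypothetical and deliberately minimal.

SEARCH_CRAWLER_TOKENS = ("googlebot", "bingbot", "duckduckbot", "yandexbot")
DECLARED_BOT_TOKENS = ("bot", "spider", "crawler", "headless")

def classify_user_agent(ua: str) -> str:
    """Map a user-agent string to the color scheme described above."""
    ua_lower = ua.lower()
    if any(token in ua_lower for token in SEARCH_CRAWLER_TOKENS):
        return "yellow"   # search engine bots and crawlers
    if any(token in ua_lower for token in DECLARED_BOT_TOKENS):
        return "orange"   # declared bots -- honest about their name
    # A clean-looking user agent proves nothing: malicious (dark red)
    # bots spoof real browser strings, so they can't be caught here.
    return "unclassified"

print(classify_user_agent("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(classify_user_agent("MyScraper-bot/1.0"))
print(classify_user_agent("Mozilla/5.0 (Windows NT 10.0) Chrome/120"))
```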
When we look at billions of page views over long periods of time and see the kind of consistency shown in the #FouAnalytics chart below, we can use it as a benchmark for what good publishers look like. There is a lot of dark blue – which means humans – and very little dark red (bots). There is some yellow, but that is desirable because publishers need search engine crawlers to index their sites. Note that the amount of yellow remains low over time.
Good publishers don’t have rampant amounts of bots on their sites. This is because they have large human audiences, they don’t buy traffic, and fraud bots won’t waste their time causing pages to load on these sites because they don’t get paid. Fraud bots will load pages on long tail sites that pay them for traffic. The example above is just one good, mainstream publisher. Dozens and dozens more look similar to this. Note that we measure for humans (dark blue) in addition to measuring for bots. Other current fraud verification tech platforms only measure for IVT and give you a number. Those are useless because they are not actionable. You can’t do anything with just a percentage IVT number as many site owners have told me.
On-Site Measurement vs In-Ad Measurement
Another shortcoming of existing IVT detection vendors is that they don’t differentiate between on-site measurement (code on the page) and in-ad measurement (code in the foreign ad iframe). Without going into technical details here, have a look at the following chart from 2016, when we did one of the first industry-wide studies. Note how different the ad networks’ colors and percentages look compared to the publishers’. The ad networks were measured with an in-ad tag; in other words, ads were tagged and displayed across thousands of sites in those ad networks. In contrast, the publishers were tagged with on-site tags, where the code was installed on the pages just like Google Analytics.
You Must Measure for Humans, In Addition to Bots / IVT
Measuring for humans (dark blue) is important: in the chart above, it is clearly visible that some ad networks had more humans than others, and some had none. Measuring only for bots might give you a number like 10% IVT. Most people assume the other 90% is human. But as you can see from the chart, that remainder can include many things other than humans – not measurable, unknown, etc. So if you don’t positively measure for humans, in addition to detecting bots, you’re “missing out,” as it were.
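The arithmetic behind that point is worth spelling out. The category names and percentages below are hypothetical, not real FouAnalytics output; they just show why “10% IVT” does not imply 90% human.

```python
# Illustrative arithmetic: why "10% IVT" does not imply 90% human.
# Category names and values are hypothetical.

page_views = {
    "human": 0.55,           # positively measured as human (dark blue)
    "bots_ivt": 0.10,        # detected bots / IVT
    "not_measurable": 0.20,  # measurement code blocked or didn't run
    "unknown": 0.15,         # neither confirmed human nor confirmed bot
}
assert abs(sum(page_views.values()) - 1.0) < 1e-9  # shares sum to 100%

implied_human = 1.0 - page_views["bots_ivt"]  # the naive assumption
measured_human = page_views["human"]          # what positive measurement shows

print(f"Naive 'everything but IVT is human': {implied_human:.0%}")
print(f"Positively measured human: {measured_human:.0%}")
```

In this made-up breakdown the naive assumption says 90% human, while positive measurement shows only 55% – the gap is the “not measurable” and “unknown” traffic that an IVT-only number silently counts as human.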
One other important note here is that not all of the fraud is measured or measurable. Hackers are very good at covering their tracks and disguising their bots, and the good guys may not even know what to look for. So there remain various forms of fraud that are simply not detected or accounted for. You should always assume there is something not yet detected, and keep looking for anything that seems off or strange in your analytics.
Click-Through Rate Benchmarks For Display Ads
Finally, we get to what everyone’s been waiting for – click-through rates. See the chart below. It is a Google benchmark, which they no longer publish. But note the range of click-through rates over time: all of it is in the 0.1% range for display ads. While there is no way to know for sure whether Google scrubbed bots from these benchmarks, 0.1% is approximately the right order of magnitude for humans clicking on banner ads voluntarily. (When was the last time you voluntarily clicked on a display ad?)
I realize that many of you are probably given campaign performance reports that show CTRs in the 5 – 10% range these days. The question would be whether those vendors properly subtracted out bot clicks. As I have written elsewhere, bots are very good at clicking; and they can tune the CTRs higher so you think you are getting better engagement from ads running on fake sites. You are thus tricked into shifting more dollars to fraudulent long tail sites selling ad impressions through programmatic ad exchanges.
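The effect of subtracting bot clicks is simple arithmetic. Here is a hedged sketch with hypothetical numbers; in practice, detecting the bot clicks is the hard part.

```python
# Hypothetical numbers: what an "8% CTR" can look like once detected
# bot clicks are removed. Bot detection itself is the hard part.

impressions = 1_000_000
reported_clicks = 80_000   # vendor-reported clicks: an 8% CTR
bot_clicks = 78_500        # clicks attributed to bots after filtering

reported_ctr = reported_clicks / impressions
human_ctr = (reported_clicks - bot_clicks) / impressions

print(f"Reported CTR: {reported_ctr:.2%}")
print(f"Human CTR:    {human_ctr:.2%}")
```

With these illustrative figures, the reported 8.00% CTR collapses to a human CTR of 0.15% – right back in the historical display-ad range discussed above.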
I also realize it is very tempting to want to believe that the much higher CTRs you are seeing are due to the super-duper-awesome hyper and behavioral targeting that your adtech vendors sold you on. Did they tell you about the bots that pretend to be oncologists to earn higher retargeting CPMs? Did they tell you that bots pretend to be “back-to-school intenders” by deliberately looking at textbooks and backpacks on Amazon? And did they tell you that bots tune the CTRs to the 5 – 10% range so they can get you to shift more money to fraudulent sites; the bot makers know that much higher click through rates might be too obviously fraud and too easy to spot.
Pro Tip: Be sure to ask your viewability vendor if they subtract out bot traffic before they report “viewability” to you; if they didn’t or don’t even know about this, how accurate do you think their measurements are?
CMOs and marketers, what do you think now? Were you comparing your campaign benchmarks to the right things? Do you still think your super-duper high click through rates were from real humans or just bots? And did you accidentally shift more ad spend to long tail sites in programmatic channels because you got tricked by their higher CTRs and lower IVT? Have a closer look, and let me know.