The mysteries of measuring marketing response, part 1: Delivery system-based counting

Reading Time: 4 minutes

It goes without saying that measuring the effectiveness of online marketing is web analytics’ #1 ‘killer app’. But how realistic a picture of the value of online marketing can web analytics deliver? Come to that, is there any such thing as a true picture of marketing effectiveness?

The short answer to the above question is no. Depending on the measurement system you use, and the counting/reconciliation methodology you use, you can get pretty much any picture of marketing response that you want – and plenty you don’t. Today’s post is the first of a series which will combine to provide a short(ish) field guide to the more common counting methodologies you’ll find. Ask your vendor which one they use, and why.

 

Delivery system-based counting

The simplest way to measure the impact of your online marketing is to let the system that’s delivering the marketing do it for you. Examples of such systems include Google Adwords for paid search, Atlas for online ad-serving, or Constant Contact for e-mail.

Technically, this solution usually involves a ‘click redirect’; when the user clicks on a banner ad, or a paid search link, or a link in an e-mail, their browser is actually directed to a long and complicated URL on a redirection server, which automatically redirects them to the actual destination URL, but not before making a note of the fact (i.e. recording the click).
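
As a rough sketch of what such a redirect endpoint does (illustrative only; the route and parameter names are made up, and real ad servers are considerably more involved):

```python
# Illustrative sketch of a click-redirect endpoint (Flask); route and parameter
# names are hypothetical, not any particular vendor's format.
from flask import Flask, request, redirect

app = Flask(__name__)

@app.route("/click")
def click():
    campaign_id = request.args.get("campaign_id", "unknown")
    destination = request.args.get("destination", "https://www.example.com/")

    # Make a note of the click before sending the user on their way
    app.logger.info("click recorded for campaign %s", campaign_id)

    # ...then redirect the browser to the real landing page
    return redirect(destination, code=302)
```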

Since they’re also delivering the marketing (i.e. showing the ads, or sending the e-mails), these systems can also report on how many times the ad was shown or the e-mail sent, and also the reach of the marketing, i.e. how many people saw it in a given time period. They can also report on how much it cost; indeed, these measurement systems are used in the billing systems of pay-per-click networks like Google Adwords.

A key enhancement to this method of counting is to capture ‘events’ (usually a specific page being requested) on the ‘destination’ website (i.e. your website) and correlate these back to the original marketing. The method used here is to place tag code (sometimes known as a ‘spotlight’ tag, a term coined by DoubleClick) on key pages on the destination site which send information about the fact that (for example) a purchase was made back to the marketing system. In an advanced version of this, the value of purchases can be sent back.
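
To make the idea concrete, here is an illustrative sketch of the kind of ‘conversion beacon’ URL such a tag causes the browser to request. The endpoint and parameter names are invented; a real spotlight tag is an HTML/JavaScript snippet supplied by the vendor.

```python
# Illustrative only: the kind of 'beacon' URL a conversion tag asks the browser
# to request. Endpoint and parameter names are invented for this sketch.
from urllib.parse import urlencode

def conversion_beacon_url(order_id: str, order_value: float) -> str:
    params = urlencode({
        "event": "purchase",            # which conversion event fired
        "order_id": order_id,           # lets the vendor de-duplicate repeat requests
        "value": f"{order_value:.2f}",  # the 'advanced' version: send the purchase value back
    })
    return f"https://ads.example-vendor.com/spotlight?{params}"

print(conversion_beacon_url("A-1234", 199.99))
# https://ads.example-vendor.com/spotlight?event=purchase&order_id=A-1234&value=199.99
```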

The ‘conversion’ event is linked back to the original ad delivery/click by means of a third-party cookie, and correlated over some kind of time window, such as 30 days (i.e. if a conversion event occurs within 30 days of a click from the same user, that conversion is allocated to the bit of marketing that drove the click).
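
In code terms, the matching logic boils down to something like the following sketch (simplified; real systems also handle impressions, multiple clicks and tie-breaking rules):

```python
# Simplified sketch of window-based attribution: a conversion is credited to the
# most recent prior click from the same (cookie-identified) user, provided it
# falls within the attribution window.
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(days=30)

def attribute(conversion_time: datetime, cookie_id: str, clicks: list) -> dict | None:
    """Return the qualifying click to credit, or None if there isn't one."""
    candidates = [
        c for c in clicks
        if c["cookie_id"] == cookie_id
        and c["time"] <= conversion_time <= c["time"] + ATTRIBUTION_WINDOW
    ]
    return max(candidates, key=lambda c: c["time"]) if candidates else None
```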

So a full implementation of this kind of counting system could yield the following information in a report:

              Impressions    Clicks    Cost       Purchases (#)    Purchases ($)    ROI (%)¹
Paid Search     1,000,000    10,000    $10,000              200          $40,000        400%

¹ This ROI figure doesn’t take into account the cost of the goods sold, so isn’t a true ROI, but it’s the closest that most such systems get.
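
To make the arithmetic behind that row explicit (the 50% gross margin at the end is purely an assumed figure, used to show how a true ROI would differ from the reported one):

```python
# Working through the example row above. The 50% gross margin at the end is an
# assumed figure, used only to show how a true ROI would differ.
impressions = 1_000_000
clicks = 10_000
cost = 10_000.0         # media cost, $
purchases = 200
revenue = 40_000.0      # purchase value, $

ctr = clicks / impressions             # 0.01 -> 1% click-through rate
cpc = cost / clicks                    # 1.0  -> $1.00 per click
conversion_rate = purchases / clicks   # 0.02 -> 2% of clicks purchase
reported_roi = revenue / cost          # 4.0  -> the 400% in the report

assumed_margin = 0.5                                   # hypothetical gross margin
true_roi = (revenue * assumed_margin - cost) / cost    # 1.0 -> 100%
```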

Limitations/shortcomings

The main limitation of this method of counting springs from the same source as its strength: it is delivery system-centric. So if you’re using, say, three different kinds of marketing (as in the example above), you’ll get three different sets of reports on how it’s working, which you’ll have to compare yourself to get a picture of which marketing is working best (easier said than done).

This task is made even harder by the fact that each system wants to claim as many of your site’s conversions for its own marketing as it can. This leads to multiple systems claiming credit for the same conversion.

To understand how this happens, consider the following example: a user clicks on a paid search ad, and goes to a site, where they sign up for a newsletter. Two weeks later, they receive the newsletter, click on one of the links, and spend $1,000 on the site. Because the conversion is within 30 days of the original paid search ad click, the paid search system claims credit for the conversion; but because the conversion also occurred shortly after a click on an e-mail link, the e-mail system claims credit too.
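
Here is a toy illustration of the resulting reports, using the figures from the example above (dates are invented; each system applies its own 30-day window independently):

```python
# Toy illustration: both delivery systems apply their own 30-day window
# independently, so both claim the same $1,000 conversion. Dates are invented.
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)

paid_search_click = datetime(2024, 6, 1)    # user clicks the paid search ad
email_click       = datetime(2024, 6, 15)   # clicks the newsletter link two weeks later
conversion_time   = datetime(2024, 6, 15)   # spends $1,000 shortly afterwards
conversion_value  = 1_000.0

credited = {
    "paid search": conversion_value if conversion_time - paid_search_click <= WINDOW else 0.0,
    "e-mail":      conversion_value if conversion_time - email_click <= WINDOW else 0.0,
}

print(credited)                # {'paid search': 1000.0, 'e-mail': 1000.0}
print(sum(credited.values()))  # 2000.0 -- twice the revenue the site actually took
```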

Who to believe? Clearly both elements had some impact on the propensity to convert, but neither individual system is going to admit that, because that would mean giving away some of the value of the conversion, and reporting a lower ROI.

You can’t solve this problem with delivery system reporting – you have to use web analytics on your site itself to solve this. We’ll be exploring this thread in more detail in the next couple of posts in this series.

Another limitation of this counting system is that the number of clicks reported by the delivery system is always higher (usually by about 10%) than the number of inbound arrivals at the destination site. The reason for this is that the Internet is an unreliable place, and so are users’ computers; between clicking the link and arriving at the destination site there are a whole bunch of things that can go wrong, such as the user’s Internet connection going down, or the user (from where they’re sitting on the Internet) just not being able to see the destination site. So the delivery system measures the click, but the redirection never winds up sending the user to the destination site. So yes, if you’re paying per click for ads, you’re overpaying by about 10%. But so is everyone else, so get over it.
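
The reconciliation itself is trivial arithmetic; the 10% here is just the rule-of-thumb figure from the paragraph above, not a universal constant:

```python
# Rough reconciliation of delivery-system clicks against site-side arrivals;
# the figures are illustrative, using the ~10% rule of thumb mentioned above.
clicks_reported = 10_000    # what the delivery system records (and bills for)
arrivals_measured = 9_000   # what the destination site actually sees arrive

discrepancy = (clicks_reported - arrivals_measured) / clicks_reported
print(f"{discrepancy:.0%} of paid clicks never reached the site")   # 10%
```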

Finally, this kind of system is vulnerable to the vicissitudes of third-party cookies, which are hardly the most popular kids on the block these days. If the user flushes his or her cookies between their original ad click and when they actually convert, their conversion cannot be correlated back to their click.

Whether to trust the data

The net of this is that you can trust the delivery (impressions) and click information in a report from your marketing delivery system vendor, but you should take the rest with a healthy pinch of salt. Conversion counts in particular will be over-estimated; you should probably discount these figures by around 20%, though this figure depends entirely on the mix of marketing that you’re doing (if you are only doing one kind of marketing, the figures will be more accurate).

If you have a web analytics solution deployed against your website, make sure it’s measuring marketing response too (more on this in the next post in this series), and compare the two to get something of a reality check.

4 thoughts on “The mysteries of measuring marketing response, part 1: Delivery system-based counting”

  1. Dear Ian,
    This article brought up something that I have been thinking about.
    I recently was asked a question that seems simple enough, and it is basically the ultimate axiom of internet marketing. Yet there has been very little research done on this. (That I can find). Do you know of any research that has been done? Would you be willing to point me in the right direction or pass this along to someone who can study it?
    How many units of impressions and/or clicks and/or conversions does it take to have enough data to say that the results are statistically significant so conclusions about performance can be drawn?
    There was no source that could answer this question precisely (or even at all). The answer to this question, apparently, is that there is no answer, because no one has asked it yet. Some claim that the first impressions are the most important. Others claim that you cannot judge in one week’s time whether an ad should be pulled. It may sound obvious, but if the ultimate goal is simply an excellent initial conversion rate, you should pull ads with an extremely low conversion rate. If your goal is to create brand awareness and your product depends on that, then you must not judge by immediate results, but give it some time.
    This is the short answer, but surely there must be a more definitive one. What do you think?
    Thanks for your time,
    Matt Aronowitz

  2. Matt’s question does not have a definitive answer, because so many things may be taken into account (the level of confidence, the strength of the relationship, the number of factors used as predictors, etc.).
    But the short technical answer is that with as few as 10+ observations, you may conclude that there is a statistically significant difference. As a rule of thumb, one would suggest a minimum of 30 observations. If the effect is weak (say, 49% prefer design A and 51% prefer design B), thousands of observations may be required in order to ascertain the significance with a high degree of confidence.
    You can find a calculator here: http://www.danielsoper.com/statcalc/calc01.aspx
    My guess is that in web analytics sample size is not a problem. At least not in the way the question is usually asked in (social) science, where we are concerned with the cost of a study that will be statistically conclusive on a given question.
    In web analytics, data is so abundant that almost any test will yield statistically significant differences that could nonetheless be managerially pointless. If you have a sample of 30,000 clicks, the tiniest difference in behavior is significant. But a significant difference in conversion rates of .01% may not justify the expense of creating/maintaining alternate designs.

  3. Wow Ian this is such a one of a kind post. I love reading this one. With regards to your question Matt, Stephen explains it a lot. He has a point there…try to check the link he gave. Might help..
    Regards to you…

Comments are closed.