
January 30, 2009

Internet Explorer 8 RC1: Porn mode gets a face-lift

Now if that headline doesn’t get me some search ranking juice, nothing will - though my contextual ads (left) are likely to be less impressed.

I was going to post this earlier in the week, but Eric Peterson’s swashbuckling defense of cookies (and my hand-wringing response) intervened. As it turns out, though, that debate is very relevant to this post, which concerns the latest build of Internet Explorer 8, which hit the web this week. (Internet Explorer is still used by around 80% of the world’s web users – though not by you lot, who seem to favor Firefox by a nose.)

I’ve already posted once about IE8, and talked about its new “InPrivate” features (also known as “porn mode”) that allow you to surf the web without leaving a trace (on the machine you’re using, of course – the websites you visit can still track your behavior). It’s worthy of another post because the specific feature that piqued my interest the last time – InPrivate Blocking – has a new name and somewhat different behavior now.

The new name for InPrivate Blocking is InPrivate Filtering, which is certainly a better name. You may recall that InPrivate Blocking was a feature that allowed the user to tell IE to block requests to third-party websites, either manually or automatically once content from those sites had been served in a third-party context more than 10 times. Examples of this kind of content? Web analytics tracking tag code; ads; widgets; embedded YouTube videos. The idea is to let users opt out of this kind of content because it enables third parties to track user behavior (with or without cookies) without users really knowing.

So what’s new in RC1, apart from a friendlier name? Well, a couple of things. The first is that InPrivate Filtering can be turned on even if you’re not browsing in “InPrivate mode”, via the Safety menu, or a handy little icon in the status bar:

[Screenshot: the InPrivate Filtering icon in the IE8 status bar]

Click it, and InPrivate Filtering is on. There’s no way to turn this on by default; you have to click the icon every time you start a new IE instance.

The other major change is that there’s more control over how third-party content is blocked. In the previous beta, content was automatically blocked if it turned up as third-party content on more than 10 different sites. That number is now tunable, to anywhere between 3 and 30:

[Screenshot: the InPrivate Filtering settings dialog, showing the adjustable threshold]
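
As a rough mental model of what’s going on under the hood (my sketch, not Microsoft’s actual implementation), the browser effectively keeps a tally of how many distinct first-party sites each piece of third-party content has appeared on, and starts blocking it once that tally crosses the user’s threshold:

```typescript
// Conceptual model of InPrivate Filtering's cross-site counting - not
// Microsoft's actual implementation. Each third-party URL is tracked by
// the set of distinct first-party hosts it has been served on; once that
// set grows past the threshold (10 in the beta, tunable 3-30 in RC1),
// the content gets blocked.

class ThirdPartyFilter {
  // third-party URL -> distinct first-party hosts it has been seen on
  private sightings = new Map<string, Set<string>>();

  constructor(private threshold: number = 10) {}

  // Record a third-party request made from a given first-party page,
  // and report whether it should now be blocked.
  shouldBlock(thirdPartyUrl: string, firstPartyHost: string): boolean {
    const hosts = this.sightings.get(thirdPartyUrl) ?? new Set<string>();
    hosts.add(firstPartyHost);
    this.sightings.set(thirdPartyUrl, hosts);
    return hosts.size > this.threshold;
  }
}

// The same tracking script only starts being blocked once it has been
// seen on more sites than the threshold allows.
const filter = new ThirdPartyFilter(10);
filter.shouldBlock("http://tracker.example/tag.js", "news.example.com"); // false (1 site so far)
```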

The idea of InPrivate Filtering Subscriptions still exists – a user can import an appropriately formatted XML file (or click on a link on a site, such as this one) to subscribe to a list of blocked third-party content. I’ve not seen any public subscriptions pop up, however, in the time since IE8 beta 2 came out.

In my previous post on IE8, I wrote about how, as someone whose job depends on being able to track users, I am conflicted about this functionality. This revision makes it slightly easier for privacy hawks to block third-party content, and whilst I welcome it, my original prediction – that it will be relatively lightly used in practice – still stands.

Interestingly, since IE8 beta 2 was announced in August, other browser manufacturers have followed suit – most notably, Mozilla, which will be including InPrivate-style functionality in Firefox 3.1 – though without the third-party content blocking feature. Apple’s Safari browser has had similar functionality for some time.


January 28, 2009

Eric Peterson Rides Again

Eric Peterson has an impassioned post on his blog in which he defends the Obama Administration’s decision to use persistent cookies for tracking behavior on the Whitehouse.gov site. He directs particular ire at an article by Chris Soghoian at CNET from November which questioned whether it was a smart move for the (then) Obama Transition Team to be using embedded YouTube videos for streaming Obama’s weekly addresses on the Change.gov site.

Eric’s post is a follow-up to his post from November in which he called upon Barack Obama to relax the burdensome rules around the use of persistent cookies on Government websites. And let me say this: those rules suck. They ban the use of persistent cookies altogether, both first- and third-party. And I stand firmly behind Eric’s stance that those rules should be re-written – Government can’t be effective in providing services online if it can’t track the usage of those services.

But in his enthusiasm, Eric conflates two somewhat separate issues - cookies on the one hand, and third-party content and tracking on the other. And third-party content and tracking deserves at least as much attention as cookies (if not more, in fact).

Obama's team's decision to use YouTube to stream videos and WebTrends for web analytics means that behavior data is being sent to a third party. The Whitehouse.gov site does a pretty good job of explaining its use of cookies, but a less good job of detailing what data is being sent to third parties, who those third parties are, and how to prevent that information from being shared.
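
To make concrete how this kind of data reaches a third party, here’s a generic sketch of a tracking beacon. It’s illustrative only – not WebTrends’ or YouTube’s actual tag code – but the pattern is broadly the same: a script served on the first-party page builds a request to the vendor’s collection domain, and that request (along with any cookies the browser holds for that domain) carries the behavior data with every page view:

```typescript
// Generic third-party analytics beacon (illustrative sketch only; real
// vendor tags differ). The page includes a script that requests a tiny
// image from the vendor's collection domain; the request URL carries the
// page URL, referrer and a visitor ID, and the browser attaches any
// cookies previously set for that third-party domain.

function sendBeacon(collectorHost: string, visitorId: string): void {
  const params = new URLSearchParams({
    url: document.location.href, // which page was viewed
    ref: document.referrer,      // where the visitor came from
    vid: visitorId,              // pseudonymous visitor identifier
    ts: Date.now().toString(),   // when it happened
  });
  // Fetching the image is what delivers the data to the third party.
  const img = new Image();
  img.src = `http://${collectorHost}/collect.gif?${params.toString()}`;
}

sendBeacon("stats.example-vendor.com", "abc123"); // hypothetical collector domain
```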

It's no skin off my nose to send this data to WebTrends and Google, but that's partly because a) I know and trust those organizations, and b) the content on the Whitehouse.gov site is pretty uncontentious. But what if I were looking at detailed information about entitlement programs, or applying online for some Government help with my mortgage? There is at least a valid question to be asked about how this kind of behavior data is shared with third parties, separate from the cookie discussion.

My view? I don’t really think Government websites should be sending tracking data to third parties, or retrieving content from third-party sites (other than other Government sites). There are plenty of options for first-party analytics solutions which offer just as much functionality as hosted solutions, and which would allow the Government to maintain control of this data and be definitive about how it is stored and used.
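
As an aside, the most basic form of first-party measurement – the “simple log-based tracking” I mention below – really isn’t rocket science. Here’s a minimal sketch; the log path and format are illustrative (a standard Apache-style access log is assumed):

```typescript
// Minimal first-party, log-based page-view counting (Node.js sketch).
// Assumes a standard Apache-style access log; the path below is illustrative.
import { readFileSync } from "fs";

function countPageViews(logPath: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const line of readFileSync(logPath, "utf8").split("\n")) {
    // e.g.: 127.0.0.1 - - [21/Jan/2009:10:00:00 +0000] "GET /agenda/ HTTP/1.1" 200 5123
    const match = line.match(/"GET ([^ ?]+)[^"]* HTTP/);
    if (!match) continue;
    const path = match[1];
    if (/\.(gif|png|jpg|css|js|ico)$/.test(path)) continue; // skip static assets
    counts.set(path, (counts.get(path) ?? 0) + 1);
  }
  return counts;
}

console.log(countPageViews("/var/log/httpd/access_log"));
```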

Stop wasting their time!

Eric also makes the point that, with everything else that’s going on right now, it’s borderline irresponsible to be chewing up the new administration’s time with pedantic questions about cookies or third-party tracking. But I don't think it's inappropriate at this stage to flag this to the Obama administration, because I imagine that at this moment (or very shortly) a variety of Federal agencies are looking at how they can put more information and services online.

Helping the administration to set sensible policies now will stop precious money being wasted if policies have to be changed later. And besides, wasn’t it Eric who called on Obama’s team to take the time to review the rules in the first place? Could they not churn out some websites with some simple log-based tracking now, and then focus on E-government policy when the economy’s calmed down?

ObamaTube.com

Another issue addressed in Chris’s original post is the wisdom of using YouTube (or indeed any third-party streaming service) for the videos on the Change.gov site (YouTube is also used on Whitehouse.gov). This raises a number of questions, such as how Google was chosen over, say, Vimeo, or Hulu, or MSN Video, and whether there are any SLAs in place to ensure this material is available on an ongoing basis.

Let me make it clear that I don't object to Obama's addresses being available on YouTube - they should be there, and on every other video streaming website. But for information published through the Whitehouse.gov website itself, I'm not sure that a third-party streaming site is the best choice. How confident can we be about the integrity of this information? After all, we wouldn't want Obama to be RickRolled, now would we?

Yeah, yeah, grumble, grumble… you done yet?

You’re probably thinking “Jeez, what a killjoy” as you read this post. And it’s true that privacy wonks (not that I’d fully count myself as one) do have a rather Cassandra-ish quality, always looking for the bad. But that tendency is an essential part of the dynamics of debates like this one – which means that Eric’s robust post is also essential and welcome, I should add. And we did get into rather hot water over the previous administration’s disregard for privacy. So it only makes sense that the new guys should get to hear these concerns now.


January 21, 2009

Omniture stumbles

Chatter is building on the interwebs about Omniture’s recent (and ongoing) latency woes. Looks like both SiteCatalyst and Discover are days behind in processing data (according to messages on Twitter, up to around 5 – 7 days in some cases). And it looks like the situation is still getting worse, rather than better.

I have no insight into the cause of Omniture’s difficulties, or how widespread they are. It may be that they’re related to the December release of SiteCatalyst 14.3, which seems to contain a number of fairly broad new features that may have had an impact on the platform’s ETL stability. Behind the scenes, Omniture may have made some changes to start integrating HBX’s feature set (especially its Active Segmentation) into SiteCatalyst as a prelude to a final migration push for the remaining HBX customers. Omniture’s certainly not saying – they’ve been conspicuously silent since the start of these problems.

Whatever the cause, I can certainly empathize with this kind of situation – we had all sorts of difficulty dealing with latency issues in my WebAbacus days. And we can be confident that Omniture will (eventually) fix these problems, and will probably not lose very many customers as a result (though, in the teeth of a recession, it can’t be great for attracting new customers).

But do these problems tell us something more about Omniture’s (or any other web analytics company’s) ability to run a viable business? Infrastructure costs are a big part of a web analytics firm’s cost base (at least, for those with a hosted offering, which is all of them). And unfortunately, these costs don’t really scale linearly with the charging method that most Enterprise vendors use – charging by page views captured. Factors like the amount a tool is used, and the complexity of the reports that are being called upon, have a big impact on the load placed on a web analytics system, and the resulting infrastructure cost. It’s tricky for a vendor to recoup this cost without seeming avaricious.

As Omniture’s business grows, it has a constant need to invest in its infrastructure to keep pace with the demand for its services. But as the economy has worsened, it must be terribly tempting to see if a little more juice can be squeezed out of the existing kit, especially with its 2008 earnings due later this month. This will be as true for any other vendor (such as Webtrends or Coremetrics) as it is for Omniture, and these remarks shouldn’t be seen as a pop at our friends in Orem. But the nub is, can Enterprise web analytics pay the bills for its own infrastructure cost? Or will all web analytics ultimately need to be subsidized by something else (such as, oh, I don’t know, advertising)?

Your thoughts, please.


January 14, 2009

PubMatic kicks us when we’re down (but gently)

As if we weren’t all feeling gloomy enough already, PubMatic has just released its Q4 2008 Ad Price Index report, which makes for sobering reading. For those of you not familiar with PubMatic, they provide “multi-network optimization” for publishers who are looking to maximize the yield on their remnant ad inventory (i.e. the inventory the publisher can’t sell themselves).

Rather than manually dealing with a handful of networks directly, the publisher hands their inventory over to PubMatic, which ensures that the most profitable ad is shown, whether it comes from Google AdSense, BlueLithium, AdBrite, or another network. Since a key part of what PubMatic does is measure the CPMs for online ads, they have access to lots of ad price data, and every quarter they roll this data up into a report (available here as a PDF).
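
For the uninitiated, the yardstick throughout the report is eCPM – effective revenue per thousand impressions – which is also the number a multi-network optimizer tries to maximize for each slice of inventory. The arithmetic is trivial, but worth spelling out since all the dollar figures that follow derive from it (the numbers in this sketch are made up):

```typescript
// eCPM (effective cost per mille): revenue earned per thousand impressions.
// This is the yardstick a multi-network optimizer uses when deciding which
// network should get a given slice of inventory. Figures below are invented.

function ecpm(revenueUsd: number, impressions: number): number {
  return (revenueUsd / impressions) * 1000;
}

// A remnant placement that earned $12 across 20,000 impressions:
console.log(ecpm(12, 20_000)); // => 0.6, i.e. a $0.60 eCPM
```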

PubMatic has been doing this for 15 months now, and so far, they’ve yet to deliver any good news:

[Chart: PubMatic Ad Price Index – average ad price trend over the past 15 months]

Given the economic Armageddon that overtook the world, PubMatic’s report that average prices only softened by $0.01 during Q4 actually seems like pretty good news. But then again, Q4 was the holiday season; and compared to Q4 2007, Q4 2008’s numbers look pretty horrendous.

The detail of the report contains some more interesting tidbits: for example, the average CPMs for small sites (fewer than 1 million PV/mo) are the highest, at around $0.60, whilst the average CPMs for large sites languish at around the $0.17 mark.

Before you start predicting the doom of the mainstream media, however, it should be pointed out (as Mike Nolet has done) that there is a sample bias in the PubMatic numbers. A small publisher is wholly dependent on ad networks for its revenue (lacking the resources to sell its inventory itself), and so is likely sending all of its inventory – including the juiciest stuff on the home page – to PubMatic; a large site, by contrast, will only be sending the inventory it couldn’t sell itself – i.e. the bottom-of-the-barrel stuff.

It also turns out that average prices for the largest and smallest publishers have slumped by around 50% in the past year, whilst prices for medium-sized sites have remained more solid:

[Chart: year-on-year change in average ad prices by publisher size]

I’m at something of a loss to explain why this might be – at the high end, it may be because large sites are becoming more efficient at selling their inventory themselves, so it’s only the really cheap stuff that is being passed on to PubMatic; whilst at the bottom end, small publishers are becoming increasingly crowded out by new sites.

It would be immensely useful if PubMatic provided some kind of indication of the proportion of each site’s inventory that is being served through them; this would make it easier to understand whether changes in average prices through PubMatic are the result of a change in the mix of inventory that is being passed to the company. However, I would be very surprised if PubMatic had access to this kind of data.

One more thing…

Once you’re done reading the Ad Price Report, stick around on the PubMatic site a little longer and download their White Paper entitled “Death to the Ad Network Daisy Chain”. This little document does a nice job of explaining how an impression is passed from one ad network to another, and highlights the surprisingly high proportion of ad calls that are returned ‘unsold’ by networks. The document then goes on to talk about how ad operations folk have to manually set up ‘daisy chains’ of ad networks to try to ensure that the maximum amount of inventory is sold. As the title of the document implies, this is held to be a bad thing.

Because of the nature of the business that PubMatic is in, the recommendation in the document is that publishers use ‘dynamic daisy-chaining’ (which is essentially what PubMatic does, choosing the order of daisy-chaining based on expectations about which network will be likely to monetize an impression most effectively) to solve this problem. At one point the document states (my emphasis):

Due to the volatility of online ad pricing … creating a dynamic “chain” of ad networks, rather than a static one, is the only way for a publisher to ensure that they can get the best price possible for their ad space.

I would respectfully disagree with this statement; another way of achieving this is to use an ad network that is a member of an ad exchange, and which can therefore draw on a larger pool of advertisers than just those with whom it has a direct relationship.
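
To make the daisy-chain mechanics concrete, here’s a rough sketch of the static-versus-dynamic distinction. The interfaces are hypothetical stand-ins rather than any ad server’s real API: a static chain tries networks in a fixed, hand-configured order and passes the impression down whenever a network returns it unsold, while a dynamic chain re-orders the same networks per impression according to which is predicted to pay the most:

```typescript
// Simplified sketch of ad-network daisy-chaining. The AdNetwork interface
// and predictEcpm() are hypothetical stand-ins, not any vendor's real API.

interface Impression {
  siteId: string;
  adSlot: string;
}

interface AdNetwork {
  name: string;
  // Returns ad markup if the network fills the impression, or null if it
  // comes back "unsold" and must be passed down the chain.
  requestAd(impression: Impression): string | null;
  // Expected eCPM for this impression, e.g. from historical fill/price data.
  predictEcpm(impression: Impression): number;
}

// Static daisy chain: a fixed, hand-configured order of networks.
function serveStatic(chain: AdNetwork[], imp: Impression): string | null {
  for (const network of chain) {
    const ad = network.requestAd(imp);
    if (ad !== null) return ad; // filled - stop here
  }
  return null; // fell off the end of the chain unsold
}

// Dynamic daisy chain: re-order per impression by predicted value,
// then fall through in exactly the same way.
function serveDynamic(networks: AdNetwork[], imp: Impression): string | null {
  const ordered = [...networks].sort(
    (a, b) => b.predictEcpm(imp) - a.predictEcpm(imp)
  );
  return serveStatic(ordered, imp);
}
```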

But I don’t disagree with the main sentiment of the PubMatic paper, which is that publishers still struggle with significant inefficiencies in the way they monetize inventory; and I believe we’ll see the kind of multi-network optimization solution that PubMatic offers (also available from Rubicon and AdMeld) become increasingly important as the year wears on.

