The world of digital display advertising is in a greater state of flux than ever. Over the past few years we have gone from a digital replication of traditional media buying to a world of Demand Side Platforms, where media is bought on an individual ad impression basis. The advantage of this shift for advertisers is that we can now target the right user, in the right environment, at the best rate. But with so much choice, how do we know which impression is the right one?
This past year has seen almost everyone in the industry focusing on third-party data – whose data is best, and how should it be used? What do we mean by third-party data? Display targeting data has typically come from two sources: the media owner or the advertiser themselves (for retargeting, for example). More recently, a number of companies have begun to specialise in selling targeting data that can be used across any media an advertiser wants to buy. This separation of media and targeting should ultimately improve campaigns – it allows media owners and targeting companies to focus on their respective core strengths.
So there is massive potential for data to further revolutionise the way we buy, helping to inform buying algorithms as to which users represent the greatest prospects at any given moment. And yet, for agencies, there are still many questions about the actual value of this data and whether the premium paid for this intelligence will ever generate a positive return.
Third-party data providers will claim that their data massively improves campaign performance and reduces impression wastage. But the current cost model – data is paid for, like media, on a cost-per-thousand-impressions (CPM) basis – creates an inherent friction. If the data improves my targeting and reduces wastage, then I can run a smaller, more efficient campaign. In other words, if the data is as excellent as is often claimed, we would only need to purchase the small proportion of it that is relevant to a particular campaign and the audience we want to target. The better the data, the less of it we actually need.
Up until now, data has almost always been bought and sold on a CPM basis, and because of the pricing problems described above, the data industry has struggled to adequately monetise these small, well-defined bundles of quality data. The result is that data is often sold at a significant premium, sometimes at an even higher CPM than the media itself. At such prices it is hard to justify using the data at all – you could simply buy twice as much media for the same budget. And with advertisers only now realising the true value of their own data, many are concluding they are better off sticking with first-party data.
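To make the economics concrete, here is a minimal sketch of the break-even calculation, using invented CPM figures for illustration only – the article cites no specific prices, so the budget and CPM values below are assumptions.

```python
# Illustrative comparison: buying targeting data at a CPM premium vs.
# spending the same budget on additional untargeted media.
# All figures are assumed for illustration, not industry benchmarks.

def impressions_bought(budget, media_cpm, data_cpm=0.0):
    """Impressions affordable when each thousand ads costs the media
    CPM plus an optional data CPM on top."""
    total_cpm = media_cpm + data_cpm
    return budget / total_cpm * 1000

budget = 10_000.0   # campaign budget (assumed)
media_cpm = 2.0     # cost per thousand impressions (assumed)
data_cpm = 2.0      # data priced as high as the media itself (assumed)

targeted = impressions_bought(budget, media_cpm, data_cpm)
untargeted = impressions_bought(budget, media_cpm)

# With the data CPM equal to the media CPM, the same budget buys
# exactly twice as many impressions without the data.
print(f"Targeted impressions:   {targeted:,.0f}")    # 2,500,000
print(f"Untargeted impressions: {untargeted:,.0f}")  # 5,000,000

# For the data to break even, the targeted impressions must convert
# at (untargeted / targeted) times the untargeted rate -- here, 2x.
print(f"Lift needed to break even: {untargeted / targeted:.1f}x")
```

The point of the sketch is that the data only pays for itself if the targeting lift exceeds the impression volume given up – exactly the friction the CPM pricing model creates.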
For the third-party data market to survive, we will need to move towards a different cost model: perhaps one where data is purchased at a fixed cost per campaign, as a percentage of media spend, or even tied to specific campaign KPIs (CPE, CPC, CPA etc.), removing the potential conflict between quality and quantity.
Despite the challenges, the allure of third-party data remains strong. Retargeting is still one of the most cost-effective online advertising methods, yet many advertisers and agencies are wrestling with how to reach new users rather than focusing solely on existing customers. Using third-party data for look-alike modelling – identifying new users who exhibit similar traits to your existing customers – could in theory be almost as refined, segmented and cost-effective a form of advertising as retargeting itself.
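The article does not specify how look-alike modelling works under the hood; one common, simple approach is to score prospects by their similarity to the average existing customer. The sketch below assumes invented trait vectors and user names purely for illustration – real systems use far richer features and models.

```python
# A minimal look-alike scoring sketch: rank prospects by how similar
# their trait vectors are to the centroid of existing customers.
# Traits, users and values are hypothetical illustrations.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike_scores(customers, prospects):
    """Score each prospect against the average customer profile."""
    n = len(customers)
    dims = len(customers[0])
    centroid = [sum(c[i] for c in customers) / n for i in range(dims)]
    return {uid: cosine(traits, centroid) for uid, traits in prospects.items()}

# Traits: e.g. [site-category affinity, purchase intent, ad engagement]
customers = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]]
prospects = {"user_a": [0.85, 0.8, 0.65],   # very customer-like
             "user_b": [0.1, 0.2, 0.9]}    # dissimilar profile

scores = lookalike_scores(customers, prospects)
print(max(scores, key=scores.get))  # user_a
```

Prospects scoring above some threshold would then be targeted much as retargeted users are, which is what makes the approach potentially "almost as refined" as retargeting.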
Through DSPs we can carry out sophisticated and very granular evaluations of both media and data, testing inventory quality and cost as well as the incremental value that third party data adds to the performance of a campaign.
There is definitely a place in digital display advertising for third-party data – the key is to refine the measurement models that enable the industry to understand the true value of data, and to improve the cost models so they adequately reflect that value.