Searching Beyond the Paid

Friday, July 01, 2011

PPC Ad Testing: "Can" vs. "Should"

I love PPC for a number of reasons. It’s fast, it doesn’t require a ton of money to get started, and few other marketing channels make ad testing so easy and effective.

But as much as I love PPC ad testing, I feel the need for caution. Like a kid in a candy store, PPC managers often have trouble choosing what to test. Just because you CAN test 20 things at once doesn’t mean you SHOULD.

One of the best things about PPC is the fact that you can run an almost unlimited number of tests. You can test 10 different ad variations if you want to. If you’re using display, you can even test text ads vs. image ads. Within the image ads bucket, you can test rich media, animation, several different sizes… you get the picture.

I’ve found that new PPC advertisers (and clients) see this buffet of choices and try to pile one of each on their small appetizer plate. They set up a multitude of ad tests right away, before they’ve launched a single campaign or gathered one iota of data. The thinking is, “we’re not sure what will work, so let’s try them all!”

To an extent, they’re right. With a brand new advertiser and a brand new campaign, you really don’t know what will work. Because PPC generates a lot of data so quickly, in some ways it’s logical to test every option so you can get that data as fast as you can.

But let’s think about this for a second. Way back in business school, we learned about a little thing called opportunity cost. Opportunity cost takes into account a few key factors.

Time.

The mere task of setting up 10 or more ad tests can be daunting, even to an experienced PPC manager. What copy will we use in each variation? What exactly are we testing? Even with a tool like AdWords Editor, it takes time to set up each variation – and it’s compounded by the number of ad groups you’re working with.

There’s also the issue of the amount of time it takes to get results. For simplicity’s sake, let’s say you need 100 clicks on each ad to reach statistical significance, and at your traffic levels a two-ad test gets there in a week. If you’re running 2 ads, that’s 200 clicks. But your traffic doesn’t grow just because you add variations – if you’re running 10 ads, you need 1,000 clicks, and you’ve just turned a 1-week test into a 5-week test. And in the meantime, you may be running an ad that’s losing money – which you won’t find out for 5 weeks. Yikes.
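The arithmetic here is easy to sketch. A minimal calculation, assuming a fixed traffic level of 200 clicks per week split evenly across however many ads are in rotation (both numbers are illustrative, not AdWords data):

```python
# Rough test-duration math: your traffic is fixed, so every extra
# ad variation stretches the test. The 200-clicks/week figure and the
# 100-click threshold are simplified assumptions for illustration.

CLICKS_PER_WEEK = 200       # total clicks the ad group gets each week
CLICKS_NEEDED_PER_AD = 100  # simplified significance threshold per ad

def weeks_to_finish(num_ads):
    """Weeks until every ad in the rotation has enough clicks."""
    total_clicks_needed = num_ads * CLICKS_NEEDED_PER_AD
    return total_clicks_needed / CLICKS_PER_WEEK

print(weeks_to_finish(2))   # 1.0 -> a 2-ad test wraps up in a week
print(weeks_to_finish(10))  # 5.0 -> a 10-ad test drags on for 5 weeks
```

The point of the sketch: test duration scales linearly with the number of variations, because the clicks are divided, not multiplied.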

Margin of error.

The more information you’re working with, whether it be text or data, the higher the chance of making a mistake. How many of us have copied and pasted the wrong ad copy into an ad group? How many of us have made a crucial typo in ad copy? (Side story: Years ago when I worked in newspaper classifieds, we ran a real estate display ad for an open house at a $300,000 home, complete with a lovely photo of the sprawling manse – and put $30,000 in the ad copy. Needless to say, the open house was mobbed with unqualified buyers. We got a pleasant call from the REALTOR the next morning that I can’t repeat here. And no, this wasn’t my error. But it sure was memorable.)

The margin for error is even greater when it comes to analyzing test data. Running statistical significance tests on all the permutations in a huge multivariate test is not a ton of fun – and a math mistake can cost thousands of dollars.
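For a simple two-ad test, that significance math is small enough to script rather than do by hand. Here's a sketch of a standard two-proportion z-test comparing two ads' conversion rates – the click and conversion counts are made up, and a real multivariate test with many permutations would also need a correction for multiple comparisons:

```python
import math

def two_proportion_z(conv_a, clicks_a, conv_b, clicks_b):
    """Z statistic for the difference between two conversion rates."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled rate under the null hypothesis that both ads perform equally.
    p = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p * (1 - p) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / se

# Hypothetical results: ad A converted 35 of 1,000 clicks, ad B 18 of 1,000.
z = two_proportion_z(35, 1000, 18, 1000)
print(round(z, 2))  # 2.37 -> |z| > 1.96 means significant at the 95% level
```

Scripting the test once and reusing it is one way to shrink the margin for error the post describes: the formula lives in one place instead of being retyped into a spreadsheet for every ad pair.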

Money.

By now it’s pretty clear how a complicated test can cost you actual dollars and cents. If a test takes too long, you might be paying for losing ads for weeks or even months. And if you make a mistake in ad copy or in analysis, that’ll cost you, too.

The moral of the story is, just because you CAN test all kinds of fancy things in PPC doesn’t mean you SHOULD test them. Using a systematic approach is much better (and easier) in the long run.


2 Comments:

  • Great advice Melissa - if you're testing too many different variants it's also hard to establish which part made all the difference!

    Have you got any tips on how to choose what to test first? Say you inherit an account that's seen better days and the quality scores are abysmal account-wide. Would you look at ad copy, bids, ad groups - or something entirely different - first?

    By Blogger Katie Saxon, at 9:57 AM EDT  

  • Hi! If QS is low on an inherited account, I usually start with a total ad group restructure, with small, tight ad groups and highly relevant ad copy. I'll also start culling very low QS terms that aren't relevant to the offering. That usually helps.

    By Blogger Melissa, at 10:04 AM EDT  

