AdWords Campaign Experiments: Details and Pitfalls
Back in September, I wrote a brief overview of AdWords Campaign Experiments (ACE), and Joe Kerschbaum wrote an excellent post on ways to use the feature.
ACE is a great enhancement that can really take your PPC campaigns to the next level. That said, there are a few important details you need to know before using it, as well as some pitfalls to watch out for. Let's look at an example to illustrate.
ACE Details
One of the highly touted ways to use Experiments is to try new keywords, and with good reason. Maybe there's a broad match keyword you're not sure will generate the results you want, or maybe there's a keyword that is marginally relevant to your campaign, but you'd still like to test it out.
Pre-Experiments, your only option was to add the keyword and watch it like a hawk while hoping for the best. Now, you can use Experiments to display the keyword on as little as 5 percent of your total traffic. Great news, right?
Yes and no. First of all, using Experiments requires at least a rudimentary understanding of testing principles. If you haven't conducted controlled tests before, even the terminology in the Experiments documentation will be confusing.
It's also a little tricky getting your head around the "control," "experiment," and "control + experiment" categories. Basically, the "experiment" will only run on the percent of traffic you specify.
So if you only want your test keyword to run 10 percent of the time, you'll need to set it to "experiment only." Conversely, if you don't want a particular keyword or set of keywords included in the experiment, you'll need to set it to "control only."
Remember, Experiments run at the campaign level, so if you're only testing keywords within one ad group, be sure to set the other non-experiment ad groups to "control," as well.
It gets confusing, to be sure.
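If it helps to see the split spelled out, here's how I think of the three statuses in plain terms. The sketch below is ordinary Python, not the AdWords API; the keywords and the eligibility helper are made up purely for illustration of how the experiment/control slices work as described above.

```python
import random

# A toy campaign: each keyword carries one of the three ACE statuses.
# The keywords themselves are made up for illustration.
keywords = [
    {"text": "blue widgets",       "status": "control only"},
    {"text": "buy blue widgets",   "status": "control and experiment"},
    {"text": "+blue +widgets",     "status": "experiment only"},
]

EXPERIMENT_SHARE = 0.10  # the slice of traffic you assign to the experiment

def eligible_keywords(roll):
    """Return the keywords that can serve for one auction.

    A roll below EXPERIMENT_SHARE means the auction falls into the
    experiment slice; otherwise it falls into the control slice.
    """
    if roll < EXPERIMENT_SHARE:
        allowed = {"experiment only", "control and experiment"}
    else:
        allowed = {"control only", "control and experiment"}
    return [kw["text"] for kw in keywords if kw["status"] in allowed]

# Simulate a few auctions to see how often the test keyword is even eligible.
for _ in range(5):
    roll = random.random()
    print(f"{roll:.2f} -> {eligible_keywords(roll)}")
```

The takeaway: "control and experiment" keywords serve in both slices, while "experiment only" and "control only" keywords each serve in just one.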
ACE Pitfalls
One big shortfall of Experiments is that it can't be applied to campaign settings. For example, it would be great to test dayparting, networks, devices, daily budgets, and other campaign-level settings, and Experiments would be ideal for this. Unfortunately, it isn't an option at this time. I'm hoping it will be, eventually.
There are also some pretty big pitfalls to watch out for when using Experiments. The biggest one I've discovered has to do with launching changes fully.
So, you've run your experiment, and it generates great results. The logical next step would be to roll out the changes to all traffic, right? Yes, but rolling it out involves more than just clicking that button in your campaign settings.
I ran into this unfortunate pitfall in a client campaign recently. I ran an experiment on match types: I wanted to see if modified broad match performed better than regular broad match. But I wasn't testing all the keywords in the ad group -- only some of them.
I added a few modified broad match keywords and set them to "experiment only." I set the broad match variations of these terms to "control plus experiment." I set all the remaining keywords to "control only."
The test worked great: the modified broad match terms, as expected, generated a better cost per conversion with an acceptable loss of traffic (traffic did go down, but not by enough to outweigh the improved cost per conversion). So I clicked "Apply: Launch Changes Fully."
The next day, when I checked on the campaign, traffic and spend had fallen through the floor. It didn't make sense: the experiment showed a nearly negligible difference in traffic, but the rolled-out results were more like an 80 percent decrease in traffic.
When I dug into the campaign, I realized that most of my keywords were paused! What happened? Turns out, all the keywords that were set to "control only" were shut off when I launched the experiment fully.
After I thought it through, it made sense. In hindsight, I should have set all the existing keywords to "control plus experiment" instead of "control only," and the test keywords to "experiment only."
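To make that concrete, here's a continuation of the same kind of sketch: a toy version of my setup and what "Launch Changes Fully" effectively did to each status. This reflects the behavior I observed, not official documentation, and the keywords are again invented for illustration.

```python
# A toy version of my match type experiment, and what launching the
# experiment fully effectively did to each status. This mirrors the
# behavior I observed, not an official spec; keywords are made up.
setup = {
    "+blue +widgets":     "experiment only",         # new modified broad test keyword
    "blue widgets":       "control and experiment",  # broad match counterpart
    "widget accessories": "control only",            # everything else -- my mistake
}

def launch_changes_fully(setup):
    """Return each keyword's state after the experiment is launched fully."""
    result = {}
    for keyword, status in setup.items():
        # Control-only keywords get dropped along with the control slice;
        # everything attached to the experiment keeps running.
        result[keyword] = "paused" if status == "control only" else "active"
    return result

for keyword, state in launch_changes_fully(setup).items():
    print(f"{keyword}: {state}")
```

Had the non-test keywords been set to "control plus experiment," they would have stayed active after the launch.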
Once I realized the issue, I was able to fix it quickly, and luckily the client lost less than a day of traffic. But the result was unexpected and really caught me off guard.
With these caveats in mind, make use of Campaign Experiments, and you'll reap the rewards without stumbling!
This post originally appeared on Search Engine Watch on February 22, 2011.
Labels: adwords, pay per click strategy
1 Comment:
I realized that recently with an ad copy test I did. They need a way for you to approve or disapprove each change, instead of accepting all of them.
They also need a way for you to change the setting from, say, 70/30 to 60/40, once you see your experiment is not killing your results.
I am also still rooting for alerts.
By Unknown, at 10:19 AM EDT