Ok, you add a brand new Ad Group with a few keywords and 2 ads (to split test)... when Google works out the keyword QS, do you think it looks at just one of the ads randomly? Both ads as an average? What if they are both completely different, with 2 different landing pages? If your keyword QS suddenly changes, could the reason be a brand new ad that you have recently added to that Ad Group? How do you track every change you make to the ads, so you know whether or not it's contributed to the QS change? The reason I ask is that one of my keyword QS ratings was Poor - I added a new ad about 2 days ago and today the keyword QS is OK. Or could it be that the CTR has just improved?
My answers here are simplified for easier understanding. Yes, there is a lot more to QS than depicted, but this is how you can explain it in layman's terms.

No, it looks at each relationship separately:

keyword + text ad 1 = QS
keyword + text ad 2 = QS

Only the ad group QS is given as an average of the two keyword + text ad QS values:

(QS 1 + QS 2) / 2 = ad group QS

This relationship still applies:

keyword + text ad 1 = QS
keyword + text ad 2 = QS

QS doesn't suddenly change unless you are being slapped or reviewed. Adding a brand new ad will not cause a dramatic QS change, since there is no history behind it:

keyword + text ad (with 3 months' history) = heavily weighted QS
keyword + new text ad (with 1 minute's history) = lightly weighted QS

You don't, because it's not important to monitor QS like a hawk. As long as QS is not Poor, just run with it. You created a more relevant text ad and therefore Google rewarded you; you should delete the text ad that had Poor attached to it. Always remove the lower-performing ad, otherwise you could end up in Poor again if the new ad loses QS in any way.
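To make the history weighting above concrete, here's a rough sketch. The function, the use of history days as weights, and the numbers are all illustrative assumptions for this thread, not Google's actual formula:

```python
# Hypothetical sketch: each keyword + ad pairing carries its own QS,
# and pairings with more history count for more in the ad group's
# average. Weighting by days of history is an assumption made up
# for illustration.

def ad_group_qs(pairings):
    """pairings: list of (qs, history_days) tuples, one per keyword + ad pair."""
    total_weight = sum(days for _, days in pairings)
    if total_weight == 0:
        # No history at all: fall back to a plain average.
        return sum(qs for qs, _ in pairings) / len(pairings)
    return sum(qs * days for qs, days in pairings) / total_weight

# An established ad (90 days of history, QS 7) dominates a brand-new
# ad (no history, QS 3), so the ad group score barely moves:
print(ad_group_qs([(7, 90), (3, 0)]))  # -> 7.0
```

Under this toy model, adding a fresh ad to an ad group with a long-running ad shifts the average very little, which matches Robert's point about new ads carrying almost no weight at first.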
Thanks for the reply, Robert. Do you think ads actually have to be deleted if they were Poor? Is simply pausing them not enough? The reason I pause them is so I can see that I used that ad in the past and won't try using it again.
I also prefer deleting the poor ad and keeping only the best one. After all, why keep an ad that I will never use?
I also tend to pause mine, for exactly the same reason. With so many ads running, it makes sense to see what variations you have tried so that you don't start repeating poor ads. Like Robert, I haven't really tested whether deleting is better than pausing, but what I can say is that I do pause and I see no problems with my Quality Scores. Cheers, Mike
So how many ads should I have for each ad group? The more the better, or vice versa? I'm quite new to AdWords. I'm still testing which ad performs best, so I usually have around 4 or 5 ads in each ad group and keep them running for a couple of weeks. Am I screwing myself?
It depends on how many impressions and, ultimately, how many clicks your ads will get. What I do is test more ads (say 8) for ad groups which get lots of impressions - if they don't get many, then stick to something like 2 or 4 ads.
For the search network, I highly recommend never running more than 2 at a time - 3 at the very max, and there should be massive impressions behind it. If it's for content, then 3 ads minimum and no more than 5.
"For search network, I highly recommend never running more than 2 at a time, 3 at the very max and there should be massive impressions behind it" That's fair enough Robert. The reason why I pick more, is more usually when I've just started the Ad Group and I'm testing from scratch. 8 very different ads, then enables me to choose the best and use that as the control. At the start, I don't have a control and using 8 gives me a better ad (usually) than starting with just 2.
I see your logic, but how long does it take to get a large enough sample size? How many impressions and clicks do you wait for before eliminating all 7 other ads?
Experimenting with eight different ads to identify a control is absolutely fine, but not if the ads are all run together. Even for a campaign with a lot of traffic, it'll take you about a month to identify a respectable control. To identify a control, you should start with two very different ads and split-test very different variations for about a week or two (depending on traffic levels). Once the control has been identified, you should split-test it against very similar ads in an attempt to identify the subtle differences that work best.
Being able to pick a control is contingent on traffic - if you have enough traffic you can test 20 ads at once and pick a control in 48 hours. If you're running an ad group that generates 25,000 - 50,000 clicks a day, statistical significance is there before you know it.
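For anyone wondering how to actually check whether one ad's CTR beats another's at these volumes, a standard two-proportion z-test does the job. The function and the click/impression figures below are illustrative assumptions, not anyone's real campaign data:

```python
# Two-proportion z-test for comparing two ads' CTRs, using only the
# standard library. A |z| above 1.96 means the CTR difference is
# significant at roughly the 95% confidence level.
import math

def ctr_z_score(clicks_a, imps_a, clicks_b, imps_b):
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both ads perform the same.
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# At high volume, even a modest CTR gap is decisive within 48 hours:
z = ctr_z_score(1200, 20000, 1050, 20000)  # 6.0% vs 5.25% CTR
print(abs(z) > 1.96)  # -> True, the difference is significant
```

This is why the volume argument matters: with 20,000 impressions per ad, a 0.75-point CTR gap is already significant, whereas at 50 clicks a day the same gap would be statistical noise.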
I appreciate what you're saying, GuyFromChicago, but I disagree. The more ads you've got running, the less even and fair the competition between them will be. The system's even ad rotation feature isn't perfect, and the more ads there are, the more imperfect its performance will be. Even if it were perfect, the fact remains that the time of day during which any particular ad might be displayed will always be different, so each ad is always being presented to a different demographic. Split-testing just two ads is already ambiguous enough: one ad might do very well for a couple of days, only to do poorly for the rest of the week. Sometimes, certain ads do very well for a couple of weeks, only to be gradually overtaken by another ad. If there's one thing that I've learnt from all my PPC experience, it's that you can never be certain as to the outcome of results or stats, which will always fluctuate and change. All you can depend on is averages, the accuracy of which increases with time. Keeping that in mind - and just as I said in my above post - it'll take you about a month to identify a respectable control if simultaneously split-testing up to eight or so different ads, even if you have ample traffic.
Without disagreement, conversations tend to get boring. I have years' worth of data (millions and millions of clicks and tens of thousands of ad variations) that proves that with the right volume you can pick a control in 48 hours or less. If you have 8 ads that each get 10K+ clicks during that time, that's more than enough data to make a call. Where we might be disagreeing is in our definition of "control". To me, in high-volume campaigns, a control is simply the ad that you use to build your next tests from. The day after you've identified your control, you're already testing again. When you have high-volume campaigns you have to compress normal timelines to maximize your effectiveness - when you're pulling thousands of clicks a day, you can't afford to sit back and watch for 30 days trying to pick the best ad. Of course, this type of rapid testing won't work in smaller-volume campaigns. You can't make a call in 48 hours if you're getting 50 clicks a day. If you are, that's called a guess.
Masterful and GuyFromChicago both make really good points. It's up to you guys to really determine the right way. What is factual is that with enough impressions and clicks you could theoretically make a good decision on a control copy within a 48-hour period, but as Masterful said, even then the data can be inaccurate, because within a 48-hour period you never know when those ads are being shown, so you could theoretically be making a bad decision. In my opinion, if you do run 8 ads, when you pick a control copy I would pick more than 1. I would probably pick the top 2 or 3 and eliminate the rest over an additional 48-hour period to ensure I made the right decision. Lastly, @GuyFromChicago, you're right: without disagreement we would just be agreeing and patting each other on the back saying good job. Disagreement is what I enjoy most on these forums, provided the disagreement is educated, factual and plausible, because it pushes me to think outside the box and prove my theories, and maybe even redefine my theories and methods or develop new ones. =)