You create 2 x Ad A and 2 x Ad B. Both versions of each ad are exactly the same, so you now have 4 ads in the split test. After a reasonable number of clicks (say 20) you look to see if there is a difference. You only ditch Ad A, for example, if both versions of A have similar stats, as this tells you that sufficient data has been gathered. If the two versions of Ad A differ, that's a sign one of them could just have been lucky or unlucky. If they are both the same, it's a better marker that the values you are seeing are true. If they are different, leave the test to run to 30 clicks. Whilst this will take longer, it should be more accurate, and it saves you dumping an ad that may simply have had a bad few days when in fact, overall, it would have performed better than the one you kept. Your thoughts?
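(To illustrate the premise that two exact copies of an ad can show different stats purely by chance, here is a minimal Python sketch. The 2% true CTR and 1,000 impressions are assumed numbers for illustration, not figures from this thread.)

```python
import random

def simulated_ctr(true_ctr, impressions, rng):
    """Count clicks from independent impressions at a fixed true CTR."""
    clicks = sum(1 for _ in range(impressions) if rng.random() < true_ctr)
    return clicks / impressions

rng = random.Random(42)

# Two *identical* ads: same true CTR, same amount of traffic.
# Roughly 1000 impressions at a 2% CTR gives about 20 clicks,
# matching the "20 clicks" threshold discussed above.
ctr_a1 = simulated_ctr(0.02, 1000, rng)
ctr_a2 = simulated_ctr(0.02, 1000, rng)

print(f"Ad A(1) observed CTR: {ctr_a1:.2%}")
print(f"Ad A(2) observed CTR: {ctr_a2:.2%}")
# The two observed CTRs will typically differ even though the ads
# are exact copies -- that gap is pure sampling noise.
```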
Hmmm, an interesting method, but I'm not sure I see the point, since the traditional method of using just one ad per variation works just as well. What does this really tell you? You see that there's a difference, so you know that sometimes an ad gets a high and sometimes a low. But so what? You knew this anyway; all ads presumably have good and bad spells. What really matters is whether there's a statistically significant difference in CTR between one ad variation and another. I think your method might be ever so slightly useful if you're testing within a limited timeframe... but even then, if you log in to find that one group is "true" but has a lower CTR than the other group, it would still not be wise to delete it right away. Just because it's "true" for 20 clicks doesn't mean the same CTR will hold "true" over a longer period of time.
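(The "statistical difference in CTR" mentioned above can be checked with a standard two-proportion z-test. A stdlib-only Python sketch follows; the click and impression counts are invented for illustration.)

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test for a CTR difference (normal approximation).

    Returns (z, two_sided_p_value)."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 25 clicks on 1000 impressions vs 15 clicks on 1000.
z, p = two_proportion_z(25, 1000, 15, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
# With only 20-odd clicks per ad, the p-value usually stays well
# above 0.05: the CTR gap is not distinguishable from noise yet.
```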
I thought of this method because if you are testing just two ads, A and B, and after say 20 clicks Ad A is performing better, it doesn't mean Ad A will be better for the next 20 clicks. If you are testing four ads, and BOTH Ad A(1) and Ad A(2) are better than Ad B(1) and Ad B(2), that makes me more confident that I'm dropping and keeping the right ads. In one of my ad groups I tested four ads to see what would happen. Both Ad A's had a different headline from the Ad B's, and that was the only difference. I came back after 20 clicks and Ad A(1) was top on CTR and Ad A(2) was bottom, even though they were the same ad! What would I have done had I run just two ads and Ad A(1) was top? I may well have kept Ad A and dropped Ad B, but in this example Ad B is 2nd and 3rd, whilst Ad A is 1st and 4th... that tells me 20 clicks has not been enough to give a clear result, and surely the number of clicks necessary is reached when both Ad A's (and both Ad B's) show similar data? This method makes you wait long enough before making a decision, as opposed to stopping the test when you 'think' enough clicks have been made. Anyway, it was just a thought.
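(The "A(1) top, A(2) bottom" outcome above is not a fluke; a quick simulation shows how often duplicate ads split apart in the ranking. The true CTRs of 2.5% for A and 2.0% for B, and 1,000 impressions per ad, are assumed values, not data from this thread.)

```python
import random

def observed_ctr(true_ctr, impressions, rng):
    """Observed CTR after a run of independent impressions."""
    clicks = sum(1 for _ in range(impressions) if rng.random() < true_ctr)
    return clicks / impressions

def rank_labels(true_ctrs, impressions, rng):
    """Rank ads by observed CTR (best first) and return their labels."""
    observed = {label: observed_ctr(ctr, impressions, rng)
                for label, ctr in true_ctrs.items()}
    return sorted(observed, key=observed.get, reverse=True)

rng = random.Random(7)
# Assumed true CTRs: the A copy really is slightly better than B.
true_ctrs = {"A1": 0.025, "A2": 0.025, "B1": 0.020, "B2": 0.020}

interleaved = 0
trials = 500
for _ in range(trials):
    order = rank_labels(true_ctrs, 1000, rng)
    # "Interleaved" = the two A duplicates are not adjacent in the
    # ranking, e.g. A1 first and A2 last, as in the anecdote above.
    if abs(order.index("A1") - order.index("A2")) > 1:
        interleaved += 1

print(f"A duplicates split apart in {interleaved / trials:.0%} of trials")
```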
"What would I have done had I done just 2 ads and Ad A(1) was top? .. I may well have kept Ad A and dropped Ad B"

Your logic is flawed, because if you had run just one ad you would have got a CTR equivalent to the (impression-weighted) average of A(1) and A(2). The overall CTR would have fallen somewhere between the two, and I suspect that value would have been lower than the average of B(1) and B(2).
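(The averaging point is simple arithmetic: merge the two duplicates' clicks and impressions and you get the CTR a single Ad A would have shown. The counts below are invented for illustration.)

```python
# Illustrative numbers (not from the thread): A(1) looked great and
# A(2) looked poor, but a single Ad A would have shown the pooled CTR.
clicks_a1, imps_a1 = 12, 400   # A(1): 3.0% observed CTR
clicks_a2, imps_a2 = 4, 400    # A(2): 1.0% observed CTR

# Pool clicks and impressions -- a click-weighted average,
# not a simple mean of the two percentages.
pooled_ctr = (clicks_a1 + clicks_a2) / (imps_a1 + imps_a2)
print(f"Pooled Ad A CTR: {pooled_ctr:.2%}")  # 2.00%, between 1% and 3%
```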
"this tells me that 20 clicks has not been enough to give a clear result and the amount of clicks necessary, will be when both Ad A's (and both Ad B's) have similar data, surely?"

Indeed, knowing this is useful. However, the flaw is that A(1) and A(2) may both show a similar CTR too early, signalling that the CTR is ready to be judged "true" when it isn't. Common sense says you need to keep running for an extended period of time to see how CTR will fare over the longer term.
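(The "agree too early" flaw can be put in numbers: a simulation can count how often two duplicates land close to each other while both sit well away from the true CTR. The 2% true CTR, 1,000 impressions, and the two thresholds are assumed values for illustration.)

```python
import random

def clicks_after(true_ctr, impressions, rng):
    """Clicks accumulated over a run of independent impressions."""
    return sum(1 for _ in range(impressions) if rng.random() < true_ctr)

rng = random.Random(3)
true_ctr = 0.02      # assumed true CTR, made up for illustration
impressions = 1000   # roughly the traffic behind ~20 clicks at 2%

agree_but_wrong = 0
trials = 2000
for _ in range(trials):
    ctr1 = clicks_after(true_ctr, impressions, rng) / impressions
    ctr2 = clicks_after(true_ctr, impressions, rng) / impressions
    # Duplicates look "true": within 0.2 percentage points of each other.
    close_to_each_other = abs(ctr1 - ctr2) < 0.002
    # ...yet their shared estimate is off by more than 0.5 points.
    far_from_truth = abs((ctr1 + ctr2) / 2 - true_ctr) > 0.005
    if close_to_each_other and far_from_truth:
        agree_but_wrong += 1

print(f"Duplicates 'agreed' yet were badly off in "
      f"{agree_but_wrong / trials:.1%} of trials")
```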
You thought of split testing????? Wow, you are amazing. The fact you set 20 clicks as the threshold shows you did not put much thought into things.
Dude - you seriously need to reread muchacho's post. You're completely failing to understand the technique he is using here.
I am????? LOL, ok, if you say so.... Odd that he forgot results on Mondays may be different than on Fridays..... I won't even bother to address the 100 other variables that make the post simply a rehash of what 100,000 others have posted. Sign me: Unsubscribed
I think you're really missing the point here, Sem Advance. Rather than just using 2 ad variations, he explains why he thinks it's a good idea to use 4 ad variations, 2 of which are exact copies of the first 2.

So instead of just 2 like this:

Buy Widgets
They're great
They're cheap

Buy Blue Widgets
They're blue
They're great

he explains why he thinks setting up 4 like this is a good idea:

Buy Widgets
They're great
They're cheap

Buy Widgets (a duplicate)
They're great
They're cheap

Buy Blue Widgets
They're blue
They're great

Buy Blue Widgets (a duplicate)
They're blue
They're great

He has an interesting method which has some use, but I generally disagree with using it.
From my point of view it's best to write a few different ads and let them run; then you can split test. If you split test out of the gate, you are usually testing the ads you like, and not the ads users will like.....