I understand your point, GuyFromChicago; it makes sense. It still seems clear to me, though, that the more ads you run against each other, the lower your chance of identifying the rightful control that the "second round" ads should compete against. I haven't accumulated as much data as you, and I've never run a campaign that accrues thousands and thousands of clicks per day, but what I have observed first-hand is that ad performance can vary widely from day to day. Although you'll almost certainly end up with one or two winners after 48 hours if every ad received thousands of clicks, there's no telling how the results will look 48 hours later; with that many clicks, the picture can change very quickly.

All I'm saying, therefore - and I'm sure you'll agree - is that the fewer ads there are, the more accurate the results can potentially be. Of course, if the first-round results are very clear, with one ad "flying" compared to the rest, then making a decision based on just 48 hours' worth of stats might be reasonable. I suppose it just depends on how close the first-round results are.

Anyway . . . I think we're going on a bit too much about something that isn't really that important. Still, nice discussion: it's made me think more thoroughly about split-testing methodology!

And, by the way . . . Robertpriolo, great contribution to the thread.
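Edit: for anyone curious, here's a quick Python sketch of the point about crowded tests. The CTRs, sample sizes, and trial counts are made-up numbers purely for illustration; it just simulates how often the truly best ad also *looks* best after a fixed number of impressions per ad, with two ads versus six:

```python
import random

def simulate_winner_pick(true_ctrs, impressions_per_ad, trials=2000, seed=42):
    """Estimate how often the ad with the highest true CTR also has the
    most observed clicks after a fixed number of impressions per ad."""
    rng = random.Random(seed)
    best = max(range(len(true_ctrs)), key=lambda i: true_ctrs[i])
    hits = 0
    for _ in range(trials):
        # Draw a click count for each ad (one Bernoulli trial per impression).
        clicks = [sum(rng.random() < ctr for _ in range(impressions_per_ad))
                  for ctr in true_ctrs]
        # Observed leader; ties break toward the lowest index.
        leader = max(range(len(clicks)), key=lambda i: clicks[i])
        if leader == best:
            hits += 1
    return hits / trials

# Hypothetical CTRs: one genuinely better ad (3.0%) vs. near-identical 2.5% rivals.
few = simulate_winner_pick([0.030, 0.025], 1000)
many = simulate_winner_pick([0.030, 0.025, 0.025, 0.025, 0.025, 0.025], 1000)
print(f"2 ads: true winner leads {few:.0%} of the time")
print(f"6 ads: true winner leads {many:.0%} of the time")
```

With more near-identical rivals in the test, the odds that random noise pushes one of them above the true winner go up, so a 48-hour snapshot is less trustworthy - which is all I was getting at about keeping the first round small.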