We recently ran a geo-targeting test for one of our clients in Atlanta and thought the process would make an excellent marketing-tips blog post. If you haven’t run a geo-targeting test before, here are some general guidelines on how to do it the right way.
- All tests are intended to be temporary, relatively short-term ‘changes’ designed to test a hypothesis.
- A parallel test procedure provides the most accurate and reliable results. (This entails running two completely identical items, e.g. campaigns, during the exact same time period, with the feature being tested as the only controlled difference.)
- A test itself is rarely, if ever, intended to improve performance per se.
- The results of a test are intended to provide ‘learnings’ that can be applied to make changes that are likely to improve performance.
- The results of a test would then dictate the direction of a more long-term change (or no change at all depending on the results) that would provide a long-term benefit for the account, certain campaigns, ad groups and/or keywords.
- During a test, test campaigns may perform worse overall than the original for several reasons that do not affect the ultimate success of the test itself. (Our geo-test was a true parallel test, as opposed to a less reliable before-and-after pseudo-test.)
  - Newly created campaigns have no Google keyword/campaign quality history.
  - Identical campaigns that contain the exact same keywords and have overlapping geo-targeting compete against each other in the AdWords auction (this is necessarily the case for a true parallel test).
- During a test, optimizations and/or changes to improve performance within test campaigns should be avoided so as not to introduce differences between the parallel campaigns or add ‘noise’ to the test.
- A test should be allowed to run long enough to collect a sufficient volume of data that a conclusion can be drawn from the analysis with statistical significance and confidence. (The required duration varies widely depending on the rate of data collection.)
- Finally, a test itself is deemed successful when a clear difference is demonstrated between the two parallel campaigns, ad groups, etc., regardless of the actual conclusion or the specifics of the ‘learnings.’
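As a rough illustration of the statistical-significance check mentioned above, a two-proportion z-test is one common way to decide whether the conversion rates of two parallel campaigns genuinely differ. This is a minimal sketch using only the Python standard library; the click and conversion numbers below are hypothetical, not figures from the client test.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    conv_a / n_a: conversions and clicks for the control campaign.
    conv_b / n_b: conversions and clicks for the test campaign.
    Returns (z, p_value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of equal rates.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 120 conversions on 4,000 clicks (control)
# vs. 160 conversions on 4,100 clicks (test campaign).
z, p = two_proportion_z_test(120, 4000, 160, 4100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below a chosen threshold (commonly 0.05) suggests the difference between the campaigns is unlikely to be noise; with low traffic, the test simply needs to run longer before this bar can be cleared.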
This report was created by Visiture’s Director of Search, Ben Tan.