The Learn-Accelerate-Scale Model: Firing Small Bullets Before Firing Cannonballs

Abacus is a digital performance agency driven by helping companies achieve their growth objectives, specifically through mobile campaigns. Denis Melnik is a Performance Team Supervisor, and his featured post highlights the importance of hypothesis testing for your digital campaigns.

Abacus has been preaching the value of data-driven performance advertising since its inception, which amounts to three solid years of managing and optimizing large paid social campaigns. Their insights are more relevant than ever at the start of Q4. As the year draws to a close, a hyper-aggressive bidding war erupts across all advertising platforms. It is the time of year marked by large advertising budgets being poured out, producing a staggering number of ad impressions, many of which will go unnoticed.

The cost of an eyeball, even a brief hesitation before scrolling past an ad in the feed, has become a significant price marketers pay these days. Only the strongest, most captivating, and most relevant creatives survive.

What if you could approach the ideation, setup, and optimization of your campaigns differently, stacking the odds in your favour? What if you could get more bang for your buck from the ads you create and push to platforms this holiday season and forever after? Here are some practical tips from the Abacus performance team, based on their proprietary method of launching and scaling paid social campaigns. This method is called "LAS", short for Learn-Accelerate-Scale.

Step 1: Start with a hypothesis in mind

Many advertisers rush to push ads through a Campaign Budget Optimization (CBO) setup without pre-screening the elements that go into campaigns: audiences, bidding, placements, and creative pieces. We suggest running small-scale split-test campaigns first, before pushing anything into the evergreen bucket where a platform's machine learning takes over optimization. Starting with 10-15 "hypothesis tests" is enough to cover the key parts of your campaigns.

A hypothesis is nothing more than a clearly defined statement that outlines the variables that will be tested, the elements that will be fixed, and the budget that will be committed to the test. Split tests are valuable because they force the learning stage to happen faster than in a regular, non-test campaign environment. Based on the winning variables, full-blown funnelled campaigns can then be architected, relying on some of the best automation tools a platform offers. Even with artificial intelligence working for you, don't be fooled: a lot of the work is still on you, not the machine.
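To make this concrete, here is a minimal sketch in Python of what a written-down hypothesis could look like as a structured record. The field names and example values are hypothetical, not part of the Abacus toolkit; the point is simply that one variable changes between buckets while everything else is pinned down, along with the budget and the expected outcome.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One split test: a single variable under test, everything else held fixed."""
    name: str                 # human-readable label for the test
    variable_under_test: str  # the one element that changes between buckets
    variants: list            # the buckets being compared
    fixed_elements: dict      # audience, placement, bidding, etc. held constant
    budget: float             # total spend committed to the test
    expected_outcome: str     # what you expect to happen, written before launch

# Example: testing creative type while audience, placement, and bidding stay fixed
h = Hypothesis(
    name="Creative type test - prospecting",
    variable_under_test="creative_type",
    variants=["static", "video_15s", "carousel"],
    fixed_elements={"audience": "1% lookalike - purchasers",
                    "placement": "feed_only",
                    "bid_strategy": "lowest_cost"},
    budget=1500.0,
    expected_outcome="Video beats static on CPA by 20%+",
)
print(h.name, h.variants)
```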

Step 2: Have a game plan before you launch your tests

Quite often, we dive into exploring how new ad elements will work without thoroughly thinking through the expectations for the test or what to do after it is completed. It is important, however, to have a game plan for when all your initial ideas are tested and results are obtained. Here are 5 critical questions to ask before you launch your experiments:

  1. What is your expected result from the test? What do you expect to happen at the end of the test? How many conversions are you expecting based on historical conversion rates and the budgets you are setting?
  2. How much will it cost you to get the 5 key conversions you are optimizing for in the test? The target number could be much higher than 5 if conversions are inexpensive and/or based on high-frequency events, such as landing page views or video views.
  3. Which best performing fixed variables will you add to your test? In other words, what variables will remain unchanged across all testing buckets?
  4. How will you handle poor performance? Should you set up an automatic rule to pause ad sets or ads if conversions do not come in at the expected rate? How will you minimize your losses and pivot from there?
  5. What will you do with a winning variable? Which campaign or ad set will you add it to? How will you scale that variable as part of ongoing campaigns?

These are all rather technical questions that require some thinking before launching test campaigns. We encourage you to always go through them so that you are clear about what your test is intended to do for you. If you visualize success in this much detail, you are more likely to achieve it or, at the very least, not be caught by surprise if the campaigns do not perform according to your expectations.
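As a back-of-the-envelope illustration of questions 1 and 2, here is a small Python sketch that estimates expected conversions and the cost of the first 5 optimization events from historical averages. The numbers are invented for the example; swap in your own historical CPM, CTR, and conversion rate.

```python
# Hypothetical planning numbers - replace with your own historical averages.
budget = 500.0   # test budget per bucket, in dollars
cpm = 12.0       # historical cost per 1,000 impressions
ctr = 0.012      # historical click-through rate (1.2%)
cvr = 0.03       # historical click-to-conversion rate (3%)

impressions = budget / cpm * 1000
clicks = impressions * ctr
expected_conversions = clicks * cvr
cost_per_conversion = budget / expected_conversions

print(f"Expected conversions: {expected_conversions:.1f}")
print(f"Cost per conversion:  ${cost_per_conversion:.2f}")
print(f"Cost of first 5 conversions: ${cost_per_conversion * 5:.2f}")
```

With these hypothetical inputs, the test would deliver roughly 15 conversions at about $33 each, so the budget comfortably covers the first 5 optimization events; if your own numbers do not, the budget or the test design needs to change before launch.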

Step 3: Focus on tests that matter

Yes, there are hundreds of variables you could test in your campaign. While some elements, such as background image colours, may lead to better performance, we suggest spending more time on the elements of the campaign setup that can deliver more than marginal gains in volume or reductions in CPA. Based on our experience of running 1,000+ tests this year, here are the areas where optimization pays off the most:

Landing pages

Test between home, collections, lead form, and product pages to determine where conversion rates are highest. Depending on the conversion event at each stage of the customer journey, you may find multiple landing pages that each do the job well. Do not rely on a single landing page, even if the person who designed and coded it begs you. It is better to make decisions based on data-driven insights rather than gut feeling or personal preference.
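One simple way to keep that decision grounded in data is a two-proportion z-test on the conversion rates of two landing pages. The sketch below is an illustration, not part of the LAS method, and the page names and traffic numbers are made up.

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is the conversion rate gap between two pages real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical traffic: product page vs. collections page
p_a, p_b, z, p = z_test_two_proportions(conv_a=120, n_a=4000, conv_b=95, n_b=4100)
print(f"Product page CVR: {p_a:.2%}, collections page CVR: {p_b:.2%}")
print(f"z = {z:.2f}, p-value = {p:.3f} -> significant at 5%? {p < 0.05}")
```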

Creative types and messaging

This is easily the most testable of all elements, bounded only by your creative team's resources and time. Start by testing the elements you already have, giving them all an equal chance to perform under the stringent holdout-group conditions of split tests. Put your top static, video, carousel, or collection ads head-to-head and let the best one rise to the top. Then do the same for your short, mid, and long-form videos. Messages and ad copy nuances can be reserved for Dynamic Creative Optimization (DCO) testing, but the big differences in performance should become apparent after your test campaigns.

Regional differences (rule-based segmentation in the U.S.)

Overlooked by many, the United States offers a large geography with pockets of users in different states that yield different CPA metrics. How to approach it? Rank your historical performance by region using metrics such as CTR, CPM, cost per action, or a conversion rate metric (for example, the percentage of page visitors who reached the cart). Put a few lists together based on the metrics that matter to your objectives and test the geographies against each other, pairing them with your top lookalike or custom audiences. We often see huge differences in performance when states are divided into chunks; nationwide targeting is reserved for retargeting-level campaigns.
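Building those ranked lists can be as simple as sorting historical performance by the metric tied to your objective. The state-level numbers below are invented for illustration only:

```python
# Hypothetical historical performance by state: spend, conversions, clicks, impressions
regions = {
    "CA": {"spend": 9200, "conversions": 310, "clicks": 14800, "impressions": 1_150_000},
    "TX": {"spend": 7400, "conversions": 198, "clicks": 11200, "impressions": 980_000},
    "NY": {"spend": 8100, "conversions": 290, "clicks": 12600, "impressions": 1_020_000},
    "FL": {"spend": 5600, "conversions": 120, "clicks": 9100,  "impressions": 870_000},
}

def cpa(stats):
    return stats["spend"] / stats["conversions"]

# Rank states by CPA (lowest first) to build the top-performing geography list
ranked = sorted(regions.items(), key=lambda kv: cpa(kv[1]))
for state, stats in ranked:
    ctr = stats["clicks"] / stats["impressions"]
    print(f"{state}: CPA ${cpa(stats):.2f}, CTR {ctr:.2%}")
```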

Audiences: seed lists for lookalikes

Website visitors are not necessarily the best group of people to build a lookalike audience from. It is much more effective to build a lookalike audience based on a list of users who have actually engaged with your content. For example, you can create buckets of lookalike audiences based on engagement signals on your website (website conversion events), in video ads (video views), and on social pages (visits or engagement with posts). Test between audience types (switching lookalike sources) and within audience buckets by changing the lookalike sizes.
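One way to structure such a test is to enumerate the source-by-size combinations up front as a test plan. This is only a sketch with hypothetical seed names; it generates a naming plan for ad sets and does not call any ad platform API.

```python
from itertools import product

# Hypothetical lookalike seed sources and audience sizes to test against each other
seed_sources = ["purchase_events", "video_viewers_75pct", "page_engagers"]
lookalike_sizes = ["1%", "3%", "5%"]

test_plan = [
    {"ad_set": f"LAL {size} - {source}", "seed": source, "size": size}
    for source, size in product(seed_sources, lookalike_sizes)
]

for row in test_plan:
    print(row["ad_set"])
```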

If you have read Jim Collins's "Great by Choice", then you are familiar with the concept of "firing small bullets before firing cannonballs". The testing methodology we have outlined is exactly that. The good ones may use some of it, but the great ones use all of it, all the time, creating new and better hypotheses to test. The CPM spike season is upon us, and now you have been warned.

Start firing those tests today.

– Denis
