During the holiday season, there’s an extra focus on the naughty and nice list.
Successful campaigns get access to additional resources, while failed initiatives pivot or retire.
This month’s question gets to the heart of digital marketing optimization and scale. Garland from Orlando asks:
“At what point do you consider a campaign test as failed? e.g. $5,000 spent on data and little return on spend.”
In this post, we’ll dig into success/failure signals and unpack how to establish them for your brand.
This question invites a lot of variables, so we’ll do our best to tackle the most common ones.
Setting Up Reasonable Tests
Before initiating any digital marketing test, it’s really important to set success and failure measures.
The most important foundational step is confirming your absolute knowns (i.e., do you trust your conversion tracking, are your form-fills working, is your sales team solid, etc.).
If these foundational items are not correctly set, it won’t matter how well the variables you’re testing perform.
This is why it’s crucial to bake at least one to two months into account setup.
That buffer lets you clear learning periods and ensures your performance data reflects true success.
It’s also vital that tests only change one variable at a time.
If you set out to test everything at once, you’ll struggle to draw definitive conclusions about whether each variable had a positive or negative impact on your campaigns.
Finally, it’s important to note that all digital ad networks have different learning periods and rules of engagement to effectively communicate with the algorithm.
For example, Google requires a minimum of five days, whereas Facebook (Meta Ads) requires meeting a conversion threshold.
Defining Successes And Failures
Once you’ve set up your foundational conditions, you can begin to establish what success and failure look like.
If you’re testing for improved conversion rate (i.e., conversion rate optimization, or CRO), the tests will likely focus on the following levers (a quick math sketch follows this list):
- Landing pages: Do they inspire more, fewer, or the same number of engaged users?
- Ads: Do they have a healthy click-through rate (CTR) relative to their conversion rate?
- Targeting choices: Does the group of people targeted represent better, worse, or the same conversion rate and value?
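To make those ad and targeting levers concrete, here’s a minimal sketch, using made-up numbers rather than benchmarks, of how CTR and conversion rate compare between a test variant and its control:

```python
# Illustrative only: hypothetical numbers, not performance benchmarks.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate as a percentage of impressions."""
    return 100 * clicks / impressions

def conversion_rate(conversions: int, clicks: int) -> float:
    """Conversion rate as a percentage of clicks."""
    return 100 * conversions / clicks

# Control ad vs. test ad (hypothetical data).
control = {"impressions": 20_000, "clicks": 500, "conversions": 20}
variant = {"impressions": 20_000, "clicks": 650, "conversions": 21}

for name, d in [("control", control), ("variant", variant)]:
    print(
        f"{name}: CTR {ctr(d['clicks'], d['impressions']):.2f}%, "
        f"conv. rate {conversion_rate(d['conversions'], d['clicks']):.2f}%"
    )

# A higher CTR that doesn't carry through to conversion rate can signal
# an ad that attracts clicks but doesn't prequalify users.
```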
Return on ad spend (ROAS) tests will focus on the following choices (a worked example follows this list):
- Auction price: Are the auctions the campaign enters conducive to better, worse, or the same ROI?
- User journey: Is the user being guided in a way that lends itself to higher, lower, or the same conversion value?
- Creative: Does the creative help prequalify customers better, worse, or the same as before?
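ROAS itself is simply conversion value divided by ad spend, so a quick worked example (hypothetical figures, assuming you trust your value tracking) looks like this:

```python
# Hypothetical figures for illustration; plug in your own tracked values.

def roas(conversion_value: float, ad_spend: float) -> float:
    """Return on ad spend: conversion value attributed to the campaign / spend."""
    return conversion_value / ad_spend

control_roas = roas(conversion_value=12_000, ad_spend=5_000)  # 2.4
test_roas = roas(conversion_value=9_500, ad_spend=5_000)      # 1.9

print(f"Control ROAS: {control_roas:.2f}, Test ROAS: {test_roas:.2f}")

# If the test's ROAS sits below both the control and your break-even target
# after the learning period, that's evidence the lever you tested failed.
```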
Testing a new channel requires slightly different considerations:
- Ease of maintenance: Can you reasonably build and maintain a campaign on the new channel, or will it require completely different resources?
- Market value: Does this channel have a high concentration of your best customers, or is it new ground?
- Budget: Have you allocated enough budget for the channel?
- Target: Is your target audience on this channel?
You’ll want to give any initiative at least 60 days to prove itself; however, if there are clear signs of failure, you’ll want to adjust sooner.
Clear Signs Of Failure
The following should be taken as clear signs of failure in accounts (a simple check sketch follows this list).
- The campaigns can’t spend their budget after more than five days.
- Conversions in the account aren’t translating to quality leads/sales.
- Spend spikes are much higher than normal spend pacing.
- Variables being tested yield worse results than the control.
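If you want to formalize these checks, here’s a rough sketch. The five-day window comes from the list above; the spike multiplier and lead-quality bar are assumptions you should replace with your own account’s norms:

```python
# A rough failure-signal checklist; thresholds marked "assumed" are
# placeholders to adapt to your own account, not universal rules.

def failure_signals(
    days_without_spend: int,
    qualified_leads: int,
    conversions: int,
    daily_spend: float,
    normal_daily_spend: float,
    test_conv_rate: float,
    control_conv_rate: float,
) -> list[str]:
    signals = []
    if days_without_spend > 5:
        signals.append("Campaign can't spend after more than five days.")
    if conversions and qualified_leads / conversions < 0.25:  # assumed bar
        signals.append("Conversions aren't translating to quality leads.")
    if daily_spend > 2 * normal_daily_spend:  # assumed spike multiplier
        signals.append("Spend is spiking well above normal pacing.")
    if test_conv_rate < control_conv_rate:
        signals.append("Test variable is underperforming the control.")
    return signals

# Hypothetical account snapshot that trips all four signals.
print(failure_signals(
    days_without_spend=6, qualified_leads=2, conversions=20,
    daily_spend=450.0, normal_daily_spend=150.0,
    test_conv_rate=0.021, control_conv_rate=0.034,
))
```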
Final Takeaways
It’s easy to feel like any spend that doesn’t result in revenue is waste – but it’s never a waste if you’re learning something.
Make sure your foundational data points are established, and honor your original success/failure signals.
Have a question about PPC? Submit via this form or tweet me @navahf with the #AskPPC hashtag. See you next month!