Lenette is a Senior User Acquisition Expert at Inkitt, a mobile application that publishes novels and books from indie authors. She loves exploring data and digging into the nuts & bolts of how UA works. She has worked across a wide range of markets, including entertainment & kids apps, casual, puzzle, mid-core, and hardcore games. With nearly ten years of experience in marketing, Lenette helps Inkitt grow globally using cutting-edge techniques developed for a post-IDFA world.
Learn more about Mobile Hero Lenette.
Marketing is not an exact science. Performance marketing has made vast swaths of new data available, but the best UA Managers still understand how to balance data analysis with their intuition. The uncertainties caused by IDFA deprecation, COVID-19, increased competition, and constantly changing ad network algorithms make this balance more important than ever. As marketers, we frequently have to find innovative ways to drive installs when the data we have is incomplete or poorly structured. In this article, I walk through a challenging scenario and explain how I balanced data and intuition to drive results.
Picture This Scenario
Picture managing an unorthodox Facebook account that held a lot of ad spend. There were hundreds of identical campaigns running in parallel, overlapping, and hand-duplicated every day: 10x, 20x, 30x. Each campaign had a tiny budget and was stuck in the learning phase, never living long enough to become a sizable, stable campaign. Each campaign would be spun up with a dozen different creatives, but even the best ones would only get ten installs a day. We also had messy attribution that made it unclear which campaign or creative drove which individual install.
Stuck in a Sticky Situation
Surprisingly, the campaign was working and hitting ROAS targets, at least initially. But daily performance was volatile, and we were afraid of a "house of cards" situation. The snowballing duplications took up too much ad spend, but restructuring was risky: a restructure might take weeks to optimize properly and hit revenue even harder in the meantime.
We were sitting on a time bomb, so how do you defuse the situation? By diversifying into other networks. We had to reduce our reliance on Facebook and acquire clean data from another network. Then, by growing a leaner Facebook structure, we could gradually phase out the problematic duplicated ad spend.
Drawing Conclusions From Inconclusive Data
In this case, the best way to look at the data we had was to consolidate it into coarse-grained buckets and analyze them as one big campaign with country/language drill-downs. When you do this, all post-install metrics will be off, so you have to take what you see with a grain of salt.
Despite our reservations, we could still reliably identify the top and bottom 20% of performers. With that basic information, we relied on first principles to form our hunches. To glean insights into creative performance, we applied those hunches to set up tests in other networks. I'll go into the details below.
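To make the consolidation concrete, here is a minimal sketch of the approach in pandas. The column names and numbers are illustrative placeholders, not Facebook's actual report schema: collapse the hand-duplicated campaigns into one "big campaign" per country/creative bucket, recompute the key metrics, and flag the top and bottom 20%.

```python
import pandas as pd

# Hypothetical export of near-identical, duplicated campaigns.
raw = pd.DataFrame({
    "campaign":   ["us_v1", "us_v1_copy", "de_v1", "de_v1_copy", "us_v2", "de_v2"],
    "country":    ["US", "US", "DE", "DE", "US", "DE"],
    "creative":   ["A", "A", "B", "B", "C", "D"],
    "spend":      [50.0, 40.0, 30.0, 35.0, 20.0, 25.0],
    "installs":   [10, 8, 12, 14, 2, 9],
    "revenue_d7": [60.0, 50.0, 20.0, 25.0, 5.0, 30.0],
})

# Coarse-grain: collapse duplicates into one bucket per country x creative,
# then recompute key metrics on the aggregate.
buckets = raw.groupby(["country", "creative"], as_index=False)[
    ["spend", "installs", "revenue_d7"]
].sum()
buckets["cpi"] = buckets["spend"] / buckets["installs"]
buckets["roas_d7"] = buckets["revenue_d7"] / buckets["spend"]

# Flag the top and bottom 20% by ROAS. With messy attribution these
# labels are directional, not exact.
hi = buckets["roas_d7"].quantile(0.8)
lo = buckets["roas_d7"].quantile(0.2)
buckets["flag"] = "middle"
buckets.loc[buckets["roas_d7"] >= hi, "flag"] = "top 20%"
buckets.loc[buckets["roas_d7"] <= lo, "flag"] = "bottom 20%"
print(buckets[["country", "creative", "cpi", "roas_d7", "flag"]])
```

Note that post-install metrics like `roas_d7` are only directional here; the point of the buckets is to surface the clear winners and losers, not to trust the exact numbers.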
What Are Your Guiding North Stars?
In cases where your data is a big ball of spaghetti, looking at one compound metric will not be enough. You have to look at contextual metrics to get a clear picture.
Key Metrics: ROAS, CPI, IPM
Contextual Metrics: CTR, CVR, eCPM, Payer Rates
Always understand what your metrics mean and how to adjust them. ROAS is a broad term with dozens of individual levers you can move to adjust it. Depending on your application, some levers may be easier than others to change:
ROAS = ARPU / CPI
ARPU = ARPPU * Payer Rate
CPI = eCPM / IPM
Different factors affect Payer Rate, including the type of users acquired. It might be worthwhile to pay a greater CPI to get users with a much higher Payer Rate.
Boosting ARPPU is easier in some categories of games and applications than in others. Coordinating ad targeting, creatives, and new-user bundles can have an outsized impact on ARPPU, and thus on ARPU and ROAS.
When eCPM is high, IPM needs to improve to keep CPI steady. For instance, top-performing creatives could see lower-than-usual IPM during Halloween, and spooky iterations of them could bag you a higher IPM, offsetting the seasonal price increase.
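The identities above can be written out as plain functions, which makes the lever logic easy to sanity-check. The numbers are illustrative, not real campaign data: since eCPM is cost per thousand impressions and IPM is installs per thousand impressions, dividing one by the other gives cost per install.

```python
def arpu(arppu: float, payer_rate: float) -> float:
    """Average revenue per user = revenue per payer x share of users who pay."""
    return arppu * payer_rate

def cpi(ecpm: float, ipm: float) -> float:
    """Cost per install = cost per 1,000 impressions / installs per 1,000."""
    return ecpm / ipm

def roas(arppu: float, payer_rate: float, ecpm: float, ipm: float) -> float:
    return arpu(arppu, payer_rate) / cpi(ecpm, ipm)

# Illustrative baseline: $2 ARPU against a $2 CPI, so ROAS = 1.0.
base = roas(arppu=40.0, payer_rate=0.05, ecpm=10.0, ipm=5.0)

# A seasonal eCPM spike to $12 would drop ROAS, unless IPM improves
# in step (say, via a seasonal creative iteration) and restores it.
spike = roas(arppu=40.0, payer_rate=0.05, ecpm=12.0, ipm=6.0)
```

The second call is the Halloween scenario in miniature: the 20% eCPM increase is fully offset by a 20% IPM lift, leaving CPI and ROAS unchanged.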
When looking at data, take the time you need to dive into the details. Let the metrics tell the story when you need additional context to guide your hunches.
Results on One Network Apply to Another
To diversify, we went to another network with no campaign overlap in order to start a clean campaign structure there. If you’re starting anew, it’s important to understand what data is salvageable from the previous data set.
We could not use the heavily duplicated campaign data as-is, but we could consolidate the duplicated campaigns from Facebook and treat them as one big campaign. Our goal was to find the top 10 creatives from Facebook for each segment. To do so, we sliced the aggregated data into smaller segments and interpreted it by language or country tier.
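Picking the top creatives per segment is a one-liner once the data is aggregated. This sketch uses hypothetical segment and creative names, and a top-2 cut instead of the top 10 just to keep the toy sample readable:

```python
import pandas as pd

# Hypothetical aggregated Facebook data (duplicates already collapsed).
agg = pd.DataFrame({
    "segment":  ["tier1_en", "tier1_en", "tier1_en", "tier2_de", "tier2_de"],
    "creative": ["A", "B", "C", "B", "D"],
    "roas_d7":  [1.4, 0.9, 1.1, 1.2, 0.7],
})

TOP_N = 2  # the article uses 10; 2 fits this small sample

# Rank creatives by ROAS and keep the top N within each segment.
top_per_segment = (
    agg.sort_values("roas_d7", ascending=False)
       .groupby("segment")
       .head(TOP_N)
)
print(top_per_segment)
```

The per-segment ranking matters because a creative can win in one country tier and lose in another, which is exactly the kind of signal worth re-testing on a new network.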
In a new network, we structured similar segments and tested creatives to get benchmarks on key and contextual metrics. At this point, it was possible to use the clean results and compare them with the Facebook data. Here is where your intuition helps. I asked myself:
- Are the winners more or less in line with what we see on Facebook? (In other words, how skeptical should we be about the Facebook data?)
- What might account for varied performance across networks? (Different algorithms/ad formats/network traffic types are more/less efficient at finding payers. Payers may have slightly higher/lower ARPPU.)
- Are there similar winners across networks? (If you see something working in multiple networks, trust your hunches. They’re probably replicable.)
- Where can I experiment? (Experiment with everything the network has to offer. Use A/B tests wherever you can and reduce overlaps.)
Once we acquired good data from cleaner setups, we could bring it back to the Facebook account. This is a good general takeaway—kickstart a good structure elsewhere, and bring it back to fix a structure that’s harder to work with.
Creatives Are the Biggest Similarity Across Networks
Overall, here’s what I learned from applying my hunches. Generally speaking, your top creative performers are likely the top performers across all networks, but some adaptation might be necessary for niche ad formats. Experiment with creatives on a low-cost network or campaign and collect learnings that will translate to other premium networks. Creative makes or breaks most campaigns, so you can never optimize too much. Be sure to apply your learnings quickly, even if they’re inconclusive.
If a winning creative doesn't perform immediately, consider adapting it for different ad formats: for example, Facebook App Install ads vs. Taboola content marketing ads vs. TikTok ads. The ad experience is different, so you can't blindly copy and paste. Imitate the feel and the hook for the user.
How to Choose Alternative Networks for Creative Testing
Importantly, Facebook does not have to be the only network for creative testing. Top creatives should be pretty similar across different networks, and the learnings should be transferable. I decide if a network is worthwhile for creative testing in the following manner:
- Use the top 10 creatives on Facebook to get started.
- Look at how metrics for your best-performing creatives evolve over time; you're looking for signs of stability. For example, Install Rate, Payer Rate, or IPM on the best creatives should settle into stable numbers and show consistent performance over a week or two. If your creatives perform consistently there, the network is likely a good candidate for creative testing.
- Check whether the network lets you run A/B tests.
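The stability check above can be made mechanical with nothing but the standard library. This is a rough heuristic, not an industry-standard test: treat a metric as stable if its coefficient of variation (standard deviation over mean) stays under a threshold you choose. The daily IPM figures and the 0.15 cutoff are illustrative assumptions.

```python
import statistics

def is_stable(series: list[float], max_cv: float = 0.15) -> bool:
    """Heuristic: a metric is 'stable' if its coefficient of variation
    (stdev / mean) over the observation window is below max_cv."""
    return statistics.stdev(series) / statistics.mean(series) <= max_cv

# Hypothetical daily IPM readings for one creative over two weeks.
daily_ipm = [4.8, 5.1, 5.0, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1, 4.9, 5.0, 5.2, 4.8, 5.1]

print(is_stable(daily_ipm))         # steady readings pass the check
print(is_stable([2.0, 9.0, 3.0, 8.0]))  # volatile readings fail it
```

A week or two of daily readings is usually enough to separate genuinely consistent creatives from ones that are still riding early-campaign noise.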
Grain of Salt
If your campaigns need too much intuition, the strategy is probably ineffective or broken. Intuition can go a long way, but it is more effective to build a campaign structure that gives you consistent insights without resorting to hunches unnecessarily. Keep your campaign structure clean, lean, and organized; your results are only as good as your setup. When you get lost, go back to first principles. Use them to form hypotheses, then test and iterate. Have hunches, test them, and keep iterating.