Demystifying Cohort KPIs, part 2: Analyzing revenue performance
Cohort analysis helps teams understand how user behavior changes over time when users are segmented by acquisition source, install date, or platform. In Part I of this series, we covered the cohort key performance indicators (KPIs) of retention and sessions, which measure how often users return and how they interact with an app after install. These metrics make it possible to identify drop-off points, analyze user stickiness, and understand behavior across the early stages of the lifecycle.
In this second installment, we’re focusing on revenue KPIs: showing how much users spend, how spending evolves after install, and how revenue accumulates over time, so teams can easily assess when acquisition efforts begin to pay off. Just as retention KPIs reveal who stays, revenue KPIs reveal which users generate value and when they do so.
In this article, we explain how each revenue KPI is calculated, how to interpret it, and how it supports strategic decisions in user acquisition (UA) and monetization planning.
Why revenue KPIs matter for app growth
Long-term app business growth depends on the value users generate after installing. Revenue KPIs measure this value by linking user behavior, such as purchases, subscriptions, or ad interactions, to actual income. This helps teams understand how engagement translates into business outcomes.
These metrics become especially useful in cohort analysis once early engagement patterns emerge. At this stage, marketers can evaluate which acquisition channels, product experiences, or creative variations deliver the most valuable cohorts. These insights inform strategic decisions around budget allocation, UA bidding, and revenue forecasting.
Beyond tracking total revenue, revenue KPIs also reveal how that revenue is distributed within a cohort. They indicate whether monetization is driven by a few high spenders or spread more evenly across the user base, which is critical for assessing revenue sustainability and growth potential.
When revenue metrics like lifetime value (LTV) or average revenue per user (ARPU) shift, they can also help teams identify the cause. Changes may reflect improved retention, higher average spend, or better conversion rates. Understanding these patterns allows marketers to pinpoint what’s driving performance and where further optimization is needed.
In the following sections, we’ll explore how to calculate and use these revenue KPIs effectively.
How revenue events power KPI analysis
Revenue KPIs, such as LTV, ARPU, and return on ad spend (ROAS), depend on accurate monetization data. This data comes from in-app events that record revenue when users make purchases, subscribe, or engage with ads. Without properly configured events, it’s not possible to calculate these metrics reliably at the cohort level.
With Adjust, revenue events are implemented as custom in-app events. For purchases, this means triggering an event that includes a revenue value and a transaction ID to prevent duplicate reporting. Purchase verification, if enabled, adds a layer of fraud prevention by confirming transactions with the app store. While this setup is specific to Adjust, the broader principle applies across attribution platforms: revenue must be tracked at the event level to enable meaningful post-install analysis.
Each monetization model requires its own tracking logic. In-app purchases are typically captured with revenue events. Subscription apps log trial starts, renewals, and cancellations through predefined event types. Ad revenue is collected through integrations and may be reported at the user or impression level, depending on the setup.
When configured correctly, these events provide the foundation for measuring user value, comparing performance across campaigns, and analyzing monetization behavior over time. Whether you're tracking day 7 LTV or evaluating how revenue is distributed across a cohort, every insight depends on having accurate, consistent event data in place.
Revenue tables: A closer look at user monetization
In short, revenue tables are used to display the amount of revenue a cohort generates over time, segmented by install date and indexed by the day after install (DAI). This structure helps teams analyze monetization patterns across the user lifecycle and compare trends across cohorts.
To illustrate, consider a cohort of users who all installed the app on the same day. The table below shows the total revenue this cohort generated on each day after install, making it easy to see when spending occurs:
In this example, most revenue is generated in the first two days, with minimal activity after day 4—a pattern common in apps with early monetization models.
While helpful in visualizing when revenue is generated, raw totals have limitations. They don’t reveal how many users contributed, whether a few high spenders drove the revenue, or if all users have had time to monetize. For example, two cohorts might show the same total revenue by day 7, but differ significantly in size, resulting in very different per-user value.
To draw meaningful conclusions, this data must be normalized. That’s where per-user metrics, such as ARPU, come in.
Average revenue per user (ARPU)
ARPU measures the average revenue generated per user by dividing total cohort revenue by the number of users who have reached a given day after install. This normalization enables the comparison of monetization performance across cohorts, regardless of their size.
In the formulas below, n is a variable that stands for a specific DAI. For example, if n = 0, the formula calculates the value for day 0; if n = 7, it refers to day 7 after install.
Formula:
ARPU (DAI n) = revenue on DAI n ÷ cohort size on DAI n
For example, let’s use a cohort of three users who generated the following daily revenues:
This table illustrates how ARPU evolves over time, reflecting both revenue growth and cohort attrition. For example, ARPU on day 2 is higher than on day 0 because there are fewer users, even though total revenue is lower.
ARPU is particularly useful for evaluating UA efficiency. By comparing ARPU across cohorts from different campaigns or channels, marketers can identify which sources deliver higher-value users, independent of install volume. Daily ARPU shows when value is created, while cumulative metrics reveal how value builds over time.
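To make the calculation concrete, here's a minimal Python sketch. The revenue and cohort-size figures are hypothetical, chosen only to show how ARPU can rise on a later day even when total revenue falls:

```python
# Hypothetical example: total cohort revenue per day after install (DAI)
# and the number of users who reached each DAI. Figures are illustrative.
daily_revenue = {0: 12.0, 1: 6.0, 2: 9.0}
cohort_size = {0: 3, 1: 3, 2: 2}

def arpu(dai: int) -> float:
    """ARPU (DAI n) = revenue on DAI n / cohort size on DAI n."""
    return daily_revenue[dai] / cohort_size[dai]

print(arpu(0))  # 4.0
print(arpu(2))  # 4.5 -> higher than day 0 despite lower revenue,
                #        because fewer users reached day 2
```

Day 2 revenue ($9) is lower than day 0 revenue ($12), yet ARPU is higher because the divisor (users who reached that day) has shrunk from 3 to 2.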
Revenue total
The revenue total shows the total amount earned by a cohort on each DAI, capturing the cumulative sum of revenue from day 0 through that point. This reveals how value builds over the user lifecycle and whether monetization accelerates, slows, or plateaus.
For example, consider a cohort that generated the following daily revenue:
By day 2, the cohort has accumulated a total of $32 in revenue, combining purchases from day 0 and day 2. The total remains constant when no new revenue is added, making it easy to identify when monetization levels off.
This metric helps evaluate how quickly a cohort earns and identifies monetization patterns. Some cohorts may front-load revenue on day 0, while others increase gradually over time. These trends influence decisions around campaign pacing, budget allocation, and LTV modeling. When compared to the cost per install (CPI), the total revenue also helps determine how long it takes for a cohort to break even.
Finally, the revenue total provides the foundation for downstream KPIs, such as LTV, which normalizes total cohort revenue by user count to assess per-user value.
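Computationally, the revenue total is just a running sum over daily cohort revenue. The figures in this sketch are hypothetical but mirror the example above, with $32 accumulated by day 2:

```python
# Illustrative sketch: revenue total as a cumulative sum of daily revenue.
from itertools import accumulate

daily_revenue = [20.0, 0.0, 12.0, 0.0, 5.0]  # revenue on DAI 0..4 (hypothetical)
revenue_total = list(accumulate(daily_revenue))

print(revenue_total)  # [20.0, 20.0, 32.0, 32.0, 37.0]
# The total stays flat on days with no new revenue (DAI 1 and 3),
# which is what makes monetization plateaus easy to spot.
```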
Revenue total in cohort
Revenue total in cohort measures the cumulative revenue generated by users who have reached each specific day after install. Unlike the revenue total, which aggregates all revenue regardless of how long users have been active, this metric includes only revenue from users who have had time to reach that day. It adjusts for cohort maturity, enabling more accurate analysis of incomplete or early-stage cohorts.
Let’s look at a simplified example. Suppose a cohort of three users all installed the app on the same day. Their individual revenue contributions over time are as follows:
Each user can only generate revenue on the days they’ve reached. For example, only user 1 reaches DAI 5, while user 3 hasn’t progressed beyond DAI 1.
Now let’s compare the two cumulative metrics:
In the early days, all users reached the corresponding DAI, so both metrics match. But from DAI 4 onward, only a portion of the cohort has reached those days. The revenue total continues to accumulate from past purchases, while revenue total in cohort adjusts to include only users who are eligible on each day.
This distinction becomes critical when comparing cohorts at different stages. For example, if one cohort is only three days old, its DAI 7 revenue total will appear low simply because most users haven’t reached that point yet. The revenue total in cohort provides a maturity-adjusted lens for fairer comparison and more accurate interpretation.
Tip: Use this metric when evaluating early-stage performance, tracking time-based monetization trends, or modeling LTV curves to avoid skewed results.
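One way to sketch the difference in Python, with hypothetical user data shaped like the three-user example above (only user 1 reaches DAI 5, and user 3 stops at DAI 1):

```python
# Each user's revenue is keyed by the DAI it occurred on;
# max_dai is the furthest day that user has reached. All data is illustrative.
users = [
    {"revenue": {0: 5.0, 2: 4.0, 5: 3.0}, "max_dai": 5},  # user 1
    {"revenue": {0: 2.0, 3: 6.0},         "max_dai": 3},  # user 2
    {"revenue": {1: 1.0},                 "max_dai": 1},  # user 3
]

def revenue_total(dai: int) -> float:
    # Plain revenue total: all revenue through DAI n, from everyone.
    return sum(r for u in users for d, r in u["revenue"].items() if d <= dai)

def revenue_total_in_cohort(dai: int) -> float:
    # Only users who have reached DAI n contribute their revenue.
    return sum(
        r
        for u in users if u["max_dai"] >= dai
        for d, r in u["revenue"].items() if d <= dai
    )

print(revenue_total(1), revenue_total_in_cohort(1))  # 8.0 8.0 (everyone reached DAI 1)
print(revenue_total(5), revenue_total_in_cohort(5))  # 21.0 12.0 (only user 1 reached DAI 5)
```

The two metrics match while the whole cohort is still eligible, then diverge once some users haven't yet reached the later days.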
Lifetime value (LTV)
LTV shows the average revenue per user in a cohort by dividing the revenue total in cohort by the original cohort size.
Formula:
LTV (DAI n) = revenue total in cohort on DAI n ÷ cohort size
Let’s use an example. A cohort of three users generates the following revenue:
Here’s how LTV develops alongside revenue total in cohort:
At the beginning, all users had time to reach each DAI, so LTV reflects the whole cohort’s activity. As the cohort matures and fewer users reach later DAIs, the revenue total in the cohort reflects a smaller active group. Since LTV still divides by the original cohort size, the metric may temporarily decline before stabilizing once all users reach maturity.
LTV is a key metric for evaluating profitability. When compared to CPI, it shows whether a cohort is generating a return. For example, if the day 30 LTV is $3 and the CPI is $1, the ROAS is 300%. Additionally, many teams use a ratio of LTV to customer acquisition cost (CAC) of 3:1 or higher as a benchmark for sustainable growth, though the ideal ratio depends on the app's business model.
Over time, tracking how LTV accumulates helps teams forecast payback periods and assess revenue potential. In more advanced setups, predictive LTV (pLTV) models can project future user value based on early behavior, helping UA teams optimize spend and scale efficiently.
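A minimal sketch of the LTV and ROAS arithmetic, using figures that mirror the $3 LTV vs. $1 CPI example above (the revenue and cohort-size inputs are hypothetical):

```python
# LTV divides "revenue total in cohort" by the ORIGINAL cohort size;
# ROAS then compares that per-user value against acquisition cost.
def ltv(revenue_total_in_cohort: float, cohort_size: int) -> float:
    return revenue_total_in_cohort / cohort_size

def roas(ltv_value: float, cpi: float) -> float:
    return ltv_value / cpi  # 1.0 means the cohort has broken even

day30_ltv = ltv(revenue_total_in_cohort=9.0, cohort_size=3)
print(day30_ltv)                  # 3.0 -> $3 per user
print(roas(day30_ltv, cpi=1.0))   # 3.0 -> 300% ROAS
```

Note that the divisor is always the original cohort size, never the number of users still active, which is why LTV can dip temporarily while a cohort matures.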
Paying users lifetime value (PU-LTV)
Paying users lifetime value (PU-LTV) measures the average revenue generated by users who have made at least one purchase. It is calculated by dividing the revenue total in cohort by the number of paying users:
Formula:
PU-LTV = revenue total in cohort ÷ number of paying users
Unlike LTV, which includes all users, whether or not they spend, PU-LTV focuses only on monetized users. This makes it especially useful for understanding how revenue is distributed among those who contribute financially.
Comparing PU-LTV to LTV reveals the depth of monetization. A high PU-LTV, paired with a low LTV, often indicates a reliance on a few high-value users (or “whales”). A smaller gap suggests that monetization is spread across a broader portion of the cohort.
This metric is also valuable for segmentation and targeting. Teams can use PU-LTV to identify which channels or campaigns attract the highest-value spenders, refine lifecycle marketing strategies, and inform pricing or offer design decisions. Because it isolates revenue from actual contributors, PU-LTV is a key tool for diagnosing monetization performance and optimizing retargeting strategies.
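A small Python sketch of the LTV vs. PU-LTV comparison, using hypothetical per-user lifetime totals:

```python
# Lifetime revenue per user in a small cohort (illustrative figures).
# One "whale" drives nearly all of the revenue.
user_revenue = [30.0, 0.0, 0.0, 2.0, 0.0]

total = sum(user_revenue)                    # 32.0
paying = [r for r in user_revenue if r > 0]  # 2 paying users

ltv = total / len(user_revenue)  # averaged over ALL 5 users
pu_ltv = total / len(paying)     # averaged over paying users only

print(ltv, pu_ltv)  # 6.4 16.0
```

The wide gap between PU-LTV ($16) and LTV ($6.40) here is the "whale" signature described above: a handful of spenders carrying the cohort.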
Common pitfalls and best practices in revenue cohort analysis
Revenue KPIs are only meaningful when interpreted in the context of the overall business. Misinterpreting early signals or skipping key adjustments can result in flawed conclusions and missed opportunities. Below are the most common mistakes to avoid, along with best practices to help teams make smarter, more consistent decisions.
Don’t compare incomplete cohorts
If most users in a cohort haven't reached later days after install, metrics like revenue total or LTV will appear artificially low. This makes early-stage cohorts appear to underperform. Use revenue total in cohort to compare only users who've had time to contribute, and wait until a cohort has matured before drawing conclusions.
Normalize by user count
Raw revenue doesn’t reflect cohort size. Larger cohorts often generate more revenue simply due to scale, not user quality. Always use per-user metrics, such as ARPU or LTV, to compare performance across acquisition channels, regions, or campaigns.
Don’t rely only on short-term ROAS
Early ROAS metrics, such as day 3 or day 7, can be useful, but they don’t capture long-term value. Some apps monetize more slowly. Overweighting short-term results may lead you to kill campaigns that would have broken even later.
Always factor in acquisition cost
Revenue KPIs must be evaluated in conjunction with CPI or CAC to assess profitability. Comparing revenue to cost helps identify break-even points, optimize bids, and cut spend on underperforming segments before it compounds.
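In practice, finding the break-even point amounts to scanning the cumulative LTV curve for the first DAI that covers CPI. A rough sketch, with a hypothetical LTV curve:

```python
# Cumulative LTV on DAI 0..5 (illustrative values) vs. a $1.00 CPI.
ltv_by_dai = [0.3, 0.6, 0.8, 0.95, 1.1, 1.2]
cpi = 1.0

# First DAI where cumulative LTV covers acquisition cost, or None if never.
break_even_dai = next(
    (dai for dai, ltv in enumerate(ltv_by_dai) if ltv >= cpi), None
)
print(break_even_dai)  # 4 -> this cohort breaks even on day 4
```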
Use the right timeframes
Revenue maturity varies by app model. Gaming cohorts may peak by day 7, whereas subscription apps tend to take longer. Set benchmarks like day 7, day 14, or day 30 based on your typical user lifecycle to ensure fair comparisons and avoid false negatives.
Track trends, not just snapshots
Use ARPU to identify monetization spikes or drop-offs, and track revenue total to see how value builds. Layer these with in-app events (e.g., purchases, subscriptions, ad impressions) to understand what’s driving revenue and when. This helps tie user behavior to financial outcomes.
Turning revenue insights into strategic growth
Revenue KPIs are crucial for measuring how user engagement translates into business value. When combined with retention metrics, they offer teams a comprehensive view of cohort performance, enabling them to make smarter growth decisions.
Solutions like Adjust Datascape make these insights accessible in real time. With customizable cohort tables, flexible filters, and side-by-side comparisons, teams can track revenue accumulation by cohort, compare trends across campaigns, and identify what drives monetization at each stage, all in one place.
Coming up soon in Part 3, we’ll complete the series by exploring KPIs for event conversion. You’ll learn how to measure user journeys, analyze funnel progression, and use event-based cohorts to optimize in-app behavior and monetization outcomes.
Ready to take your mobile app performance to the next level? Request a demo to see first-hand how Adjust can enhance your cohort analysis and accelerate your app growth.