Why Most People Misunderstand ARPPU (and How to Think About It Causally)
- Nick Gavriil
- Oct 18
Causal inference isn’t just a sophisticated toolbox for statisticians; it’s a mindset that can be applied to every problem, including business metrics
The Problem
If you’ve ever tried to explain your product’s revenue model, you’ve probably used the comforting formula:
Revenue = Paying Users × ARPPU
It’s simple, elegant, and — unfortunately — misleading.
The illusion of independence
Many product managers and analysts treat Paying Users and Average Revenue Per Paying User (ARPPU) as independent levers.
“Let’s increase Paying Users through acquisition or onboarding.”
“Let’s increase ARPPU through pricing or upsells.”
The logic assumes that these two variables are orthogonal — that changing one doesn’t affect the other. But that assumption almost never holds.
Why ARPPU and Paying Users are not independent
Imagine you raise your price.
ARPPU might go up (each paying user now spends more).
Paying Users might go down (some users churn or refuse to convert).
Your total revenue could rise, fall, or stay the same — depending on which effect dominates. Yet the standard “Revenue = Paying Users × ARPPU” framing can’t capture this trade-off, because it hides the causal path that connects the two.
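To make the trade-off concrete, here is a minimal Python sketch with purely hypothetical numbers (an illustration, not a model of any particular product): the same style of price increase can leave revenue higher or lower depending on which effect dominates.

```python
# A minimal sketch with hypothetical numbers: the same price increase can push
# total revenue up or down, depending on which effect dominates.

def revenue(paying_users: int, arppu: float) -> float:
    """Revenue = Paying Users x ARPPU."""
    return paying_users * arppu

baseline = revenue(1_000, 10.00)    # $10,000

# Scenario A: the ARPPU gain outweighs the loss of paying users.
scenario_a = revenue(900, 12.00)    # $10,800 -> revenue rises

# Scenario B: churn dominates the ARPPU gain.
scenario_b = revenue(700, 12.00)    # $8,400  -> revenue falls

for name, value in [("baseline", baseline), ("scenario A", scenario_a), ("scenario B", scenario_b)]:
    print(f"{name}: ${value:,.0f}")
```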
A causal view of revenue
Let’s visualize what’s actually happening:
Price ───▶ ARPPU
  │
  └────▶ Paying Users

Price is a confounder: it influences both Paying Users and ARPPU, opening a backdoor path between them. If you naively analyze ARPPU trends without accounting for Price, you’ll mistake correlation for causation.
In causal terms, observing ARPPU under different price conditions doesn’t isolate the effect of user behavior; it mixes in the effect of pricing changes.
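One way to see the mixing is a small simulation. Everything below is assumed for illustration: the response curves, conversion rates, and spend distribution are made up, and `simulate` is a hypothetical helper, not a real API. Price drives both conversion and spend, so comparing ARPPU across the two price regimes picks up the price effect even though underlying user behavior never changes.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(price: float, n_users: int = 100_000) -> tuple[int, float]:
    """Simulate one cohort under a given price.

    Price sits on a path into both metrics: it lowers the chance of
    converting and raises how much each paying user spends.
    """
    # Made-up response curves, chosen only for illustration.
    p_convert = 0.10 * (10.0 / price)                      # conversion falls with price
    n_paying = int((rng.random(n_users) < p_convert).sum())
    spend = price * (1 + rng.exponential(0.3, n_paying))   # spend scales with price
    return n_paying, float(spend.mean())

for price in (10.0, 12.0):
    paying, arppu = simulate(price)
    print(f"price=${price:.0f}: paying users={paying:,}, "
          f"ARPPU=${arppu:.2f}, revenue=${paying * arppu:,.0f}")
```

In the output, ARPPU differs across the two regimes purely because of price, which is exactly the confounding the diagram describes.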
Example: a confounded intervention
Suppose you run an experiment where you increase prices by 20%.
Paying Users drop from 1,000 → 800
ARPPU rises from $10 → $13
If you look only at ARPPU, it seems like monetization improved. But total revenue went from $10,000 → $10,400 — a 4% gain that hides the fact that your conversion rate dropped by 20%.
The causal path “Price → Paying Users” explains the discrepancy. The observed increase in ARPPU is not purely behavioral — it’s driven by the price change that filtered out low spenders.
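Checking the arithmetic in a few lines makes the decomposition explicit (the numbers are the ones from the example above):

```python
# The numbers from the example above: a 20% price increase.
users_before, arppu_before = 1_000, 10.0
users_after, arppu_after = 800, 13.0

revenue_before = users_before * arppu_before    # $10,000
revenue_after = users_after * arppu_after       # $10,400

print(f"ARPPU:        ${arppu_before:.0f} -> ${arppu_after:.0f} ({arppu_after / arppu_before - 1:+.0%})")
print(f"Paying users: {users_before:,} -> {users_after:,} ({users_after / users_before - 1:+.0%})")
print(f"Revenue:      ${revenue_before:,.0f} -> ${revenue_after:,.0f} ({revenue_after / revenue_before - 1:+.0%})")
```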
Example: a clean intervention
Now suppose you keep prices constant and test a new onboarding flow that nudges free users to upgrade.
Onboarding ───▶ Paying Users

Here, ARPPU remains unaffected by the intervention. Any change in ARPPU now reflects genuine shifts in spending behavior — not price-induced bias.
This is a clean causal design: no backdoor paths, no confounding, and ARPPU can be safely used as a proxy for monetization quality.
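A sketch of what that looks like in an A/B setting, again with made-up parameters: price is held fixed in both arms, only the conversion probability differs, and `simulate_arm` is a hypothetical helper.

```python
import numpy as np

rng = np.random.default_rng(1)

PRICE = 10.0  # held constant in both arms

def simulate_arm(p_convert: float, n_users: int = 50_000) -> tuple[int, float]:
    """Simulate one experiment arm: price and spend behavior are fixed,
    only the conversion probability differs between arms."""
    n_paying = int((rng.random(n_users) < p_convert).sum())
    spend = PRICE * (1 + rng.exponential(0.3, n_paying))
    return n_paying, float(spend.mean())

control = simulate_arm(p_convert=0.10)     # old onboarding flow
treatment = simulate_arm(p_convert=0.12)   # new flow nudges more upgrades

for name, (paying, arppu) in [("control", control), ("treatment", treatment)]:
    print(f"{name:>9}: paying users={paying:,}, ARPPU=${arppu:.2f}, "
          f"revenue=${paying * arppu:,.0f}")
```

Because price never moves, any ARPPU difference you would measure in a real test of this shape reflects spending behavior rather than a pricing artifact.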
A practical framework
You can think of it like this:
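One rough way to encode that framework in code, purely as an illustration (the helper and its fields are invented here, not part of any library): before crediting an ARPPU movement to user behavior, check whether anything that changed also sits on a path into Paying Users.

```python
from dataclasses import dataclass

@dataclass
class MetricChange:
    paying_users_before: int
    paying_users_after: int
    arppu_before: float
    arppu_after: float
    price_changed: bool          # any pricing or packaging change in the window?
    mix_changed: bool = False    # e.g. a new market, channel, or plan-mix shift

def interpret(change: MetricChange) -> str:
    """Classify an ARPPU movement as a candidate behavioral signal or as confounded."""
    revenue_before = change.paying_users_before * change.arppu_before
    revenue_after = change.paying_users_after * change.arppu_after
    direction = "up" if revenue_after >= revenue_before else "down"

    if change.price_changed or change.mix_changed:
        return (f"Revenue is {direction}, but ARPPU moved alongside a pricing or mix "
                "change: treat it as descriptive, not causal, until the backdoor is closed.")
    return (f"Revenue is {direction} with price and mix held fixed: the ARPPU movement "
            "is a reasonable proxy for a shift in spending behavior.")

print(interpret(MetricChange(1_000, 800, 10.0, 13.0, price_changed=True)))
print(interpret(MetricChange(1_000, 1_050, 10.0, 10.4, price_changed=False)))
```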

The takeaway
ARPPU isn’t wrong — it’s just not causal. It’s a descriptive metric, not a diagnostic one. Treating it as an independent revenue lever can lead to false confidence and misguided optimizations.
Before you celebrate an increase in ARPPU, always ask:
> “What changed in the system that could have caused this?”
If that change affects both the number of paying users and how much they spend, you’re looking through a causal backdoor — and ARPPU alone won’t tell the real story.


