Glossary

Product Analytics Metrics Hierarchy

A product analytics metrics hierarchy is the structured framework that organizes a product's metrics from the highest-level north star metric through leading indicators to diagnostic metrics — enabling every team to understand how their daily decisions connect to the company's most important outcome, and to quickly identify which specific lever is driving changes in the north star.

How do SaaS companies identify and define their North Star Metric?

The North Star Metric (NSM) is the single metric that best captures the core value the product delivers to customers — and that, when it grows, is most reliably correlated with long-term business success. It occupies the peak of the metrics hierarchy because every other metric should explain movements in the NSM. Selecting the right NSM requires answering: what is the specific action customers take that, when they do it frequently and deeply, means they are receiving full value from the product? Good NSM examples: Slack uses "Messages Sent" (when teams send messages, they are communicating in Slack rather than email — the core value); Amplitude uses "Weekly Querying Users" (users actively extracting insights from the product); Zendesk uses "Tickets Resolved" (the core job of every support team using the platform). Bad NSMs: revenue (a lagging indicator, not a direct measure of value delivery); registered users (a vanity metric detached from actual usage); or page views (a proxy that can grow for the wrong reasons). The NSM should be understandable by every person in the company and directly improvable by multiple teams.
How should Product Ops design a metrics tree connecting north star to daily operational metrics?

A metrics tree (also called a driver tree or KPI tree) is a hierarchical diagram that shows mathematically how lower-level operational metrics combine to produce the north star metric. Construction process: start with the NSM at the top. Ask "what are the two or three factors that, when multiplied or summed, produce the NSM?" These become Level 2 metrics. For each Level 2 metric, ask the same question — what are the factors that produce it? These become Level 3 metrics. Continue until you reach metrics that specific teams can directly influence through their daily work. Example partial tree:

  NSM = Weekly Active Users × Features Used Per User
    Level 2a: Weekly Active Users = New Users Acquired × Week-1 Retention Rate
    Level 2b: Features Used Per User = Breadth of Features Adopted × Depth of Usage Per Feature
    Level 3a (under New Users Acquired): Trial Starts × Trial Activation Rate

Product Ops builds and maintains this tree in Notion or a BI tool, and uses it in every product review meeting to explain what is driving NSM changes: "NSM declined 5% this week — Level 2 analysis shows weekly active users is flat but features per user declined; Level 3 shows breadth adoption dropped specifically for Feature X."
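The construction and drill-down process above can be sketched in code. This is a minimal, hypothetical illustration (not a prescribed implementation): it models the example decomposition NSM = Weekly Active Users × Features Used Per User as a small tree, recomputes the NSM from leaf metrics, and compares two weeks level by level so the branch driving a change is visible. All node names and weekly values are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MetricNode:
    name: str
    op: str = "leaf"                    # "mul", "sum", or "leaf"
    children: list = field(default_factory=list)
    value: Optional[float] = None       # set only on leaf metrics

    def compute(self) -> float:
        """Roll leaf values up the tree using each node's operator."""
        if self.op == "leaf":
            return self.value
        vals = [c.compute() for c in self.children]
        if self.op == "mul":
            out = 1.0
            for v in vals:
                out *= v
            return out
        return sum(vals)

def build_tree(leaves: dict) -> MetricNode:
    """Build the illustrative partial tree from the text (leaf values supplied)."""
    wau = MetricNode("Weekly Active Users", "mul", [
        MetricNode("New Users Acquired", value=leaves["new_users"]),
        MetricNode("Week-1 Retention Rate", value=leaves["retention"]),
    ])
    fpu = MetricNode("Features Used Per User", "mul", [
        MetricNode("Breadth of Features Adopted", value=leaves["breadth"]),
        MetricNode("Depth of Usage Per Feature", value=leaves["depth"]),
    ])
    return MetricNode("NSM", "mul", [wau, fpu])

def diagnose(prev: MetricNode, curr: MetricNode) -> dict:
    """Walk both weeks' trees in parallel; report each node's relative change."""
    report, stack = {}, [(prev, curr)]
    while stack:
        p, c = stack.pop()
        pv, cv = p.compute(), c.compute()
        report[p.name] = (cv - pv) / pv if pv else float("nan")
        stack.extend(zip(p.children, c.children))
    return report

last_week = build_tree({"new_users": 1000, "retention": 0.45, "breadth": 4.0, "depth": 3.0})
this_week = build_tree({"new_users": 1000, "retention": 0.45, "breadth": 3.4, "depth": 3.0})
changes = diagnose(last_week, this_week)
# NSM fell ~15%; the report shows Weekly Active Users flat and the
# "Breadth of Features Adopted" leaf as the driver — the kind of finding
# the product-review narrative in the text is built on.
```

In practice the leaf values would come from a BI tool rather than literals, but the structure — operators at internal nodes, measured values at leaves, parallel traversal for diagnosis — is the essence of a driver tree.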

How do product teams use diagnostic metrics to identify root causes of north star movements?

Diagnostic metrics (also called "guardrail metrics" or "health metrics") sit below the NSM in the hierarchy and are investigated when the NSM moves unexpectedly. The principle: a north star change has a specific, identifiable driver that can be found by traversing the metrics tree. Effective diagnostic investigation follows the tree structure: if NSM declined, check Level 2 metrics — did the decline come from the user count branch or the usage depth branch? If Level 2 shows user count is the driver, check Level 3 — is the decline from acquisition (fewer new users) or retention (existing users churning faster)? If retention is the driver, check Level 4 — is the decline concentrated in a specific user segment, a specific product area, or a specific cohort from a specific acquisition period? By the time the investigation has traversed three or four levels, the root cause is typically specific enough to be actionable: "7-day retention for users who signed up via the Product Hunt launch declined from 45% to 28% compared to our standard cohort — the users from that campaign skew toward smaller teams who are hitting the free plan feature limit before they can fully activate." This specific finding enables a targeted response, not a broad unfocused improvement effort.
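The final drill-down step — checking whether a retention decline is concentrated in a specific segment — can be sketched as a simple comparison of per-segment rates. Segment names and rates below are hypothetical, echoing the Product Hunt example in the text.

```python
def largest_retention_drop(baseline: dict, current: dict) -> tuple:
    """Return the segment with the largest absolute retention decline
    (in percentage points) between a baseline and the current period."""
    drops = {seg: baseline[seg] - current[seg] for seg in baseline}
    worst = max(drops, key=drops.get)
    return worst, drops[worst]

baseline = {"standard_cohort": 0.45, "product_hunt_launch": 0.45}
current  = {"standard_cohort": 0.44, "product_hunt_launch": 0.28}
seg, drop = largest_retention_drop(baseline, current)
# seg == "product_hunt_launch"; drop is ~0.17 (17 points) — the decline is
# concentrated in one acquisition cohort, which is the actionable finding.
```

Real diagnostic tooling would segment along several dimensions at once (plan, team size, product area, acquisition channel), but the logic is the same: compare like-for-like cohorts and surface the largest divergence.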
