- Ambitious Mission Statements, Misaligned Execution
- How I Created the Impact Mindset
- Overuse of Usage Metrics as Success Criteria
- How Grand Ambitions Became Usage Obsessions
- The Pitfalls of Solely Prioritizing Usage
- Measurements of Success Beyond Usage and Usability
- Comprehensive Approach to Metrics for Your Team
- Chapter Recap
How Grand Ambitions Became Usage Obsessions
The formative years of the internet brought a transformative shift in the accessibility of information. A by-product of this was that content that publishers had traditionally nestled behind paywalls in newspapers and magazines found its way online, freely accessible to the masses. In response, many established entities adopted a hybrid approach, offering a selection of free articles to entice readers in hopes of redirecting them to their paid offerings. When this failed to boost sales, executives questioned subscription viability and let free content become the de facto standard, hoping to turn the resulting traffic into revenue another way. As the populace increasingly turned to the web for news, stalwarts like the Wall Street Journal and newcomers like BuzzFeed found themselves vying for attention, with website usage emerging as the primary metric for communicating their success.
Developing a business model around this new digital frontier, media companies began to realize that the ability to monitor user engagement offered a compelling proposition to advertisers, suggesting a direct correlation between eyeballs and advertisement spending.2 A self-perpetuating cycle emerged: heightened user engagement led to increased advertising budgets, facilitating the creation of even more content. Advertisers embraced this model, convinced of its superior return on investment (ROI) compared to traditional avenues. Usage growth became the gold-standard metric communicated within and across boardrooms, with the intent of garnering more advertiser dollars. This trajectory, while initially promising, harbored inherent flaws, which we'll examine soon. First, however, a deeper dive into the three systemic forces at play is essential: the default to the simplest metric to capture, a lack of trust, and the ascendancy of the advertisement model.
Defaulting to the Most Straightforward
The first factor is the allure of the usage metric: its simplicity. At its core, it is a binary signal of whether someone engaged. At scale, it shows how many people interacted, how long they engaged, and how frequently they returned. Measuring usage comes naturally to any product or service; teams require a clear picture of whether people are engaging to make fundamental business decisions. The digital realm made these metrics even more accessible. As digital products gained prominence, measuring adoption became synonymous with these readily available metrics.
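To make that simplicity concrete, the minimal sketch below computes those three counts, how many people showed up, how often, and for how long, from a toy event log. The table, its columns, and the values are invented for illustration and do not come from any particular analytics product.

```python
# A minimal sketch of the three basic usage measures described above:
# reach (how many people interacted), frequency (how often), and
# duration (how long). The `events` DataFrame and its columns are
# hypothetical stand-ins for whatever an analytics tool records.
import pandas as pd

events = pd.DataFrame({
    "user_id":    ["a", "a", "b", "b", "b", "c"],
    "session_id": [1, 1, 2, 2, 3, 4],
    "timestamp":  pd.to_datetime([
        "2024-03-01 09:00", "2024-03-01 09:12",
        "2024-03-01 10:00", "2024-03-01 10:05",
        "2024-03-02 18:30", "2024-03-02 20:00",
    ]),
})

# Reach: how many distinct people showed up at all.
active_users = events["user_id"].nunique()

# Frequency: sessions per user over the observed window.
sessions_per_user = events.groupby("user_id")["session_id"].nunique()

# Duration: time between the first and last event in each session.
session_length = (
    events.groupby("session_id")["timestamp"]
    .agg(lambda ts: ts.max() - ts.min())
)

print(active_users, sessions_per_user.mean(), session_length.mean())
```

Almost nothing else a team could measure is this cheap to compute, which is precisely why it becomes the default.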
Product analytics tools, from their early iterations to contemporary giants, have enshrined usage as a cornerstone. Again, this is due to the simplicity of the metric, along with these products' goal of getting customers to use their tool. Most of them onboard a new user by having them set up a usage-based dashboard. Even as these tools have evolved to offer a variety of data streams, usage remains the default starting point in product discussions. This default makes deeper exploration into the nuances of user engagement feel like a tremendous effort compared to what is already available.
Concurrently, two prevailing business philosophies emerged. The first is encapsulated by the adage coined by John Doerr, “measure what matters.” Although this mantra underscores the importance of quantifiable metrics in tracking progress and predicting outcomes, it commonly promotes a path of least resistance in practice. The result? A disproportionate focus on easily attainable metrics, with usage being the prime candidate in the technology space.
In parallel, developing a North Star Metric, a single metric that, when it increases, signals product growth, has become a common step in any product launch. Teams choose a metric they hope to see spike as they push out new features, hoping it ties back to positive business outcomes. Yet they commonly fall victim to the trap of defaulting to the most straightforward metric without asking whether that single metric captures something that moves the company closer to fulfilling its mission.
Together, these philosophies, when executed well, represent significant progress toward measuring the impact that products create; the problem is that they are commonly not executed effectively. Strategic planning sessions, often initiated with ambitious objectives such as improving the effectiveness of a product, tend to narrow their measurement scope when confronted with the limitations of available metrics; the ubiquity of usage metrics in analytics tools further reinforces this trend. Thus, the team scopes its vision around what can be measured rather than around valid quantifications of its aspirational goals. In cases where teams venture beyond usage, they typically gravitate toward usability metrics constrained by available tools, often defaulting to rudimentary customer satisfaction (CSAT) measurements or the polarizing NPS.
Measuring what really matters means finding the variables that connect to customer value, help form new habits of usage, inspire the team, and indicate long-term business success. This is even more important when selecting a single North Star Metric for the team to rally around. Airbnb might want people to spend more time searching in its app, but the real value it wants to maximize comes from nights booked. Tinder would like to see the number of swipes increase, but the real value comes from the number of matches or conversations on its platform. These metrics really do matter for product outcomes.
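As a rough illustration of the gap between the easy metric and the value-aligned one, the snippet below counts both from a hypothetical event stream loosely modeled on the Airbnb example. The event names and data are assumptions made for the sake of the sketch, not a reflection of how any of these companies instrument their products.

```python
# Contrasting a straightforward engagement count with a value-aligned
# North Star candidate. All event names and values are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id": ["a", "a", "a", "b", "b", "c"],
    "event":   ["search", "search", "booking_confirmed",
                "search", "search", "search"],
})

# The straightforward metric: raw engagement with the product.
searches = (events["event"] == "search").sum()

# The value-aligned candidate: the outcome users actually came for.
bookings = (events["event"] == "booking_confirmed").sum()

print(f"searches: {searches}, bookings confirmed: {bookings}")
```

Both numbers are trivial to compute; the difference lies entirely in which one the team chooses to rally around.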
Unfortunately, when companies focus on the wrong metrics, leadership still emerges from planning sessions harboring misplaced confidence in faulty North Star Metrics and the goals established around them, equating high usage or usability scores with fulfilling customer desires. But what kind of success have they scoped? When leadership sets usage-based goals, it communicates to teams that they should focus on outputs above all else. Employees are mandated to prioritize improvements that enhance engagement, ultimately perpetuating the myth that usage equates to solving a user problem.
Lacking Trust Means Increased Dependence on Leading Indicators
When investors lack trust that a product team they have financed will accomplish its ambitious long-term goals, they tend to overemphasize growth in leading indicators. Due to its straightforwardness, usage stands as the quintessential leading indicator. Alterations to a product swiftly manifest in the initial data points, such as changes in user progression within an engagement funnel and in aggregate user interaction. The pervasiveness of usage metrics is such that they're often tracked daily, with week-over-week growth figures serving as a testament to their prompt availability.
In contrast, lagging indicators emerge over a more extended period. These metrics are discerned only after user interaction, providing insights into the subsequent effects on individuals or their surroundings. For instance, a budgeting app may immediately indicate the number of new budgets created, but it takes time to evaluate whether users are adhering to these budgets or whether specific interventions increase budget maintenance likelihood.
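The budgeting-app example can be sketched in a few lines: the leading indicator is countable the moment a budget is created, while the lagging indicator exists only after months of subsequent behavior. The tables, columns, and values below are hypothetical stand-ins.

```python
# Leading vs. lagging indicators for a hypothetical budgeting app.
import pandas as pd

budgets = pd.DataFrame({
    "budget_id":  [1, 2, 3],
    "created_at": pd.to_datetime(["2024-03-01", "2024-03-02", "2024-03-03"]),
})

# Observed only later: monthly spending against each budget's limit.
adherence = pd.DataFrame({
    "budget_id": [1, 1, 2, 2, 3, 3],
    "month":     ["2024-04", "2024-05"] * 3,
    "spent":     [450, 480, 700, 390, 200, 210],
    "limit":     [500, 500, 500, 500, 250, 250],
})

# Leading indicator: budgets created, available immediately and easy
# to roll up by week for a quick growth chart.
budgets_created = budgets["created_at"].dt.isocalendar().week.value_counts()

# Lagging indicator: share of budget-months where users stayed under
# their limit, which takes months of data to become meaningful.
adherence_rate = (adherence["spent"] <= adherence["limit"]).mean()

print(budgets_created.to_dict(), round(adherence_rate, 2))
```

The first number is ready the day a feature ships; the second is the one that actually says whether the product changed anyone's financial behavior.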
Leading indicators, while expedient for detecting immediate changes, tend to scratch only the surface. Product managers quickly recognize that they reveal engagement but fall short of indicating whether that engagement achieved the objective; reaching that level of understanding requires patience. Yet they face a strong force preventing them from prioritizing the maximization of lagging indicators.
Our current economic framework, which favors the rapid ascent of metrics, often neglects the time necessary to determine if a product genuinely impacts a user. Trust in a product’s ability to achieve its grand vision is scarce, with investors and markets alike fixating on upward trends. This trust deficit is often addressed by emphasizing the growth of leading indicators while hoping they will eventually influence the more consequential lagging indicators.
Consequently, companies are incentivized to prioritize the enhancement of these leading indicators, sidelining the actual impact on users. Similarly, public market investors focus on superficial data that promises immediate returns, often at the expense of considering the enduring effects on a company’s customer base. Trickling down, this lack of trust influences leadership to demand their teams make product changes that grow the leading indicators over all else, further disempowering product managers from pursuing features that will truly transform the customer’s life.
Ascendancy of the Advertisement Business Model: Maximize Engagement to Maximize Profit
Business leaders' focus on usage as the end-all metric is reinforced by our third primary factor: the ascendancy of the advertising business model. As discussed with media companies, the intertwined histories of the internet and the advertising industry have shaped our digital landscape. The early days of the app ecosystem, embodied by the launch of the iPhone App Store, followed a traditional software model. Popular apps such as Day One Journal and Scanner Pro were sold as paid products, much like Microsoft Office or Adobe Creative Suite. Advertising was an afterthought, reserved for those who could find a place for ads in the interface.
However, the meteoric rise of platforms like Facebook, offering expansive services at no cost, revolutionized this approach. These platforms positioned themselves as advertising hubs, leveraging user engagement to drive ad revenue. This approach’s success didn’t go unnoticed. In a short span, it became the preferred business model for app developers. The proposition was enticing: why convince users of an app’s monetary value when gaining their time and attention sufficed? Soon, two of the most popular mobile apps were Evernote and Shazam, offered for free due to the backing of advertisers, who were happy to pay to have their content embedded in these products. Popular apps—emboldened by advertiser support—flourished on the basis that maximizing usage and retention would yield increased investment and profit. Building companies around the pursuit of engagement for ad revenue meant usage metrics were organically prioritized, and this belief spread without much question about the value being delivered to actual users.
Championed by industry titans like Facebook and Google, this trend grew with the advent of targeted advertising campaigns, promising enhanced ROI. The underlying premise was simple: increased user engagement equated to richer data profiles, translating to higher ad revenues. This model, dominant through the late 2010s, began showing cracks amid rising data-privacy concerns and questions about the actual value derived from incessant advertising. Yet its influence was profound, shaping the ethos of a generation of tech executives who assume product success can be evaluated using only usage and retention metrics.
The elevation of usage as the primary success metric stems from the confluence of its inherent simplicity, the trust deficit that rewards leading indicators, and the advertising model's dominance. This focus was further amplified by the tech industry's growth-centric mindset, often fueled by venture capital. The funding structure these investors chose for many startups during the zero-interest-rate era treated the pursuit of growth as the end-all metric to maximize. It meant choosing a North Star Metric that would showcase a graph going up and to the right, without much concern for what was happening to the users within that graph. Users might love the product or hate it; it might work for them, or it might decrease the likelihood that they accomplish their desired outcome. In this paradigm, usage became the primary metric that investors cared about, even though it is indicative of engagement yet offers little insight into genuine impact.