Erik on product management
and such
“Brand attitude is a consistently strong influencer on other UX metrics. Understanding what people think about an organization that creates an interface (e.g. Facebook, Amazon, Walmart) can explain a lot of the variation in other attitudinal metrics (like NPS, SUPR-Q, SUS, and satisfaction) and words people associate with the brand. We have consistently seen brand attitude as a key driver of UX metrics in our industry reports.”
- MeasuringU
If you work at a company that has been growing globally, launching your product in a couple of new markets per year, you may eventually find yourself wondering why the same product performs so differently across markets.
What’s going on with Market B? You’ve run the same localization as for Markets C and D. You’ve added the top payment options. The auth options. None of these factors seems to explain it on its own: not only is the conversion rate lower, but people also spend less time in your product and view fewer pages. They also tell you they’re unhappy. So how do you figure out what causes this gap in product metrics?
You can dig into more granular data and run experiments to validate numerous lower-level hypotheses, but that won’t help, because the cause is not inside the product. It’s outside the product: the conditions for lower user engagement are created before users even download your app.
Product management culture tends to pride itself on rational, experiment-driven, customer-centric design. We focus on usability, clarity, performance, and emotional design cues within the product itself. But there’s an unspoken assumption baked into this: that the user’s experience starts at first touch, when they download an app, land on a page, or open a feature.
In reality, the user’s experience with your product begins far earlier. It begins the moment they recognize your brand name, form an impression of what your company represents, and decide whether they trust you enough to even bother interacting. That pre-experience context colors every subsequent click, form field, and feature you put in front of them.
Traditional product frameworks prioritize quantifiable in-product metrics and tangible experiments. We’re trained to optimize funnels, reduce load times, and minimize friction. All critical, but incomplete.
Brand perception feels squishier and has historically “belonged” to marketing or comms. But this separation is outdated. In a digital-first world, product experience is brand experience. The way your onboarding feels, the transparency of your pricing, the tone of your error messages, not to mention the ads you run and the earned media your PR team generates: these directly shape brand attitude in real time.
- Brand awareness: The degree to which users recognize or recall your brand. Are you top-of-mind in your category? Do users remember your name in a relevant moment of need?
- Brand attitude: The overall sentiment or evaluation people hold about your brand — whether they see it as modern, trustworthy, confusing, exploitative, innovative, or indispensable.
- Brand trust: The confidence users place in your brand’s ability to deliver reliably and ethically over time. It’s the currency that lets customers forgive small mistakes and stick with you through imperfect releases.
While product managers may not own these brand dimensions directly, they are powerful determinants of UX outcomes, and ignoring them is a strategic mistake. Even if we don’t own brand strategy, we should integrate brand awareness and attitude metrics into our product thinking in several ways.
- Monitor brand health metrics: Track brand awareness, trust, and sentiment in parallel with UX metrics. Incorporate brand lift studies, social sentiment analysis, and customer perception surveys alongside retention and conversion dashboards.
- Partner with marketing early: Don’t wait until launch week. Bring brand and comms teams into product planning discussions, especially for high-profile features or pricing changes.
- Design for brand moments: Recognize moments in your product that are disproportionately brand-shaping: onboarding, pricing pages, errors, cancellations. Optimize these not just for conversion, but for brand reinforcement.
- Use brand context to interpret product metrics: When a feature underperforms in one market but thrives in another, don’t just blame UX. Investigate brand awareness and attitude differences as part of your diagnosis.
A reasonable critique of this thesis is: “Interesting theory, but where’s the evidence?” While industry benchmarking studies hint at the relationship between brand perception and UX metrics, product teams should validate this within their own reality. Here’s a clear, testable method for doing exactly that. Let’s start with the hypothesis.
The hypothesis
As brand awareness, familiarity, and trust increase throughout a customer’s lifecycle, UX metrics (conversion rates, engagement rates, satisfaction, and tolerance for friction) improve even when interacting with the same interfaces.
In other words: the same product experience feels better and performs better when users arrive with a stronger brand relationship.
Here’s what it would look like:
User journey: From brand encounter to first in-app experience
Stage 1: Pre-app experience (Brand impression phase)
Touchpoint | User Thought/Feeling | Impact on Brand Perception |
---|---|---|
Ad on social media | “Looks trendy… but is it reliable?” | Curiosity or skepticism |
Recommendation from a friend | “They loved it — must be good.” | Increased trust |
Review site or influencer video | “Mixed reviews — might be buggy.” | Hesitation, expectation of issues |
News headline or brand story | “They just raised $50M — serious company.” | Credibility boost or concerns |
Encountered customer service horror story on Twitter | “Yikes. Will they ghost me too?” | Wariness about support reliability |
→ Cumulative Effect: User forms a mental brand attitude: trusted, risky, cool, cheap, premium, frustrating, generous, etc.
Stage 2: First in-app interaction
UX Moment | How Brand Attitude Shapes It | Example |
---|---|---|
App download and login screen | Expectation of smoothness or friction based on prior impression | “Hope it’s not glitchy like that review said.” |
Onboarding flow | Patience level influenced by brand trust | Forgiving of clunky tutorial if brand trust is high |
First key task (e.g. checkout, booking) | Anxiety, optimism, or skepticism depending on earlier signals | “Will it actually work as promised?” |
→ Effect: Positive brand perception softens minor UX flaws. Negative perception amplifies every small annoyance.
Stage 3: Post-interaction reflection
Outcome | Influenced by Brand Perception? | Example |
---|---|---|
User satisfaction | Yes — experience judged against expectations set by the brand | “Surprisingly good for a no-name app.” / “Not what I expected from a premium brand.” |
Likelihood to recommend | Yes — social proof and brand affinity matter | “I’ll tell my friends — they’re going to love this.” / “Not worth mentioning, even if it worked.” |
Willingness to retry after issues | Higher if brand perception is positive | “Everyone says it’s solid — maybe I just had bad luck.” / “Figures. Won’t bother again.” |
On the other hand, what if brand awareness isn’t there yet before first touch? Here is how a non-aware user might experience the product, especially when they run into flawed interactions or visuals.
Alternative user journey: Absence of positive brand experience pre-app use
Stage 1: Pre-app experience (Negative or absent brand awareness)
Touchpoint | User Thought/Feeling | Impact on Brand Perception |
---|---|---|
Never heard of the brand | “What is this? Who are these people?” | Zero trust, suspicion |
Ad with vague or hype-y messaging | “Feels scammy or low-effort.” | Distrust, cynicism |
No personal recommendations | “Nobody I know uses this — risky?” | Hesitation, no social proof |
Mixed or bad online reviews | “People seem annoyed by this app.” | Low confidence |
Stale or negative content | “Oh — aren’t they the ones who botched that feature?” | Negative bias, suspicion |
→ Cumulative Effect: The user approaches the product defensively and hyper-critically, expecting flaws.
Stage 2: First in-app interaction
UX Moment | How Weak Brand Attitude Shapes It | Example |
---|---|---|
App download and login screen | High skepticism: every small delay or bug reinforces doubts | “Ugh, took forever to download — bad sign.” |
Onboarding flow | Lower tolerance for friction or complexity | “Why do they need my email now? Annoying.” |
First key task (e.g. checkout, booking) | Impatient, quick to abandon if any hiccup occurs | “See — glitchy. Not worth it.” |
→ Effect: Negative or absent brand perception magnifies minor UX issues into trust-breakers.
Stage 3: Post-interaction reflection
Outcome | Influenced by Negative or Absent Brand Perception? | Example |
---|---|---|
User satisfaction | Lower, even if objective UX isn’t terrible | “Nothing special. Probably deleting it.” |
Likelihood to recommend | Very low without social proof or good first impression | “Don’t bother — it’s sketchy.” |
Willingness to retry after issues | Minimal — one frustration triggers app abandonment | “Knew it’d be bad. Uninstall.” |
We’ll continue with different methods for validating the hypothesis that brand awareness influences in-product user behavior.
Validation method 1: Lifecycle-based UX metrics segmentation
Step 1: Segment users by brand familiarity/relationship stage
Create clean, mutually exclusive cohorts based on both product engagement and implied brand familiarity. For most consumer or SaaS products, these look like:
- Non-users / first-time visitors (no prior sessions, no sign-up)
- New users (signed up, but no transactions)
- New customers (completed one transaction)
- Returning customers (2–4 transactions)
- Loyal customers / power users (5+ transactions or 90+ days active)
Optionally enrich these segments with self-reported brand perception data via lightweight surveys (brand trust scores, brand NPS, or a 5-point brand favorability scale).
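As a sketch, the cohort assignment can be an order-dependent classifier: check the most engaged bucket first so each user lands in exactly one segment. The field names and thresholds below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

# Hypothetical user record; field names are illustrative,
# not taken from any specific analytics platform.
@dataclass
class User:
    sessions: int
    signed_up: bool
    transactions: int
    days_active: int

def brand_familiarity_segment(u: User) -> str:
    """Assign a user to one mutually exclusive lifecycle cohort.

    Checks run from most to least engaged, so the cohorts stay
    clean and mutually exclusive.
    """
    if u.transactions >= 5 or u.days_active >= 90:
        return "loyal"
    if u.transactions >= 2:
        return "returning"
    if u.transactions == 1:
        return "new_customer"
    if u.signed_up:
        return "new_user"
    return "first_time_visitor"
```

A user with six transactions is classified as loyal even though they also satisfy the returning-customer condition, which is exactly the mutual exclusivity the segmentation needs.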
Step 2: Track and compare key UX metrics across segments
For each user group, track:
- Interface interaction metrics:
  - Click-through rates (CTR) on CTAs
  - Conversion rates (CR) on forms or purchases
  - Activation rate (first meaningful action post-signup)
  - Onboarding completion rate
  - Feature engagement (DAU/WAU, feature usage per session)
- Session quality metrics:
  - Session duration
  - Pages/screens per session
  - Bounce rate / abandonment rate
  - Error tolerance (number of minor friction points before exit)
- Attitudinal metrics (if possible):
  - In-app satisfaction scores (SUPR-Q, SUS, CSAT)
  - Brand perception scores (e.g. “How trustworthy is this brand?” on a 5-point scale)
Step 3: Analyze for progression patterns
Compare how these metrics shift across the user relationship stages.
If the hypothesis holds, you should observe:
- Higher CTRs and CRs among users with stronger brand familiarity (e.g. returning customers vs. first-time visitors)
- Higher engagement and feature adoption rates for loyal users vs. new ones
- Higher tolerance for minor UX flaws among long-time users (fewer bounces on slightly slow or clunky experiences)
- Better self-reported satisfaction despite identical or similar interface flows
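The progression check itself can be sketched in a few lines: order the segments by brand familiarity and verify that a metric rises strictly along that order. The segment names and numbers are illustrative, not real data:

```python
# Segments ordered from weakest to strongest brand relationship.
SEGMENT_ORDER = ["first_time_visitor", "new_user", "returning", "loyal"]

def is_monotonic_increase(metric_by_segment: dict) -> bool:
    """True if the metric strictly rises along the brand-familiarity order."""
    values = [metric_by_segment[s] for s in SEGMENT_ORDER]
    return all(a < b for a, b in zip(values, values[1:]))

# Illustrative CTR figures for each cohort (fractions, not percentages).
ctr = {
    "first_time_visitor": 0.032,
    "new_user": 0.055,
    "returning": 0.089,
    "loyal": 0.112,
}
```

A strict monotonic rise is a deliberately conservative bar; in practice you would also want confidence intervals per segment before calling the pattern real.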
Example hypothetical findings
Metric | First-time Visitor | New User | Returning Customer | Loyal Customer |
---|---|---|---|---|
CTA Click-Through Rate | 3.2% | 5.5% | 8.9% | 11.2% |
Form Conversion Rate | 9.1% | 14.4% | 20.3% | 24.7% |
Average Session Length | 1.5 min | 2.3 min | 3.8 min | 5.2 min |
Activation Rate | 48% | 63% | 78% | 84% |
In-app Satisfaction | 3.6/5 | 4.1/5 | 4.4/5 | 4.6/5 |
Even when interacting with the same product flows, more familiar and trusting users engage more deeply, convert better, and rate the experience more positively.
Validation method 2: Market-based UX metrics segmentation
This is especially relevant when launching your product in new markets. Users in markets where you have low brand awareness will show lower engagement rates across the entire user journey. Validating this hypothesis builds the case for investing in brand marketing in new markets, alongside localizing the product. The user journey starts before users land on the home screen: it starts outside of the product.
If your product is already live in multiple markets, you can validate the hypothesis that positive brand awareness and attitude are a multiplier on product metrics. Categorize markets as having received high, medium, low, or no investment in brand awareness (time in market can also count as investment), segment users by these market categories, and you will probably end up with something like this:
Market Investment | Conversion Rate (%) | Session Time (min) | Pages / Session | Feature Usage (%) | CSAT (1-5) | Bounce Rate (%) |
---|---|---|---|---|---|---|
High Investment | 18.5 | 6.1 | 8.2 | 58 | 4.7 | 11 |
Mid Investment | 12.7 | 4.3 | 6.0 | 40 | 4.1 | 20 |
Low Investment | 8.3 | 3.1 | 4.5 | 27 | 3.6 | 33 |
No Investment | 3.5 | 1.7 | 2.2 | 15 | 2.8 | 54 |
This will show that as users move along the familiarity curve:
- Brand awareness reduces uncertainty → higher initial motivation
- Brand trust reduces perceived risk → lower cognitive load and hesitancy
- Positive brand attitude sets expectations → UX experiences are interpreted more favorably
This mirrors well-known confirmation bias and familiarity heuristics in cognitive psychology: we notice and appreciate positives more in trusted, familiar contexts.
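As a quick sanity check on market-level data like the table above, you can rank markets by investment tier and test whether UX metrics rank the same way. A minimal Spearman rank-correlation sketch (pure Python, assuming no tied values; the numbers echo the illustrative table, not real data):

```python
def spearman_no_ties(xs, ys):
    """Spearman rank correlation for two lists without tied values.

    1.0 means the two variables rank identically; -1.0 means
    perfectly opposite ranking.
    """
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Investment tier (3=high, 2=mid, 1=low, 0=none) vs. conversion rate
# from the illustrative market table.
investment = [3, 2, 1, 0]
conversion = [18.5, 12.7, 8.3, 3.5]
```

With only four markets, a perfect rank correlation is suggestive rather than conclusive; treat it as a prompt to run the segment-level analysis, not as proof on its own.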
Validation method 3: Look for data externally
Airbnb stands out as one of the few (former) startups that focuses more on brand marketing than on performance marketing.
You can also compare products with weak and strong brands on Amazon; the latter convert traffic significantly better.
This is the multiplier effect of brand awareness.
Product teams can and should prove this hypothesis within their own analytics stacks. By segmenting UX metrics across stages of brand relationship maturity, you can quantify how much brand awareness and trust influence core product outcomes.
If validated, this insight strengthens the case for brand awareness as a multiplier on your product’s performance and for integrating brand-building efforts into product strategy and interpreting UX data within its brand context.