
NPS vs CSAT vs CES: The Definitive Guide to Customer Experience Metrics in 2026

Master the three essential CX metrics—NPS, CSAT, and CES. Learn when to use each, industry benchmarks, calculation methods, and how top companies combine them to predict loyalty and drive revenue growth.

Marcus Chen, Director of CX Strategy

Every CX leader eventually faces the same question: Should we measure NPS, CSAT, CES—or all three?

The answer isn’t as simple as picking the “best” metric. Each measures something fundamentally different about your customer relationships. Get the combination wrong, and you’ll optimize for the wrong outcomes. Get it right, and you’ll predict churn before it happens, identify friction before customers complain, and drive growth with precision.

According to Qualtrics 2025 research, consumers gave satisfaction ratings of 4-5 stars for 76% of their recent experiences—yet many companies still can’t predict which customers will leave. The gap between measuring satisfaction and predicting behavior is where understanding these three metrics becomes essential.

This guide breaks down everything you need to know: how each metric works, when to deploy them, industry benchmarks, and the strategic framework that top-performing companies use to combine them effectively.

The Three Pillars of CX Measurement

Before diving into comparisons, let’s establish what each metric actually measures—because they’re often confused for each other.

| Metric | NPS | CSAT | CES |
|---|---|---|---|
| Full Name | Net Promoter Score | Customer Satisfaction Score | Customer Effort Score |
| Measures | Loyalty & advocacy | Interaction satisfaction | Ease of experience |
| Core Question | “How likely to recommend?” | “How satisfied were you?” | “How easy was this?” |
| Scale | 0-10 | 1-5 or 1-7 | 1-7 (CES 2.0) |
| Timeframe | Long-term relationship | Specific moment | Task completion |

Here’s the critical insight most teams miss: these metrics aren’t competing alternatives—they’re complementary layers of understanding. NPS tells you whether customers will stay loyal. CSAT tells you how they felt about a specific moment. CES tells you why they might leave (friction creates disloyalty faster than dissatisfaction).


Net Promoter Score (NPS): The Loyalty Predictor

What NPS Measures

Developed by Bain & Company in 2003, NPS measures customer loyalty through a single question:

“How likely are you to recommend [Company] to a friend or colleague?”

Customers respond on a 0-10 scale, then get categorized:

| Score Range | Category | Impact on NPS |
|---|---|---|
| 0-6 | Detractors | Subtracted from score |
| 7-8 | Passives | Not counted |
| 9-10 | Promoters | Added to score |

How to Calculate NPS

The formula is straightforward:

NPS = % Promoters − % Detractors

For example, if 100 customers respond:

  • 50 score 9-10 (Promoters) = 50%
  • 30 score 7-8 (Passives) = Not counted
  • 20 score 0-6 (Detractors) = 20%

NPS = 50% − 20% = +30

NPS ranges from -100 (everyone is a detractor) to +100 (everyone is a promoter).
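The same arithmetic is easy to script. Here is a minimal Python sketch (function and variable names are illustrative, not from any survey platform) that reproduces the 100-response example above:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings: % promoters minus % detractors."""
    if not scores:
        raise ValueError("NPS requires at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; passives (7-8) are ignored
    return round(100 * (promoters - detractors) / len(scores))

# The worked example: 50 promoters, 30 passives, 20 detractors
responses = [9] * 50 + [7] * 30 + [3] * 20
print(nps(responses))  # 30
```

By construction the result is bounded by -100 (every respondent a detractor) and +100 (every respondent a promoter).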

What Makes a Good NPS?

2025 Global NPS Benchmarks:

  • Global Median: 42 (across all industries)
  • B2C Average: 49
  • B2B Average: 38

According to NPS Prism by Bain & Company, here’s how NPS breaks down by performance tier:

| NPS Range | Interpretation | Typical Scenario |
|---|---|---|
| 70+ | Excellent | Industry leaders with devoted customers |
| 50-69 | Great | Strong competitive advantage |
| 30-49 | Good | Room for improvement but healthy |
| 0-29 | Needs work | Significant loyalty challenges |
| Below 0 | Critical | More detractors than promoters |

NPS Benchmarks by Industry (2025)

| Industry | Median NPS | Top Performer |
|---|---|---|
| Technology & Services | 66 | Apple, Adobe |
| Retail & E-commerce | 55 | Costco, Amazon |
| IT Services | 55 | |
| Healthcare | 50+ | |
| Hotels & Hospitality | 44 | Ritz-Carlton |
| Banking & Credit Unions | 41 | USAA |
| Automotive | 41 | Tesla, Toyota |
| Insurance | 35 | |
| Telecommunications | 24 | T-Mobile |

Sources: Survicate, Retently, Delighted

The Business Impact of NPS

Research from CustomerGauge reveals the revenue connection:

  • 10+ point NPS increase → 3.2% increase in upsell revenue
  • 7-point NPS increase → 1% revenue growth (London School of Economics)
  • NPS explains 20-60% of variation in organic growth rates among competitors (Bain & Company)

However, there’s an important caveat: Gainsight research found that NPS doesn’t always correlate to churn or renewal—only companies in the upper quartile of NPS show 5-10% higher retention. NPS is a directional indicator, not a prediction engine.

When to Use NPS

Ideal for:

  • Quarterly or biannual brand health tracking
  • Competitive benchmarking
  • Board-level reporting
  • Strategic decision-making
  • Multi-respondent B2B feedback (entire buying committee)

Not ideal for:

  • Immediate transactional feedback
  • Identifying specific friction points
  • Predicting short-term churn
  • Process optimization

Customer Satisfaction Score (CSAT): The Moment Meter

What CSAT Measures

CSAT captures how satisfied customers feel immediately after a specific interaction—a purchase, support call, onboarding session, or product delivery.

The standard question:

“How satisfied were you with [specific experience]?”

Scales vary, but 1-5 and 1-7 are most common. The key is specificity: CSAT works best when tied to a defined moment, not general sentiment.

How to Calculate CSAT

Two primary methods exist:

Method 1: Top-Box Percentage (Most Common)

CSAT % = (Satisfied + Very Satisfied responses) ÷ Total responses × 100

Using a 5-point scale where 4 = Satisfied and 5 = Very Satisfied:

If 100 customers respond: 45 score “5”, 30 score “4”, 15 score “3”, 7 score “2”, 3 score “1”

CSAT = (45 + 30) ÷ 100 × 100 = 75%

Method 2: Average Score

Simply average all numeric responses. With the same data: (45×5 + 30×4 + 15×3 + 7×2 + 3×1) ÷ 100 = 4.07/5
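Both methods take only a few lines of Python. This sketch (names are illustrative) uses the example distribution above:

```python
def csat_top_box(scores, threshold=4):
    """Top-box CSAT %: share of responses at or above the 'Satisfied' level (4 on a 1-5 scale)."""
    return 100 * sum(1 for s in scores if s >= threshold) / len(scores)

def csat_average(scores):
    """Mean CSAT on the raw response scale."""
    return sum(scores) / len(scores)

# 45 fives, 30 fours, 15 threes, 7 twos, 3 ones
responses = [5] * 45 + [4] * 30 + [3] * 15 + [2] * 7 + [1] * 3
print(csat_top_box(responses))            # 75.0
print(round(csat_average(responses), 2))  # 4.07
```

The two methods answer different questions: top-box reports the share of happy customers, while the average is sensitive to how unhappy the unhappy ones were.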

What Makes a Good CSAT?

| CSAT Score | Rating |
|---|---|
| Below 60% | Poor |
| 60-74% | Fair |
| 75-84% | Good |
| 85%+ | Excellent |

CSAT Benchmarks by Industry (2025)

| Industry | Average CSAT | Top Performer Threshold |
|---|---|---|
| Consulting | 84% | 90%+ |
| Hotels & Hospitality | 82% | 88%+ |
| E-commerce/Retail | 82% | 88%+ |
| Banking & Financial Services | 79% | 85%+ |
| Grocery Retail | 78% | 84%+ |
| B2B Software/SaaS | 77% | 83%+ |
| Insurance | 70% | 78%+ |
| Utilities | 65% | 75%+ |
| Telecommunications | 58% | 70%+ |

Sources: Fullview, 1Flow, QuestionPro

When to Use CSAT

Ideal for:

  • Post-purchase surveys (within 24 hours)
  • Support ticket resolution feedback
  • Onboarding experience evaluation
  • Product delivery confirmation
  • Feature usage satisfaction
  • Any specific touchpoint measurement

Not ideal for:

  • Overall brand loyalty assessment
  • Long-term relationship health
  • Predicting advocacy behavior
  • Cross-company benchmarking (scales vary)

Customer Effort Score (CES): The Friction Finder

What CES Measures

CES emerged from Gartner research showing that reducing customer effort is more impactful than exceeding expectations. It measures how easy it was for customers to accomplish a specific task.

The modern CES 2.0 question:

“[Company] made it easy for me to handle my issue.”

Customers respond on a 1-7 scale from “Strongly Disagree” to “Strongly Agree.”

The Evolution: CES 1.0 to CES 2.0

| Aspect | CES 1.0 (Original) | CES 2.0 (Current) |
|---|---|---|
| Question | “How much effort did you personally have to put forth?” | “[Company] made it easy for me to handle my issue.” |
| Scale | 1-5 | 1-7 |
| Focus | Effort-focused wording | Ease-focused wording |
| Interpretation | Lower scores = better | Higher scores = better |

How to Calculate CES

CES 2.0 Calculation:

CES % = (Responses scoring 5, 6, or 7) ÷ Total responses × 100

If 100 customers respond:

  • 35 score “7” (Strongly Agree)
  • 25 score “6” (Agree)
  • 20 score “5” (Somewhat Agree)
  • 12 score “4” (Neutral)
  • 8 score 1-3 (Disagree range)

CES = (35 + 25 + 20) ÷ 100 × 100 = 80%

Alternatively, calculate the average score (between roughly 5.5 and 5.7 out of 7 in this example, depending on how the eight 1-3 responses are distributed).
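In code, the CES 2.0 top-box calculation mirrors CSAT's. In this Python sketch the 3/3/2 split of the eight “disagree range” responses is an assumption, since the example only gives their total:

```python
def ces_top_box(scores):
    """CES 2.0 %: share of 1-7 responses in the agree range (5, 6, or 7)."""
    return 100 * sum(1 for s in scores if s >= 5) / len(scores)

# Example distribution; the 1-3 breakdown (three 3s, three 2s, two 1s) is assumed
responses = [7] * 35 + [6] * 25 + [5] * 20 + [4] * 12 + [3] * 3 + [2] * 3 + [1] * 2
print(ces_top_box(responses))           # 80.0
print(sum(responses) / len(responses))  # 5.6 under this assumed split
```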

Why CES Matters: The Research

Here’s why CES deserves more attention than it typically receives:

Gartner Research Findings:

| Statistic | Finding |
|---|---|
| 40% | More accurate at predicting customer loyalty than satisfaction metrics |
| 2x | More predictive of future behavior than NPS alone |
| 96% | Disloyalty rate among customers with high-effort experiences |

The Effort-Loyalty Connection

The data from CEB (now Gartner) is striking:

| Effort Level | Repeat Purchase Likelihood | Disloyalty Rate |
|---|---|---|
| Low effort | 94% | 9% |
| High effort | 4% | 96% |

Operational impact of reducing effort:

  • 40% fewer repeat support calls
  • 50% fewer escalations
  • 54% less channel switching
  • 65-point higher NPS for low-effort vs. high-effort companies

CES Benchmarks

Unlike NPS and CSAT, CES lacks standardized industry benchmarks. This is partly because CES is relatively new and partly because “effort” is highly context-dependent.

| CES Score | Interpretation (7-point scale) |
|---|---|
| 6.0+ | Excellent—effortless experience |
| 5.0-5.9 | Good—minimal friction |
| 4.0-4.9 | Adequate—some room for improvement |
| Below 4.0 | Problematic—significant friction exists |

Sources: SurveySensum, Sobot

The best approach: establish your internal baseline and track improvement over time rather than comparing against external benchmarks.

When to Use CES

Ideal for:

  • Post-support interaction surveys
  • Onboarding completion feedback
  • Self-service experience evaluation
  • Process optimization projects
  • Digital platform usability testing
  • Issue resolution assessment

Not ideal for:

  • Brand perception measurement
  • Competitive benchmarking
  • Overall relationship health
  • Long-term loyalty prediction (use NPS)

Head-to-Head Comparison: NPS vs CSAT vs CES

Now let’s compare these metrics across the dimensions that matter for implementation.

Measurement Focus

| Dimension | NPS | CSAT | CES |
|---|---|---|---|
| Core Question | Likelihood to recommend | Satisfaction level | Ease of experience |
| What It Reveals | Future advocacy | Current sentiment | Process friction |
| Time Horizon | Relationship-level | Moment-specific | Task-specific |
| Best Timing | Quarterly/biannual | Immediately after | After task completion |
| Benchmarking | Strong external | Industry-specific | Internal only |
| Loyalty Prediction | Moderate | Weak | Strong (40% better) |

Strengths and Limitations Summary

NPS Strengths & Limitations

| Strengths | Limitations |
|---|---|
| Simple, universally understood | Doesn’t explain “why” |
| Easy competitive benchmarking | Susceptible to score gaming |
| Correlates with revenue growth | Ignores passive segment (7-8) |
| High-level strategic indicator | Weak churn correlation alone |

CSAT Strengths & Limitations

| Strengths | Limitations |
|---|---|
| Immediate, actionable feedback | Moment-specific only |
| High completion rates | Response bias to extremes |
| Pinpoints specific issues | Doesn’t predict loyalty |
| Easy to deploy at scale | Scale variations complicate benchmarking |

CES Strengths & Limitations

| Strengths | Limitations |
|---|---|
| 40% better loyalty prediction | Task-specific only |
| Directly actionable (reduce friction) | No external benchmarks |
| Clear ROI (fewer escalations) | External factors can skew results |
| Identifies process problems | Doesn’t measure satisfaction |

The Strategic Framework: Using All Three Together

The most effective CX programs don’t choose between metrics—they layer them strategically.

The Customer Journey Measurement Map

| Journey Stage | Recommended Metrics | Key Questions |
|---|---|---|
| Awareness & Consideration | None yet | Customer hasn’t engaged |
| Purchase / Signup | CSAT, CES | Was checkout easy? Are they satisfied? |
| Onboarding | CES, CSAT | Was setup effortless? Training quality? |
| Support Interactions | CES, CSAT | Was issue resolved easily? Agent helpful? |
| Ongoing Usage (3+ months) | NPS | Overall relationship health & loyalty |
| Renewal / Expansion | NPS, CSAT | Will they renew? Upgrade? Refer others? |

The Multi-Metric Measurement Stack

According to Retently research, 49% of NPS users measure at least one additional metric. Here’s how leading companies structure their approach:

| Layer | Metric | Frequency | Purpose |
|---|---|---|---|
| Strategic | NPS | Quarterly | Overall loyalty & growth prediction |
| Tactical | CSAT | Per interaction | Touchpoint-specific satisfaction |
| Operational | CES | Per task | Process friction identification |

Real-World Multi-Metric Examples

E-commerce Platform:

  • CSAT after purchase → Product quality satisfaction (high)
  • CES for checkout → Revealed friction in payment process
  • Result: Streamlined checkout → NPS improved 12 points, repeat visits up 23%

SaaS Company:

  • CSAT after onboarding → Training quality feedback
  • CES for account setup → Ease of first use
  • NPS at 90-day mark → Overall platform loyalty
  • Result: Targeted improvements at each stage reduced time-to-value by 40%

B2B Service Provider:

  • CES after every support interaction → Identified knowledge base gaps
  • CSAT for project milestones → Client satisfaction tracking
  • NPS quarterly → Account health monitoring
  • Result: 20% reduction in escalations, 15% improvement in renewal rates

Which Metric Should You Choose?

Use this decision framework to identify the right metric for your specific needs:

| Your Goal | Recommended Metric | Implementation |
|---|---|---|
| Track overall brand loyalty and predict growth | NPS | Quarterly with follow-up questions for context |
| Understand satisfaction with specific touchpoints | CSAT | Immediately after each key interaction |
| Identify and eliminate friction in processes | CES | After task completion, especially support |
| Build a comprehensive CX program | All three | CES + CSAT at touchpoints, NPS quarterly |

Industry-Specific Recommendations

| Industry | Primary Metric | Secondary Metric | Reasoning |
|---|---|---|---|
| SaaS | CES + NPS | CSAT for support | Effort in self-service critical; NPS predicts renewals |
| E-commerce | CSAT | CES for checkout | Transaction satisfaction immediate; ease drives conversion |
| Professional Services | NPS | CSAT per project | Relationships matter; project satisfaction informs renewals |
| Healthcare | CSAT | CES for administrative | Patient satisfaction regulated; ease of scheduling impacts loyalty |
| Financial Services | NPS + CES | CSAT for transactions | Trust/loyalty paramount; friction in banking highly punished |
| Telecom | CES | NPS quarterly | Industry known for friction; reducing effort is competitive edge |

Implementation Best Practices

Survey Timing Matters

| Metric | When to Send | Notes |
|---|---|---|
| NPS | After 3+ months of consistent usage | Quarterly or biannually. Adobe targets “regular and champion users” only. |
| CSAT | Within hours of the interaction | Send immediately while the experience is fresh. Response rates drop 50%+ after 24 hours. |
| CES | Immediately after task completion | Right after checkout, support resolution, or feature use. Effort perception fades quickly. |

Survey Frequency Guidelines

| Metric | Minimum Frequency | Maximum Frequency | Survey Fatigue Risk |
|---|---|---|---|
| NPS | Annually | Quarterly | Low (if spaced properly) |
| CSAT | Per key interaction | 1x per week per customer | Medium (limit touchpoints surveyed) |
| CES | Per task type | 1x per task type per month | Low (highly contextual) |

Follow-Up Questions That Matter

Don’t rely on scores alone. Add context-gathering questions:

After NPS:

  • “What’s the primary reason for your score?”
  • “What would we need to do to earn a higher score?”

After CSAT:

  • “What did we do well?” (for high scores)
  • “What could we have done better?” (for low scores)

After CES:

  • “What made this difficult?” (for low scores, which indicate high effort)
  • “What step took the most time?” (for process improvement)

Common Mistakes to Avoid

Survey Design Mistakes

| Mistake | Impact | Solution |
|---|---|---|
| Benchmarking across different companies without context | Misleading conclusions | Use industry-specific and internal benchmarks |
| Sending CSAT surveys too late | Low response rates, stale memories | Trigger within hours of interaction |
| Expanding surveys beyond core questions | Abandoned surveys, survey fatigue | Keep to 2-3 questions maximum |
| Inconsistent question phrasing | Incomparable data over time | Standardize all question text |

Analysis Mistakes

| Mistake | Impact | Solution |
|---|---|---|
| Not using cohort analysis | Miss patterns in customer segments | Compare new vs. long-term, by persona |
| Relying on single metrics | Incomplete picture | Layer NPS, CSAT, CES together |
| Ignoring the “passive” NPS segment | Miss dissatisfaction signals | Analyze 7-8 scorers separately |
| No follow-up on scores | Lack of actionable insights | Always include open-ended questions |

Implementation Mistakes

| Mistake | Impact | Solution |
|---|---|---|
| Incentivizing staff on NPS without addressing root causes | Score gaming, no real improvement | Tie incentives to action, not just scores |
| Surveying every single interaction | Survey fatigue, declining response rates | Implement sampling and throttling |
| Not closing the loop | Customers feel ignored | Respond to detractors within 24-48 hours |
| Treating CES as overall satisfaction | Misleading conclusions | Use CES for process, NPS/CSAT for sentiment |

The Shift Toward Predictive Analytics

According to Gartner 2025 research, CX measurement is evolving rapidly:

  • 41% of large enterprises now use AI to improve CX collaboration
  • 39% are adopting predictive CX tools
  • Companies using predictive models report 20% increase in retention rates
  • Global Predictive Analytics for Customer Insights market: $18.89B in 2024, projected 28.3% CAGR through 2030

NPS’s Changing Role

Despite predictions that NPS would be “abandoned by 75% of companies by 2025” (Gartner 2021), the metric persists—but with declining priority:

  • NPS fell from 2nd to 8th in CX metric priority (2024 data)
  • Only 23% of U.S. enterprise CX leaders actively use NPS (TELUS Digital/Statista 2025)
  • Reason: Shift toward more predictive, multi-metric approaches

CES Gaining Prominence

CES’s research-backed predictive power is driving increased adoption:

  • 1.8x more predictive of loyalty than CSAT
  • 2.0x more predictive than NPS
  • CES 2.0 becoming the standard with improved wording and 7-point scale
  • Growing recognition that effort reduction is a more reliable loyalty driver than satisfaction

The AI Integration Wave

Modern CX platforms are incorporating:

  • Automated sentiment analysis of open-ended responses
  • Theme detection across thousands of verbatim comments
  • Predictive churn scoring based on metric patterns
  • Real-time alerting for at-risk customers
  • Closed-loop automation for detractor follow-up

Frequently Asked Questions

Which metric is most important: NPS, CSAT, or CES?

There’s no single “most important” metric—each measures different aspects of customer experience. CES is 40% more predictive of loyalty according to Gartner research, making it valuable for identifying churn risk. NPS provides the best competitive benchmarking and correlates with revenue growth. CSAT offers the most actionable immediate feedback. The best programs use all three strategically.

What’s a good NPS score in 2026?

The global median NPS is 42. A score above 50 is considered excellent for most industries, while 70+ indicates world-class performance. However, benchmarks vary significantly by industry—telecommunications averages 24, while technology leads at 66. Always compare against your specific industry.

How often should I survey customers?

NPS: Quarterly or biannually for relationship-level tracking. CSAT: After every key interaction, but limit to 1 survey per customer per week to avoid fatigue. CES: Immediately after specific tasks, sampled to avoid over-surveying. Implement throttling rules to ensure no customer receives more than 2-3 surveys monthly across all channels.
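The throttling rule described above can be sketched as a rolling-window counter per customer. This in-memory Python example is illustrative only (the class and method names are invented, and a real system would persist send history):

```python
from datetime import datetime, timedelta

class SurveyThrottle:
    """Cap the number of surveys a customer receives in any rolling 30-day window."""

    def __init__(self, max_per_month=3):
        self.max_per_month = max_per_month
        self.sent = {}  # customer_id -> list of send timestamps

    def can_send(self, customer_id, now=None):
        now = now or datetime.now()
        window_start = now - timedelta(days=30)
        # Keep only sends inside the rolling window
        recent = [t for t in self.sent.get(customer_id, []) if t >= window_start]
        self.sent[customer_id] = recent
        return len(recent) < self.max_per_month

    def record_send(self, customer_id, now=None):
        self.sent.setdefault(customer_id, []).append(now or datetime.now())

throttle = SurveyThrottle(max_per_month=3)
for _ in range(3):
    if throttle.can_send("cust-42"):
        throttle.record_send("cust-42")
print(throttle.can_send("cust-42"))  # False: monthly cap reached
```

A production version would also enforce per-channel rules and the per-metric frequencies from the tables above, but the core gate is the same window check.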

Can I use just one metric if I’m just starting out?

Yes, but choose wisely. If you’re a service business, start with CES to identify and eliminate friction. If you’re focused on growth and advocacy, start with NPS. If you’re optimizing specific touchpoints, start with CSAT. Plan to add additional metrics within 6-12 months as your program matures.

How do NPS, CSAT, and CES correlate with revenue?

NPS: 7-point increase correlates with 1% revenue growth (London School of Economics). 10+ point increase correlates with 3.2% upsell revenue increase. CES: Low-effort experiences drive 94% repeat purchase likelihood vs. 4% for high-effort. CSAT: Correlates with operational efficiency but doesn’t reliably predict revenue or loyalty on its own.

Should I include follow-up questions or just the core metric?

Always include follow-up questions. A score without context is actionable only at the aggregate level. Add one open-ended question (“What’s the primary reason for your score?”) to understand the “why” behind the number. This doubles survey length but dramatically increases actionability.


The Bottom Line

NPS, CSAT, and CES aren’t competing frameworks—they’re complementary tools for understanding different dimensions of customer experience.

  • NPS tells you whether customers will advocate for your brand
  • CSAT tells you how they felt about a specific moment
  • CES tells you where friction will drive them away

The companies seeing the best results use all three strategically: CES and CSAT at touchpoints to identify and fix issues quickly, NPS quarterly to monitor overall loyalty and predict growth.

The key insight from Gartner’s research bears repeating: reducing customer effort is 40% more effective at building loyalty than exceeding satisfaction expectations. In a world where customers expect everything to be easy, effort is the metric that predicts behavior.


Take Your CX Measurement to the Next Level

Ready to move beyond basic surveys? ActionXM combines NPS, CSAT, and CES into a unified platform with AI-powered insights that identify patterns humans miss. Automated closed-loop workflows ensure every detractor gets attention, every friction point gets investigated, and every promoter becomes an advocate.


Have questions about measuring customer experience? Contact us—we help organizations of all sizes build world-class CX programs.



Ready to Transform Your Experience Program?

See how ActionXM can help you capture, analyze, and act on feedback at scale.