Introduction: The PLG scoring paradigm shift
Traditional lead scoring was built for a different era. When software required extensive demos and long sales cycles, demographic data - company size, job title, industry - made sense as the primary qualification criteria. But product-led growth has changed what’s important.
For PLG companies, the question isn’t whether someone downloaded your content. It’s whether they’re actually using your product in ways that signal readiness to buy.
Understanding Product-Qualified Leads (PQLs)
Before diving into scoring mechanics, PLG companies need to understand what makes a lead “product-qualified.”
The three pillars of PQL definition
A true PQL satisfies three distinct criteria that work together to signal purchase readiness:
- Value realization: The user has experienced your product’s core value through activation events and meaningful engagement.
- Customer fit: They match your ideal customer profile in terms of company size, industry, role, or other demographic factors.
- Buying intent signals: They’ve demonstrated behaviors suggesting readiness to purchase, such as viewing pricing, hitting usage limits, or requesting features.
All three criteria must be met. A user from a Fortune 500 company (great fit) who barely logs in (no value) isn’t a PQL. Similarly, a power user from a student email address (poor fit) shouldn’t consume sales resources.
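The all-three-pillars rule can be sketched as a simple predicate. The field names and the activation threshold below are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Lead:
    activation_events: int   # value realization proxy
    matches_icp: bool        # customer fit
    viewed_pricing: bool     # buying intent signal
    hit_usage_limit: bool    # buying intent signal

def is_pql(lead: Lead) -> bool:
    """A lead is product-qualified only when all three pillars hold."""
    value_realized = lead.activation_events >= 3  # assumed threshold
    buying_intent = lead.viewed_pricing or lead.hit_usage_limit
    return value_realized and lead.matches_icp and buying_intent
```

Note that the power user with a poor fit (`matches_icp=False`) fails the check no matter how high their activation count climbs, which is exactly the Fortune 500 / student-email asymmetry described above.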
How PQLs differ from MQLs
The distinction matters for resource allocation:
- MQLs are based on demographics and marketing engagement - downloading content, attending webinars, visiting your site
- PQLs are based on actual product behavior - feature usage, collaboration patterns, time-to-value metrics
Real-world impact: Slack’s explosive growth to a $27.7B acquisition was powered by usage-driven scoring that identified when teams sent thousands of messages and approached free plan limits.
Core usage metrics that predict conversion
PLG companies should track these high-signal behavioral indicators:
Activation and engagement metrics
Login frequency and consistency
- Daily active users convert at 3-4x the rate of weekly users.
- Track: DAU/MAU ratio, session frequency, consecutive login days
- Threshold example: 3+ logins in 7 days signals active evaluation
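The DAU/MAU stickiness ratio mentioned above is a one-liner to compute; this sketch just guards against an empty denominator:

```python
def stickiness(daily_active: int, monthly_active: int) -> float:
    """DAU/MAU ratio: the share of monthly users who show up on a given day."""
    return daily_active / monthly_active if monthly_active else 0.0

# e.g. 300 DAU against 1,000 MAU gives a stickiness of 0.3
```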
Time to value (TTV)
- How quickly users reach their “aha moment” of value realization
- Track: Days from signup to first key action completion
- Faster TTV strongly correlates with conversion
Feature adoption depth
- Which features users adopt and in what sequence matters enormously.
- Track: Core feature usage, advanced feature exploration, feature breadth vs depth
- Example: Users who explore premium features show higher conversion intent than those who never venture beyond basics
Collaboration and expansion signals
Team invitation activity
- Multi-user collaboration indicates team adoption intent
- Track: Number of invites sent, acceptance rate, cross-departmental invitations
Data import and integration attempts
- Users who connect external tools or import data are making infrastructure investments, a strong commitment signal
- Track: Number of integrations connected, data volume imported, API usage
Intent and readiness indicators
Pricing page visits
- Clear hand-raising behavior that signals purchase consideration
- Track: Visit frequency, time spent on the page
Usage limit approaches
- Users nearing free plan limits are experiencing enough value to need more
- Example: Slack’s 2,000 message threshold, storage apps hitting capacity limits
- Track: Percentage of quota used, feature gate encounters, upgrade prompts clicked
Support and sales interactions
- Direct requests for demos, pricing information, or “talk to sales” clicks
- Track: Support ticket topics, sales chat initiations, demo requests
Building your usage-based scoring model
Step 1: Establish your baseline
Start with data analysis, not assumptions:
- Calculate your overall lead-to-customer conversion rate as your benchmark
- Analyze which attributes and behaviors correlate with closed deals
- Identify conversion rates for specific behaviors (e.g., users who invite 3+ teammates convert at X%)
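The baseline and per-behavior analysis in Step 1 can be sketched with plain Python; the behavior names and the tiny lead list here are illustrative stand-ins for your real conversion history:

```python
def conversion_rate(converted: int, total: int) -> float:
    return converted / total if total else 0.0

# Each lead: (set of observed behaviors, did they convert?). Names are illustrative.
leads = [
    ({"invited_3_teammates", "viewed_pricing"}, True),
    ({"invited_3_teammates"}, True),
    ({"viewed_pricing"}, False),
    (set(), False),
]

# Overall lead-to-customer rate: your benchmark.
baseline = conversion_rate(sum(won for _, won in leads), len(leads))

def behavior_rate(behavior: str) -> float:
    """Conversion rate among only the leads that exhibited the behavior."""
    outcomes = [won for behaviors, won in leads if behavior in behaviors]
    return conversion_rate(sum(outcomes), len(outcomes))
```

Comparing `behavior_rate("invited_3_teammates")` against `baseline` tells you how much more predictive that behavior is than an average lead, which feeds directly into Step 3's point assignment.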
Step 2: Define scoring categories and weights
Structure your model around multiple dimensions:
Product engagement score (40-50% weight)
- Core feature usage frequency
- Feature adoption breadth and depth
- Session duration and recency
- Activation milestone completion
Collaboration score (20-30% weight)
- Team members invited
- Shared workspaces or projects created
- Cross-user activity and interactions
- Organizational spread indicators
Intent signals score (15-20% weight)
- Pricing page visits
- Upgrade attempt clicks
- Usage limit encounters
- Support inquiries about paid features
Fit score (10-15% weight)
- Company size alignment with ICP
- Industry match
- Geographic relevance
- Job title/role appropriateness
This is just an example. The exact weights depend on your business and product. For example, B2B collaboration tools might weight team invitations higher, while individual productivity tools might emphasize feature adoption.
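Combining the four dimensions is a weighted sum. The weights below are the midpoints of the suggested ranges, purely as an assumption to make the example concrete:

```python
# Midpoints of the suggested ranges above; tune these to your own conversion data.
WEIGHTS = {"engagement": 0.45, "collaboration": 0.25, "intent": 0.18, "fit": 0.12}

def composite_score(dimension_scores: dict) -> float:
    """Weighted sum of per-dimension scores, each normalized to 0-100."""
    return sum(w * dimension_scores.get(dim, 0.0) for dim, w in WEIGHTS.items())
```

Keeping the weights in one dictionary (summing to 1.0) makes it easy to rebalance per the B2B-collaboration vs. individual-productivity distinction without touching the scoring logic.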
Step 3: Assign point values based on data
Use actual conversion data to inform scoring:
- Compare close rates of each behavior against your overall baseline
- If your baseline conversion is 2% and users who integrate 2+ tools convert at 10%, assign proportionally higher points
- Example formula: (Behavior close rate ÷ Baseline close rate) × Base points = Behavior points
Example scoring framework:
- Created 3+ projects: 30 points (10% conversion vs 2% baseline = 5x multiplier)
- Invited 5+ team members: 40 points (12% conversion = 6x multiplier)
- Viewed pricing 3+ times: 25 points (8% conversion = 4x multiplier)
- Connected 2+ integrations: 35 points (11% conversion = 5.5x multiplier)
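The Step 3 formula translates directly to code. The base-point unit is a free parameter you choose (10 is used here as an assumption; the framework above uses its own units):

```python
def behavior_points(behavior_rate: float, baseline_rate: float,
                    base_points: float = 10) -> float:
    """(Behavior close rate / baseline close rate) x base points."""
    if baseline_rate <= 0:
        return 0.0
    return (behavior_rate / baseline_rate) * base_points

# 10% conversion vs a 2% baseline = 5x multiplier = 50 points at base 10
```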
Step 4: Implement negative scoring
Equally important: subtract points for disqualifying behaviors.
- Generic email addresses (-20 points)
- No activity in 30 days (-5 points per week)
- Unsubscribed from communications (-15 points)
- Outside target geography (-25 points)
- Student or personal use indicators (-30 points)
Step 5: Set actionable thresholds
Define clear score ranges that trigger specific actions:
- 80-100 points: Hot PQL - immediate sales outreach with personalized messaging
- 60-79 points: Warm PQL - automated sales outreach, CS check-in
- 40-59 points: Engaged user - continued product-led nurture, targeted education
- Below 40: Self-serve user - automated onboarding flows, no sales touch
A low-fit, high-engagement user might be perfect for self-serve conversion. A high-fit, medium-engagement user might need sales assistance to get activated.
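The threshold tiers above map naturally onto a small routing function; the tier labels are illustrative:

```python
def route(score: float) -> str:
    """Map a total lead score onto the action tiers above."""
    if score >= 80:
        return "hot_pql"       # immediate personalized sales outreach
    if score >= 60:
        return "warm_pql"      # automated outreach plus a CS check-in
    if score >= 40:
        return "engaged_user"  # product-led nurture and targeted education
    return "self_serve"        # automated onboarding, no sales touch
```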
Advanced scoring considerations
Multiple scoring models for different motions
Mature PLG companies often maintain separate scores for distinct use cases:
- Free-to-paid conversion score: Optimized for individual or small team upgrades
- Team expansion score: Focused on adding seats to existing paid accounts
- Enterprise conversion score: Weighted toward organizational rollout signals
- Upsell/cross-sell score: Tracks feature adoption patterns suggesting need for higher tiers
Predictive scoring with machine learning
Once you have sufficient data, ML models can outperform manual scoring:
- Analyze thousands of data points to identify patterns humans might miss
- Continuously learn and improve from new conversion data
- Provide probability scores rather than simple point totals
- Require a minimum sample size (typically 50+ conversions, with both positive and negative examples)
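A minimal sketch of predictive scoring, assuming scikit-learn is available. The features and the toy training set are invented for illustration; a real model would train on your full behavioral history:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: [logins_per_week, teammates_invited, pricing_page_views].
# A real model needs far more history (the 50+ conversion minimum above).
X = np.array([
    [1, 0, 0], [2, 0, 1], [0, 0, 0], [1, 1, 0],   # did not convert
    [7, 5, 3], [6, 4, 2], [8, 6, 4], [5, 3, 2],   # converted
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# The output is a conversion probability, not a point total.
p_heavy = model.predict_proba([[6, 5, 3]])[0, 1]
p_light = model.predict_proba([[1, 0, 0]])[0, 1]
```

Probabilities are directly comparable across leads and recalibrate as new conversion data arrives, which is the practical advantage over hand-tuned point totals.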
Time-based decay
Behavioral signals lose value over time:
- A login today is more valuable than a login 30 days ago
- Implement point decay: reduce scores by X% per week of inactivity
- Example: 5-point login activity decays by 25% per week, reaching zero after 4 weeks
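The linear decay in the example (25% of the original value per week, zero after four weeks) can be sketched as:

```python
def decayed_points(points: float, weeks_inactive: int,
                   weekly_decay: float = 0.25) -> float:
    """Linear decay: lose a fixed fraction of the original value per inactive
    week, reaching zero after 1/weekly_decay weeks (four weeks at 25%)."""
    return max(points * (1 - weekly_decay * weeks_inactive), 0.0)

# A 5-point login: 5.0 -> 3.75 -> 2.5 -> 1.25 -> 0.0 over four weeks
```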
Account-level vs. individual scoring
For B2B PLG, aggregate individual signals to the account level:
- Track both individual PQLs and Product-Qualified Accounts (PQAs)
- Account signals: multiple departments using product, executive-level signups, company-wide integrations
- A single power user might indicate opportunity, but multiple users across departments signals enterprise readiness
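One way to roll individual signals up to the account level: take the top individual score and add a bonus for organizational spread. The aggregation rule and the 10-point-per-department bonus are assumptions for illustration:

```python
def account_score(user_scores: dict, departments: set) -> float:
    """Roll individual scores up to a Product-Qualified Account (PQA) score.
    Illustrative: top individual score plus an assumed 10-point bonus for
    each additional department using the product."""
    if not user_scores:
        return 0.0
    spread_bonus = 10 * max(len(departments) - 1, 0)
    return max(user_scores.values()) + spread_bonus
```

Under this rule, two moderate users in different departments can outscore a single power user, matching the intuition that cross-department spread signals enterprise readiness.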
Sales and CS enablement
Scores mean nothing without proper activation:
- Create routing rules that automatically assign high-scoring PQLs to sales reps
- Provide context: Why did this lead score high? What specific behaviors triggered the qualification?
- Set up Slack notifications when leads cross critical thresholds
- Generate daily prioritized lead lists for sales based on score and recency
Common pitfalls to avoid
Overcomplicating too early
- Start simple with 5-7 key behaviors, not 50
- Add complexity only when you have data to support it
Scoring without fit
- High engagement from non-ICP accounts wastes sales resources
- Always combine usage signals with demographic qualification
Not accounting for buyer vs. user disconnect
- In enterprise scenarios, the end user and decision-maker are often different people
- Track both usage patterns and decision-maker engagement
FAQs
How is usage-based lead scoring different from traditional lead scoring?
Traditional lead scoring assigns points based on demographic data (job title, company size) and marketing engagement (content downloads, email opens). Usage-based scoring prioritizes actual product behavior - feature adoption, collaboration patterns, integration activity, and time-in-product. This shift reflects how PLG buyers make decisions: by experiencing value firsthand rather than consuming marketing content.
What are the most important product usage signals to track for lead scoring?
The highest-value signals fall into three categories: activation metrics (login frequency, feature adoption, time-to-value), collaboration indicators (team invites, shared projects, cross-user activity), and intent signals (pricing page visits, usage limit encounters, upgrade attempts). Companies should weight behaviors based on their correlation with actual conversions. For example, Slack found that workspaces sending 2,000+ messages and creating multiple channels were significantly more likely to convert, making these behaviors high-value scoring triggers.
How do I calculate point values for different user behaviors?
Start by establishing your baseline conversion rate (total customers ÷ total leads). Then analyze the conversion rate for specific behaviors. If your baseline is 2% and users who integrate 3+ tools convert at 10%, that behavior is 5x more valuable - so assign it 5x the base points. For example, if your base point unit is 10, that behavior would receive 50 points. This data-driven approach ensures your scoring reflects actual predictive value rather than assumptions.
Should I use the same lead score for free trial conversions and enterprise expansion opportunities?
No - mature PLG companies typically maintain separate scoring models for different motions. Free-to-paid conversion scores emphasize individual usage depth and feature adoption. Enterprise expansion scores weight account-level signals like multiple departments using the product, executive signups, and cross-team collaboration. Team expansion scores focus on collaboration metrics. Each motion has different qualifying behaviors, so a single score rarely optimizes for all scenarios.
How do I prevent false positives like students or competitors from scoring high?
Implement negative scoring and fit criteria. Deduct significant points for non-business email domains, geographic mismatches with your ICP, student identifiers, or competitive company names. Additionally, don’t rely on usage alone - always combine engagement scores with fit scores. A user with perfect engagement but terrible fit should be routed to self-serve channels rather than consuming sales resources. The goal is high engagement AND good fit, not just one or the other.