Tracking Performance

Understanding how your nurturing flows perform helps you optimize messaging, timing, and targeting. Without performance data, you’re guessing. With it, you can see what’s working, what’s not, and where to focus improvement efforts.

This article covers the metrics available for email nurturing, how to interpret them, and how to use data to improve your flows.

Why Performance Tracking Matters

Nurturing is an investment. You spend time creating flows, writing content, and managing sequences. Performance tracking tells you whether that investment is paying off.

Good tracking answers questions like:

  • Are people opening my emails?
  • Are they clicking links and engaging?
  • Are too many people unsubscribing?
  • Which flows perform best?
  • Which emails in a sequence work well and which fall flat?
  • Is nurturing actually helping win back deals?

Without answers, you’re running blind. With answers, you can continuously improve.

Accessing Flow Performance

Each flow has its own performance dashboard.

  1. Go to Nurturing > Flows
  2. Click on a flow name to open its details
  3. Navigate to the Performance tab

[Screenshot: Flow performance dashboard with metrics cards and charts]

The performance tab shows metrics for that specific flow. You can also see aggregate nurturing metrics in your main Rizer reports.

Audience Metrics

These metrics tell you about flow membership — who’s in the flow and what’s happening to them.

Total Entered

The number of contacts who have entered this flow since it was activated.

What it tells you: The reach of your flow. A high number means many contacts have matched your criteria and entered the sequence.

What to watch for:

  • Zero or very low entries might mean your audience criteria are too narrow or no matching deals have been recycled
  • Rapid growth means your targeting matches common patterns in your recycled deals
  • Sudden drops might indicate criteria problems or changes in your recycling patterns

Currently Active

Contacts currently in the flow — they’ve entered but haven’t completed, unsubscribed, or been removed.

What it tells you: How many people are mid-sequence right now.

What to watch for:

  • This number fluctuates as people enter and exit
  • A very high number relative to entries might mean your flow is long or people aren’t progressing
  • A very low number might mean people are dropping off quickly

Completed

Contacts who received all emails in the sequence and finished the flow.

What it tells you: How many people made it through the entire nurturing journey.

What to watch for:

  • Completion rate (completed / total entered) indicates whether your flow is the right length and whether content keeps people engaged
  • Low completion might mean too many steps, poor content, or excessive unsubscribes

Removed

Contacts manually removed from the flow by someone on your team.

What it tells you: How often human intervention pulls people out of automated nurturing.

What to watch for:

  • High removal numbers might indicate your criteria are capturing people who shouldn’t be nurtured
  • Some removal is normal — situations change and manual intervention is sometimes needed

Unsubscribed

Contacts who clicked the unsubscribe link in any email from this flow.

What it tells you: How many people actively opted out of your nurturing.

What to watch for:

  • Unsubscribe rate (unsubscribed / total entered) should stay low — under 1-2% is healthy (see the calculation sketch after this list)
  • High unsubscribes signal problems with content relevance, frequency, or targeting
  • Sudden spikes might indicate a specific email causing issues
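
Both audience rates mentioned above are simple ratios of the counts on this dashboard. A minimal sketch with made-up counts (substitute the numbers from your own flow):

```python
# Made-up counts -- replace with the numbers from your flow's dashboard.
total_entered = 400
completed = 180
unsubscribed = 6

completion_rate = completed / total_entered      # 180 / 400 = 45%
unsubscribe_rate = unsubscribed / total_entered  # 6 / 400 = 1.5%

print(f"Completion rate:  {completion_rate:.1%}")
print(f"Unsubscribe rate: {unsubscribe_rate:.1%}")
```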

Email Metrics

These metrics measure how recipients interact with your emails.

Emails Sent

Total number of emails delivered across all steps in the flow.

What it tells you: The volume of email activity from this flow.

How it’s calculated: If 100 contacts each receive 5 emails, that’s 500 emails sent.

Open Rate

Percentage of sent emails that were opened.

What it tells you: Whether your subject lines and sender names are compelling enough to get people to open.

Industry benchmarks: B2B nurturing typically sees 15-25% open rates. Below 15% is concerning. Above 25% is strong.

What affects open rates:

  • Subject line quality — Is it compelling? Does it create curiosity?
  • Sender name — Do recipients recognize and trust the sender?
  • Send timing — Are you reaching people when they check email?
  • Deliverability — Are emails landing in inbox or spam?
  • List quality — Are these real, engaged contacts?

Caveats: Open tracking isn’t perfect. Some email clients block tracking pixels, making opens undercounted. Apple’s Mail Privacy Protection pre-loads images, potentially inflating opens. Use open rate as a directional indicator, not an exact measure.

Click Rate

Percentage of sent emails where recipients clicked at least one link.

What it tells you: Whether your content and calls-to-action are compelling enough to drive engagement.

Industry benchmarks: B2B nurturing typically sees 2-5% click rates. Below 2% suggests content or CTA problems. Above 5% is strong.

What affects click rates:

  • Content relevance — Does the email address something the recipient cares about?
  • Call-to-action clarity — Is it obvious what you want them to do?
  • Link placement — Are links visible and easy to click?
  • Value proposition — Is there a clear benefit to clicking?
  • Mobile optimization — Do links work well on phones?

Click rate vs. click-to-open rate: Click rate is clicks divided by sends. Click-to-open rate is clicks divided by opens. Both are useful: click-to-open tells you how engaging the content is for the people who actually opened.
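
A quick worked example (invented numbers) shows how the two rates can tell different stories about the same send:

```python
# Invented numbers for illustration.
emails_sent = 1000
emails_opened = 220
emails_clicked = 33  # emails where at least one link was clicked

click_rate = emails_clicked / emails_sent            # 33 / 1000 = 3.3% of sends
click_to_open_rate = emails_clicked / emails_opened  # 33 / 220 = 15% of opens

print(f"Click rate:         {click_rate:.1%}")
print(f"Click-to-open rate: {click_to_open_rate:.1%}")
```

Here the click rate looks average while the click-to-open rate is strong, which points at subject lines (not enough opens) rather than the content inside the email.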

Unsubscribe Rate

Percentage of sent emails that resulted in an unsubscribe.

What it tells you: Whether your nurturing is annoying people enough that they opt out.

Target: Keep this under 0.5%. Above 1% is a warning sign. Above 2% is a serious problem.

What causes high unsubscribes:

  • Emails too frequent
  • Content not relevant to recipients
  • Wrong audience targeting
  • Overly aggressive sales messaging
  • Poor quality or spammy content

Bounce Rate

Percentage of emails that failed to deliver.

What it tells you: The quality of your contact data and any deliverability issues.

Target: Keep this under 2%. Above 5% indicates serious problems.

Types of bounces:

  • Hard bounces — Permanent failures. Email address doesn’t exist, domain is invalid. These contacts should be removed.
  • Soft bounces — Temporary failures. Mailbox full, server temporarily unavailable. These might succeed on retry.

What causes high bounces:

  • Outdated contact information in HubSpot
  • Contacts who’ve left companies
  • Typos in email addresses
  • Domain or deliverability issues on your end

Step-Level Performance

Beyond flow-level metrics, you can see how individual steps perform.

Viewing Step Performance

In the flow’s Performance tab, scroll to the Steps section or click into step details.

[Screenshot: Step performance table showing metrics for each email in the sequence]

Each step shows:

  • Emails sent for that step
  • Open rate for that step
  • Click rate for that step
  • Unsubscribe rate for that step

Why Step-Level Data Matters

Step-level performance reveals which emails work and which don’t.

Identify strong performers: Which emails get the best opens and clicks? What’s different about them? Can you apply those lessons elsewhere?

Find weak spots: Which emails underperform? Are they causing unsubscribes? Do they need revision or removal?

Spot drop-off points: Do metrics decline as the sequence progresses? That might be natural (engagement decreases over time) or might indicate a specific email causing problems.

Common Step Patterns

Declining engagement: Opens and clicks decrease through the sequence. Some decline is normal — early emails catch the most interested people. Steep decline suggests later content isn’t holding interest.

Mid-sequence dip: A specific step has much worse metrics than those around it. That email probably needs revision.

Unsubscribe spike: One step has dramatically higher unsubscribes. Something about that email is triggering opt-outs. Review content, tone, and call-to-action.

Strong finish: Later steps perform well. Your sequence builds momentum and delivers value throughout.

Re-engagement Metrics

Ultimately, nurturing should help win back deals. These metrics connect nurturing to business outcomes.

Re-engaged Contacts

Contacts who moved from In recycling to Ready for callback while in the flow.

What it tells you: Whether nurtured contacts are progressing toward re-engagement.

Context: This isn’t solely caused by nurturing — callback dates arrive regardless of nurturing. But nurturing might accelerate readiness or improve receptiveness.

Deals Created

New deals or leads created in HubSpot for contacts who were in this flow.

What it tells you: Whether nurtured contacts are becoming active opportunities.

What to watch for: Compare deal creation rates between nurtured and non-nurtured contacts. Is nurturing making a difference?

Won After Nurturing

Deals that closed as won for contacts who were nurtured by this flow.

What it tells you: The bottom-line impact. Did nurturing help win deals?

Context: Attribution is tricky. A contact might have won even without nurturing. But tracking this helps you see whether nurtured contacts are converting.

Revenue Impact

Total value of deals won for contacts who were in this flow.

What it tells you: The dollar value of wins from nurtured contacts.

How to use it: Compare against the cost and effort of nurturing. Is the ROI positive? This helps justify continued investment in nurturing.
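
A back-of-the-envelope ROI check is a single division. The revenue figure comes from this metric; the cost figure is your own estimate of tooling, content creation, and time. A sketch with placeholder numbers:

```python
# Placeholder figures -- substitute your own.
revenue_from_nurtured_wins = 42_000  # the flow's Revenue Impact metric
estimated_nurturing_cost = 3_500     # tooling, content creation, time spent

roi = (revenue_from_nurtured_wins - estimated_nurturing_cost) / estimated_nurturing_cost
print(f"Nurturing ROI: {roi:.1f}x")  # (42,000 - 3,500) / 3,500 = 11.0x
```

Keep the attribution caveat from the previous metric in mind: this treats every nurtured win as influenced by the flow, which overstates the impact.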

Benchmarking Your Performance

Raw numbers are hard to evaluate without context. Here’s how to benchmark.

Industry Benchmarks

General B2B email benchmarks:

  Metric             Concerning   Acceptable   Strong
  Open rate          Below 15%    15-25%       Above 25%
  Click rate         Below 2%     2-5%         Above 5%
  Unsubscribe rate   Above 1%     0.5-1%       Below 0.5%
  Bounce rate        Above 5%     2-5%         Below 2%

These are rough guidelines. Your specific industry, audience, and content might perform differently.

Internal Benchmarks

Compare flows against each other:

  • Which flows have the best engagement?
  • Which audiences respond most positively?
  • Which step timing works best?

Your own data is the most relevant benchmark. It accounts for your specific situation.

Trend Benchmarks

Compare current performance to past performance:

  • Are open rates improving or declining?
  • Is engagement getting better as you refine content?
  • Are unsubscribes trending up (bad) or down (good)?

Trends tell you whether you’re moving in the right direction.

Identifying and Diagnosing Issues

Performance data helps you spot and fix problems.

Low Open Rates

Possible causes:

  • Weak subject lines that don’t create interest
  • Sender name not recognized or trusted
  • Emails landing in spam
  • Sending at bad times
  • List quality issues (old or inactive contacts)

How to investigate:

  • Test different subject line styles
  • Try different sender names
  • Check deliverability (test emails to yourself)
  • Experiment with send timing
  • Review whether contacts are genuinely appropriate for nurturing

Quick fixes:

  • Rewrite subject lines to be more compelling
  • Use a more recognizable sender name
  • Verify DNS records are correct for deliverability

Low Click Rates

Possible causes:

  • Content not relevant to the audience
  • Weak or unclear calls-to-action
  • Links buried or hard to find
  • Value proposition not compelling
  • Poor mobile experience

How to investigate:

  • Review content relevance to audience criteria
  • Check CTAs — are they clear and prominent?
  • Test emails on mobile devices
  • Get feedback from colleagues on content quality

Quick fixes:

  • Make CTAs more prominent (buttons instead of text links)
  • Ensure above-the-fold content hooks interest
  • Simplify — fewer links, clearer direction

High Unsubscribe Rates

Possible causes:

  • Emails too frequent
  • Content doesn’t match what recipients expect
  • Wrong people in the audience
  • Overly salesy or pushy tone
  • Content is low quality or feels like spam

How to investigate:

  • Review audience criteria — are these the right people?
  • Read your content as a recipient would — is it valuable?
  • Check frequency — are you emailing too often?
  • Look at which specific emails cause unsubscribes

Quick fixes:

  • Reduce email frequency
  • Refine audience targeting to better-fit contacts
  • Revise content to provide more value, less pitch

High Bounce Rates

Possible causes:

  • Old contact data in HubSpot
  • Contacts who’ve left their companies
  • Invalid email addresses
  • Domain or deliverability issues

How to investigate:

  • Review bounced addresses — are they clearly invalid?
  • Check how old recycled deals are — older deals have more stale contacts
  • Verify your sending domain is properly configured

Quick fixes:

  • Clean up obviously bad addresses
  • Consider excluding very old deals from nurturing
  • Re-verify domain DNS records

Declining Engagement Through Sequence

Possible causes:

  • Natural decay — early emails get more attention
  • Later content is less relevant or compelling
  • Sequence is too long
  • Contacts are losing interest

How to investigate:

  • Review later emails — are they as strong as early ones?
  • Check sequence length — is it appropriate for your audience?
  • Look at where the steepest drop-off occurs

Quick fixes:

  • Strengthen later content
  • Consider shortening the sequence
  • Front-load your best content

A/B Testing (Manual Approach)

Rizer doesn’t have built-in A/B testing, but you can test manually.

How to Test

  1. Create two similar flows with one variable different
  2. Split your audience between them (using different criteria or random assignment; see the sketch after this list)
  3. Run both flows simultaneously
  4. Compare performance after sufficient volume (at least 100 sends per variation)
  5. Keep the winner, pause the loser
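
For step 2, Rizer won't split an audience randomly for you, so the assignment has to happen outside the tool. One simple, repeatable approach is to hash each contact's email address and route contacts by the result. A minimal sketch (the contact list is hypothetical; in practice you'd export it from HubSpot):

```python
import hashlib

def assign_variant(email: str) -> str:
    """Deterministically assign a contact to variant A or B based on their email."""
    digest = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Hypothetical contacts -- in practice, export the list from HubSpot.
for email in ["ana@example.com", "ben@example.com", "cho@example.com"]:
    print(email, "-> variant", assign_variant(email))
```

Because the hash is deterministic, re-running the script later gives the same assignments, which keeps the test clean as new contacts enter the flows.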

What to Test

Subject lines: Same content, different subject lines. Which gets better opens?

Email length: Short and punchy vs. detailed and thorough. Which gets better engagement?

CTA style: Button vs. text link. Direct ask vs. soft suggestion. Which gets more clicks?

Send timing: Different intervals between steps. Which pacing works better?

Sender name: Personal name vs. company name. Which gets better response?

Content angle: Educational vs. promotional. Which resonates more?

Testing Best Practices

Test one variable at a time. If you change subject line and content and timing, you won’t know what caused the difference.

Wait for statistical significance. Small samples produce unreliable results; don't call a winner until you have enough data.
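
If you want a rough check rather than a gut call, a standard two-proportion z-test compares open or click rates between two variations. A minimal sketch, assuming you've pulled the send and open counts for each variation from its Performance tab:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_a, sends_a, successes_b, sends_b):
    """Two-sided z-test comparing two rates (e.g. opens per send). Returns the p-value."""
    p_a, p_b = successes_a / sends_a, successes_b / sends_b
    pooled = (successes_a + successes_b) / (sends_a + sends_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / standard_error
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative counts: variation A opened 45 of 150 sends, variation B opened 30 of 150.
p_value = two_proportion_z_test(45, 150, 30, 150)
print(f"p-value: {p_value:.3f}")  # roughly 0.046; below ~0.05 suggests a real difference
```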

Document what you learn. Keep notes on what you tested and what worked. Build institutional knowledge.

Apply learnings broadly. When you find something that works, apply it to other flows.

Building a Performance Review Routine

Regular review keeps nurturing effective.

Weekly Check

Quick scan — 5-10 minutes:

  • Any flows with obvious problems? (Zero sends, spike in unsubscribes)
  • Are active flows sending as expected?
  • Any bounces or deliverability issues?

Catch problems early before they affect many contacts.

Monthly Review

Deeper look — 30-60 minutes:

  • Review metrics for all active flows
  • Compare to previous month — improving or declining?
  • Identify underperforming flows or steps
  • Plan content updates or optimizations

This is when you make improvements based on data.

Quarterly Strategy Review

Big picture — 1-2 hours:

  • Is nurturing contributing to recycling success?
  • Which flows deliver the best ROI?
  • Should you create new flows for underserved segments?
  • Should you retire flows that aren’t working?
  • What have you learned that should inform future flows?

This is when you step back and assess whether your nurturing strategy is working.

Reporting to Stakeholders

Sometimes you need to share nurturing performance with others.

What to Include

Executive summary: Is nurturing working? One paragraph with key takeaways.

Key metrics: Open rate, click rate, unsubscribe rate. Compare to benchmarks and previous periods.

Business impact: Deals created, revenue won. Connect nurturing to outcomes.

Insights: What you’ve learned. What’s working, what isn’t.

Next steps: What you plan to improve.

What to Skip

Avoid overwhelming with every metric. Focus on what matters:

  • Skip raw send counts (not meaningful without context)
  • Skip step-by-step breakdowns (too detailed for most audiences)
  • Skip technical deliverability details (unless there are problems)

Lead with outcomes and insights, not data dumps.

Visualizations

Simple charts help:

  • Trend lines showing metrics over time
  • Bar charts comparing flows
  • Before/after comparisons when you’ve made improvements

Keep visuals clean and focused. One insight per chart.
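
If you'd rather build a trend line yourself from exported data, a few lines of Python do the job. A sketch assuming a CSV export with month and open_rate columns (the filename and column names are assumptions; match them to your actual export):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed filename and columns ("month", "open_rate") -- adjust to match your export.
df = pd.read_csv("flow_performance_export.csv", parse_dates=["month"])

plt.plot(df["month"], df["open_rate"], marker="o")
plt.title("Open rate over time")  # one insight per chart
plt.ylabel("Open rate")
plt.tight_layout()
plt.show()
```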

Using Data to Improve

Data is only valuable if it drives action.

Content Improvements

Low opens? Rewrite subject lines. Test different approaches.

Low clicks? Strengthen CTAs. Make value proposition clearer.

High unsubscribes at specific step? Revise that email. Change tone or content.

Don’t just observe problems — fix them.

Targeting Improvements

Poor engagement for a segment? Maybe they shouldn’t be nurtured, or need different content.

Strong engagement for unexpected audience? Consider expanding that flow or creating similar ones.

Overlap causing fatigue? Refine criteria to prevent contacts from being in too many flows.

Timing Improvements

Engagement drops sharply after step 3? Maybe the sequence is too long.

Best engagement on step 5? Make sure earlier steps are setting up that content well.

Seasonal patterns? Adjust send timing for holidays or busy periods.

Strategic Improvements

Nurturing not contributing to wins? Revisit the connection between nurturing and re-engagement.

One flow dramatically outperforming others? Study what makes it work. Apply lessons elsewhere.

Diminishing returns overall? Maybe your best contacts have been nurtured. Focus on new segments.

Exporting Performance Data

For deeper analysis or external reporting:

  1. Navigate to the flow’s Performance tab
  2. Click Export (usually top right)
  3. Choose format:
    • CSV — For spreadsheet analysis
    • PDF — For sharing

Exported data includes the metrics visible in the current view with any filters applied.

Using Exports

Custom analysis: Import CSV into Excel or Google Sheets for pivot tables, custom calculations, or combining with other data.
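
If you'd rather script the analysis than use a spreadsheet, a short pandas snippet covers most custom calculations. A minimal sketch, assuming the export has per-step columns named step, sends, opens, and clicks (those names are assumptions; check your actual export header):

```python
import pandas as pd

# Assumed filename and columns (step, sends, opens, clicks) -- verify against your export.
df = pd.read_csv("flow_performance_export.csv")

df["open_rate"] = df["opens"] / df["sends"]
df["click_to_open_rate"] = df["clicks"] / df["opens"]

# Rank steps by click-to-open rate to spot the strongest and weakest emails.
print(df.sort_values("click_to_open_rate", ascending=False)[
    ["step", "open_rate", "click_to_open_rate"]
])
```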

Historical tracking: Export periodically to build a record of performance over time.

Stakeholder reports: Use PDF exports for clean reports to share with others.

Common Questions

How long before I have meaningful data?

Wait until you have at least 100 emails sent before drawing conclusions. Smaller samples produce unreliable metrics. For newer flows, be patient.

Why are my open rates different from what I see in other tools?

Different tools count opens differently. Tracking pixel placement, how they handle privacy features, and what counts as an “open” all vary. Compare trends within Rizer rather than absolute numbers across tools.

Can I see performance for a specific time period?

Yes, use date filters in the Performance tab to view metrics for specific periods. This helps with trend analysis and before/after comparisons.

How do I know if a metric is “bad”?

Compare to benchmarks, your own history, and other flows. A 10% open rate is concerning by industry standards but might be great if you were at 5% last month.

What if a flow has great metrics but no business impact?

Engagement without outcomes means something’s disconnected. Maybe nurtured contacts aren’t becoming ready for callback, or re-engagement isn’t happening. Investigate the hand-off between nurturing and sales action.

Should I pause a flow to analyze it?

Usually not necessary. You can review performance while flows run. Pause only if you’ve identified a problem and need to stop sends while you fix it.
