Train Tracks identify which state users are in right now based on their behavior. Not which email they opened or which button they clicked, but their actual state in the journey.
Once you know the state, you know what they need to do next and what's blocking them from making that transition.
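The state-detection idea above can be sketched as a simple classifier over a user's event log. This is a minimal illustration, not the actual implementation: event names like create_project and export are hypothetical placeholders for a product's real analytics events, and the 30-day recency window is an assumption.

```python
from datetime import datetime, timedelta

def classify_state(events: list[dict], now: datetime) -> str:
    """Classify a user into a lifecycle state from behavior, not message opens."""
    names = {e["name"] for e in events}
    last_seen = max((e["ts"] for e in events), default=None)

    if "export" in names:
        # Multiple recent exports suggests an ongoing need; one export, maybe not.
        recent = last_seen and (now - last_seen < timedelta(days=30))
        multiple = sum(e["name"] == "export" for e in events) > 1
        return "retained" if (recent and multiple) else "one_time"
    if "create_project" in names:
        return "evaluating"  # created something, hasn't committed yet
    return "signed_up"       # still figuring out if this is worth their time

events = [
    {"name": "signup", "ts": datetime(2024, 1, 1)},
    {"name": "create_project", "ts": datetime(2024, 1, 2)},
]
print(classify_state(events, datetime(2024, 1, 10)))  # evaluating
```

Once the state is a function of behavior, the messaging system can key off state transitions instead of elapsed time.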
Signed up but hasn't created anything yet. Still figuring out if this is worth their time.
Transition needed: Get them to experience the core value (creating something)
Messaging addresses: "Is this easy enough?" "Will this work for my use case?"
Created a project, seeing what's possible, but hasn't committed yet.
Transition needed: Get them to export/use what they built
Messaging addresses: "Is this good enough to actually use?" "What can I do with this?"
Exported once, got what they needed, might disappear.
Transition needed: Help them see the ongoing need
Messaging addresses: "Why would I need this again?" "What else can this do for me?"
These states get names like Trial User, One-Time User, Slow Burner, and Retained User. Some users activate in hours, others over weeks. Time-based sequences miss this reality.
I map how users actually move through your product using Amplitude, Mixpanel, or PostHog. What they do, not what you think they do.
Funnel mapping: Where users enter, what they do first, where they get stuck, what behaviors predict conversion
Event log analysis: Which events signal state changes, instrumentation gaps, entry/exit signals
Time window analysis: How long conversions take naturally, when users make decisions, distribution patterns
Output: Documented user states with real behavior patterns, event tracking requirements.
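The time window analysis above can be sketched from a raw event export. The log below is a toy stand-in for an Amplitude/Mixpanel/PostHog export, and the signup and export event names are hypothetical; the point is that conversion timing comes from the data, not from a calendar.

```python
from datetime import datetime
from statistics import median

# Toy event log: (user_id, event, timestamp). Real data would come from
# an analytics export; event names here are placeholders.
log = [
    ("u1", "signup", datetime(2024, 1, 1)),
    ("u1", "export", datetime(2024, 1, 3)),
    ("u2", "signup", datetime(2024, 1, 1)),
    ("u2", "export", datetime(2024, 1, 15)),
    ("u3", "signup", datetime(2024, 1, 2)),  # never converted
]

signups = {u: ts for u, e, ts in log if e == "signup"}
exports = {u: ts for u, e, ts in log if e == "export"}

# Days from signup to first export, for users who converted at all.
days_to_convert = sorted(
    (exports[u] - signups[u]).days for u in exports if u in signups
)
print("median days to convert:", median(days_to_convert))
print("conversion rate:", len(days_to_convert) / len(signups))
```

On real data the distribution (not just the median) matters: a wide spread between fast and slow converters is exactly what makes fixed-day drip sequences miss.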
Not all users move the same way. I identify behavioral archetypes based on engagement patterns and customer research.
What this looks like varies completely by product.
For one client, archetypes were based on urgency (someone job searching right now vs. passively updating their resume). For another, it was maturity level (startup founder vs. enterprise PM). For another, usage intensity (daily power user vs. monthly light user).
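For the usage-intensity case, the bucketing can be as simple as thresholds on recent activity. The session-count thresholds below are illustrative assumptions, not a recommended segmentation; real archetypes come out of the behavioral data and customer research.

```python
# Sketch: bucket users into intensity archetypes from sessions in the
# last 30 days. Thresholds are illustrative, not prescriptive.
def archetype(sessions_last_30d: int) -> str:
    if sessions_last_30d >= 20:
        return "daily power user"
    if sessions_last_30d >= 4:
        return "weekly regular"
    if sessions_last_30d >= 1:
        return "monthly light user"
    return "dormant"

for count in (25, 6, 1, 0):
    print(count, "->", archetype(count))
```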
Train Track Definition: for each critical transition, I map the current state, the target state, and what's blocking users from making the move.
Also covered: Competitive benchmarking, messaging positioning audit, and deliverability audit.
Roadmap prioritization: Which tracks drive the most revenue? Which are easiest to implement? Which have the biggest gaps right now?
You get: Complete audit document, journey maps, train track definitions, user archetypes, competitive insights, deliverability report, prioritized roadmap.
I build sequences across email, in-app, push, SMS - whatever channels fit your lifecycle. For each track, I test different behavioral angles to find what resonates.
We start with ONE message per transition, test if it gets a lift, then build the full sequence once we know what works.
After launch, I run hypothesis-driven experiments. I pick the highest-impact track, test 4-6 different behavioral angles, measure the lift with a holdout group, and the winner becomes the new control.
Testing approach: 1-2K users per variant, 10-20% holdout, focus on one track at a time.
Example: A key retention feature had low adoption (18% of subscribers). Tested different messaging angles to drive usage. Specific user testimonials ("This feature helped me land 3 interviews") outperformed generic benefit statements ("Get 2.3x more interviews") by 4%. That 4% lift in feature adoption translated to measurably higher retention and revenue per user.
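Measuring lift against a holdout can be sketched with a two-proportion z-test: compare the conversion rate in the message variant against the no-message holdout and check the difference isn't noise. All counts below are hypothetical, sized roughly to the 1-2K-per-variant, 10-20% holdout approach described above.

```python
from math import sqrt, erf

def lift_and_significance(conv_treat, n_treat, conv_hold, n_hold):
    """Relative lift of treatment over holdout, plus a two-sided p-value."""
    p_t, p_h = conv_treat / n_treat, conv_hold / n_hold
    lift = (p_t - p_h) / p_h
    # Pooled standard error for the two-proportion z-test.
    pooled = (conv_treat + conv_hold) / (n_treat + n_hold)
    se = sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_hold))
    z = (p_t - p_h) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return lift, p_value

# Hypothetical: 1,500-user variant converting at 18% vs. a 300-user
# (20%) holdout converting at 14%.
lift, p = lift_and_significance(270, 1500, 42, 300)
print(f"lift: {lift:.1%}, p: {p:.3f}")
```

A winner only becomes the new control when the lift clears both the holdout comparison and a sensible significance bar; with small holdouts, that often means letting the test run longer.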
You get bi-weekly calls reviewing results, prioritized experiment roadmap, analytics dashboard, archetype performance breakdowns.
When Train Tracks are properly mapped and implemented:
Feature adoption typically lifts 3-5% when messaging addresses the right mindset shift at the right time.
Higher feature adoption directly drives conversion, retention, and expansion revenue.
Campaigns live in Month 1 vs. 6-9 months with traditional approaches.
Traditional lifecycle build: 6 months planning/building + 3 months measuring = 9 months to know what works
My approach: First campaigns live Month 1
The difference isn't better campaigns - it's the sequencing. Start with one message per transition, measure lift, iterate. Don't wait to build the "perfect" system. Get campaigns live, capture revenue, optimize from there.
That's 8 extra months of revenue lift. If a working lifecycle system adds 10% to conversion, you get that 10% for 8 additional months while competitors are still planning.
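As a back-of-envelope check on that claim, assuming a hypothetical $100k/month revenue baseline (an illustration, not a client figure):

```python
# Value of shipping 8 months earlier, given a lifecycle system that
# adds 10% to conversion. The baseline figure is a made-up assumption.
baseline_monthly_revenue = 100_000
conversion_lift = 0.10
months_earlier = 8

extra_revenue = baseline_monthly_revenue * conversion_lift * months_earlier
print(f"${extra_revenue:,.0f}")  # $80,000
```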
Impact is always archetype-dependent. Segmented messaging consistently outperforms generic messaging by 2-3x.
One-time, 2 weeks
Full audit of your funnel, analytics, existing campaigns, and competitor landscape. I map customer journeys, identify the biggest leaks, and define your train tracks — the state transitions that matter most.
You get: Funnel audit, journey maps, deliverability review, segmentation strategy, train track definitions, prioritized roadmap
3-month minimum
Includes Discovery.
Month 1: Discovery (2 weeks) + first campaigns live
Months 2-3: Build, launch, monitor, and actively iterate campaigns
Multi-channel (email, in-app, push, SMS - whatever fits your lifecycle). I don't just hand off campaigns - I launch them live, monitor performance daily, and continuously test alternatives to find the right messaging.
Value of speed: Campaigns live by Month 1 vs. 6-9 months with traditional approaches. That's 5-8 additional months of revenue lift while competitors are still planning.