You've hired the growth team. You've bought the tools. And still, every Monday:
“Which channels are actually driving retention?” Silence. A spreadsheet. Three weeks old.
“What did we decide about Dev.to?” Nobody remembers. Buried in chat.
“Who's using us in production?” Shrugs. GitHub stars don't tell you that.
The problem isn't your team. It's not the tools. It's the systems that don't exist yet.
Five systems. One person. 18 weeks.
Who built them?
Nik L
I did.
Five systems. Each one plays a role.
Built at Pieces for Developers. Each started with a problem the growth team kept hitting — and no tool on the market that solved it for our use-case.
OSIRIS (Memory): knowledge powers all agents
NEXUS (Listening)
DATA (Measurement)
SIGNAL (Discovery): insights feed distribution
CONTENT (Distribution)
01 / OSIRIS
Remembers
Company Knowledge System
At Pieces, the growth team discussed competitive positioning last Tuesday. Product feedback from a key account came in on Thursday. By Monday's planning meeting, both are forgotten.
50+ chat spaces, daily meeting transcripts, GitHub, product docs — finding one answer took 15 minutes. Often with no result.
The kinds of questions this answers:
“What new features shipped last week that need marketing content?”
“What did the growth team decide about the Dev.to campaign?”
“What are customers saying about onboarding friction?”
Query → Intent Classify → [Chat | Docs | Git] (parallel retrieval) → CRAG → Answer
$ osiris ask "What did the growth team decide about the Dev.to campaign?"

Intent: internal | Collections: 3
Retrieved: 24 → Reranked → CRAG: 7 passed

The growth team decided to double down on Dev.to based on attribution
data showing 34% activation rate vs 12% from paid channels.
Sources: Growth Sync (Jan 21), Campaign Review (Jan 18)
Latency: 2.8s | Context: 10.2k tokens
The hard part isn't search — it's knowing which results are relevant. OSIRIS grades every retrieved document before including it, so you don't get hallucinated answers mixing last quarter's strategy with this quarter's pivot. The same knowledge base powers marketing agents — competitive monitoring, feedback synthesis, campaign decision history.
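A minimal sketch of that grading step, with a toy keyword grader standing in for the LLM relevance judge. The `Doc` shape, the grader, and the 0.5 threshold are illustrative assumptions, not the production implementation:

```python
from dataclasses import dataclass


@dataclass
class Doc:
    source: str
    text: str


def keyword_grade(query: str, doc: Doc) -> float:
    """Toy stand-in for an LLM relevance judge: fraction of query
    terms found in the document text."""
    terms = set(query.lower().split())
    hits = sum(1 for t in terms if t in doc.text.lower())
    return hits / len(terms)


def crag_filter(query, docs, grade, threshold=0.5):
    """Grade every retrieved document and keep only those above the
    threshold, so stale or off-topic passages never reach the answer."""
    return [d for d in docs if grade(query, d) >= threshold]


docs = [
    Doc("Growth Sync (Jan 21)", "Decided to double down on the Dev.to campaign"),
    Doc("Q3 archive", "Old strategy notes about paid LinkedIn ads"),
]
kept = crag_filter("Dev.to campaign decision", docs, keyword_grade)
```

Swapping `keyword_grade` for an LLM call keeps the same contract: every document gets a grade before it can influence the answer.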
At Pieces, the growth team wanted to engage developers on social media, but manual monitoring was unsustainable. Most mentions aren't opportunities — they're noise.
Result: 0-2 engagement opportunities found manually per day. Most missed entirely.
12 opportunities per run — scored, filtered, delivered to Slack
Keywords → Scrape → Build Thread Trees → Enrich Profiles → AI Analysis (5×) → Slack + Sheets
A single tweet means nothing alone. “Just tried the copilot” — is that a complaint? A recommendation? You can't tell without the full thread. NEXUS walks up the reply chain, fetches the parent conversation, enriches user profiles, then sends the complete context to AI. That's why it finds real opportunities, not noise.
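The reply-chain walk can be sketched in a few lines. The flat store shape (id to parent id and text) and the sample tweets are hypothetical; the point is that analysis always receives the whole thread, root first:

```python
# Hypothetical flat store of scraped tweets: id -> (parent_id, text).
tweets = {
    "t3": ("t2", "actually impressed by the context."),
    "t2": ("t1", "Just tried the copilot"),
    "t1": (None, "Anyone found a copilot that keeps project context?"),
}


def thread_context(tweet_id, store):
    """Walk the reply chain up to the root so the analysis step sees
    the whole conversation, not one out-of-context tweet."""
    chain = []
    current = tweet_id
    while current is not None:
        parent, text = store[current]
        chain.append(text)
        current = parent
    return list(reversed(chain))  # root first, latest reply last


context = thread_context("t3", tweets)
```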
$ nexus run --campaign "copilot-monitor"

Scraped: 50 tweets → 31 thread trees built
Profiles enriched | AI analysis (5 concurrent)...

Results:
Opportunities: 12 (relevance > 0.6)
Noise filtered: 38

Top (0.92) @dev_sarah: "Just tried the copilot...
actually impressed by the context."
↳ Thread: 4 replies, 2 influential devs engaged
→ Reply drafted → Slack → Sheets
Twitter blocks cloud servers. Most companies running social listening from AWS or GCP get IP-banned within days. This system stays under the radar — detects rate limits and backs off automatically. When parallel AI analysis fails, it falls back to sequential processing. No manual restarts. No human intervention. Runs on schedule, delivers to Slack.
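The two self-healing behaviors (backoff on rate limits, parallel-to-sequential fallback) can be sketched as below. The error types and retry numbers are assumptions; the real system detects platform-specific signals:

```python
import random
import threading
import time
from concurrent.futures import ThreadPoolExecutor


def with_backoff(fetch, max_retries=5):
    """Retry with exponential backoff plus jitter when the platform
    signals a rate limit (modelled here as RuntimeError)."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except RuntimeError:  # stand-in for an HTTP 429
            time.sleep(min(2 ** attempt, 60) * (0.5 + random.random()))
    raise RuntimeError("rate limit persisted after retries")


def analyze_all(items, analyze, max_workers=5):
    """Try parallel analysis first; if anything fails, rerun the whole
    batch sequentially so the run finishes without a manual restart."""
    try:
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            return list(pool.map(analyze, items))
    except Exception:
        return [analyze(i) for i in items]


# Demo: the first call fails (simulated rate limit), which triggers
# the sequential fallback; the batch still completes.
calls = {"n": 0}
lock = threading.Lock()


def flaky(x):
    with lock:
        calls["n"] += 1
        first = calls["n"] == 1
    if first:
        raise RuntimeError("simulated rate limit")
    return x * 2


results = analyze_all([1, 2, 3], flaky)
```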
I built the entire measurement stack from scratch — from GTM tag management and event taxonomy, through Apache Beam pipelines that pull social data into BigQuery, to the lifecycle funnel model that connects session → feature usage → retention.
GTM Tag Manager + Event Tracking
Container architecture, custom event taxonomy, cross-domain tracking. Built a UTM builder and campaign tracker in Google Sheets that auto-registers campaigns, pulls GA4 data, and calculates CPA, conversion rate, and ROI — the team-facing layer on top of the data infrastructure.
Apache Beam Pipelines
Pulled data from Twitter, LinkedIn, YouTube, TikTok into BigQuery. Idempotent upserts — the pipeline can fail and retry without creating duplicates.
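The idempotency property can be shown with a dictionary standing in for the BigQuery table: rows carry a stable key, so replaying a failed batch overwrites rather than duplicates (the Python analogue of a MERGE upsert; the field names are illustrative):

```python
def upsert(table: dict, rows: list[dict], key: str = "post_id") -> dict:
    """Idempotent upsert: rows keyed by a stable id, so retrying the
    same batch after a pipeline failure never creates duplicates."""
    for row in rows:
        table[row[key]] = row  # matched -> update, unmatched -> insert
    return table


batch = [
    {"post_id": "tw_1", "likes": 10},
    {"post_id": "tw_1", "likes": 12},  # later scrape of the same post
]
t = upsert({}, batch)
t = upsert(t, batch)  # simulated retry of the whole batch
```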
Lifecycle Funnel Model
Custom source-mapping + join logic in BigQuery. Connected session → feature usage → retention with multi-touch attribution. First-touch, last-click, and channel grouping.
-- Full-funnel: acquisition → activation → retention
WITH user_journey AS (
SELECT
u.user_id,
s.first_touch_source,
f.features_activated,
r.retained_d7
FROM users_enriched u
JOIN sessions_attributed s ON u.user_id = s.user_id
JOIN feature_usage f ON u.user_id = f.user_id
JOIN retention_cohorts r ON u.user_id = r.user_id
)
SELECT first_touch_source,
COUNT(*) as signups,
ROUND(AVG(features_activated), 1) as avg_features,
ROUND(AVG(retained_d7) * 100) as d7_retention
FROM user_journey GROUP BY 1;
source          │ signups │ features │ D7 ret
────────────────┼─────────┼──────────┼────────
dev_to_blog     │     342 │      4.2 │    34%
twitter_organic │     187 │      3.8 │    28%
linkedin_ads    │     523 │      2.1 │    14%
LinkedIn's API gives you 100-500 calls per day. One bad pipeline run burns your entire daily budget on error responses. This system detects the limit on the first rejection and stops all subsequent calls gracefully — so tomorrow's quota isn't wasted. The entire platform is env-based: new client = new .env file, same code.
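A minimal circuit-breaker sketch of that behavior. The error type stands in for an HTTP 429; the real detection logic is platform-specific:

```python
class QuotaCircuitBreaker:
    """Trips on the first rate-limit rejection and skips every
    subsequent call, so a bad run can't burn the daily quota on
    error responses."""

    def __init__(self):
        self.tripped = False

    def call(self, fn, *args, **kwargs):
        if self.tripped:
            return None  # skip gracefully; quota resumes tomorrow
        try:
            return fn(*args, **kwargs)
        except RuntimeError:  # stand-in for an HTTP 429
            self.tripped = True
            return None


# Demo: every call would hit the rate limit, but only one actually
# reaches the API; the other two are skipped.
count = {"n": 0}


def doomed():
    count["n"] += 1
    raise RuntimeError("429 Too Many Requests")


breaker = QuotaCircuitBreaker()
for _ in range(3):
    breaker.call(doomed)
```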
Developer tools companies have a visibility problem: lots of GitHub activity, but who's actually using it in production? The growth team at Pieces needed to find high-intent developers — not just stargazers.
Tools like reo.dev see awareness signals — doc visits, stars, forks. The gap sits between “someone starred a repo” and “someone is running a tool in CI/CD at a fintech company.”
Entity: karan-sharma
Company: Zerodha (India's largest stock broker)
Score: 85 → Tier 2 (Active Development)
Confidence: HIGH

Signals:
├── SDK integration (+20) — from openhands.sdk import
├── Active development (+15) — 11 commits in 2 months
├── SDK version tracking (+10) — v1.4.1 → v1.8.1 → latest
├── Public project (+5) — "hodor" PR reviewer (18★)
└── Company affiliation (+15) — Zerodha in GitHub bio

Use case: PR code review automation
→ AI draft generated (Technical tone)
Entity Resolution
Union-Find algorithm linking identities across platforms. Fuzzy username matching (jane-doe, janedoe, jane.doe → same person). Email domain → company inference. Multi-hop resolution: GitHub → commit email → company → LinkedIn.
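The username-linking half of this can be sketched with a small Union-Find (path compression omitted for brevity). The handle format and normalization rule are illustrative; the real system also folds in email-domain and multi-hop inference:

```python
import re


class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        """Return the root of x's set, registering x on first sight."""
        self.parent.setdefault(x, x)
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        return root

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)


def normalize(username: str) -> str:
    # jane-doe, janedoe, jane.doe -> janedoe
    return re.sub(r"[-._]", "", username.lower())


uf = UnionFind()
handles = ["jane-doe@github", "jane.doe@twitter", "janedoe@linkedin"]
for h in handles:
    name, _, _platform = h.partition("@")
    uf.union(h, normalize(name))  # link raw handle to canonical key
```

Any two handles that normalize to the same key end up in the same set, so signals from all three platforms attach to one person.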
Signal Triangulation
Source multiplier: 1.0× → 1.2× → 1.5× based on platform count. 9 tier promotion rules (Full Integration, CI Integration, Blocked Builder, Comparison Shopper). Confidence scoring with maturity assessment and outreach timing recommendations.
RAG-Powered Personalization
Analyzes their actual code patterns. Queries product knowledge base (OSIRIS). Suggests complementary features they could use. Generates tone variants (Technical, Conversational, Enterprise).
This replaces 3-4 tools (Apollo, Clearbit, reo.dev, LinkedIn Sales Navigator) with one system that finds production evidence, not just contact data. When signals come from 3+ platforms, confidence multiplies. The system refuses to generate outreach for low-confidence matches (it won't spam someone who just starred a repo) and refuses to attribute any match below a 60% confidence score. After deployment, most changes are configuration, not rewrites.
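The multiplier-and-floor logic above reduces to a few lines. The point values, the 60-point floor, and the signal lists are illustrative stand-ins for the production weights:

```python
MULTIPLIER = {1: 1.0, 2: 1.2}  # 3+ platforms fall through to 1.5


def score(signals):
    """signals: list of (platform, points). Evidence spread across
    more platforms multiplies the base score (1.0x / 1.2x / 1.5x)."""
    platforms = {platform for platform, _ in signals}
    multiplier = MULTIPLIER.get(len(platforms), 1.5)
    return sum(points for _, points in signals) * multiplier


def should_outreach(signals, floor=60.0):
    """Refuse to generate outreach below the confidence floor:
    a lone repo star is never enough."""
    return score(signals) >= floor


strong = [("github", 20), ("github", 15), ("linkedin", 15), ("npm", 10)]
weak = [("github", 5)]  # someone who just starred a repo
```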
System Output — AI-Generated Emails
Example from detecting a developer using OpenHands SDK. Three tone variants generated automatically:
Best for: Developer-to-developer conversation about the SDK
Subject: hodor + OpenHands SDK
Hey Karan,
I came across hodor, the agentic PR reviewer you built on the OpenHands SDK. The approach of using multi-step reasoning instead of single-pass LLM prompting is exactly what we were hoping people would build with the SDK.
A couple of things that might be useful based on your recent commits:
• We just shipped improved workspace persistence in 1.9 — might help with the LocalWorkspace patterns you're using
• There's a new project skills API that could simplify some of the tool orchestration
Happy to walk through either if useful. Also curious what pain points you've hit — feedback from folks actually building on the SDK is gold.
At Pieces, the content team writes blog posts. But adapting each one for Dev.to, LinkedIn, X, and Medium takes another 2+ hours. Each platform has different rules. Consistency drops. Some platforms get skipped.
And nobody knows: is the Dev.to version actually good? Does the LinkedIn version accidentally plagiarize the original?
Input: 1 blog post, ~2,000 words, company.com/blog
Output: Dev.to 87 · LinkedIn 82 · X thread 79 · Medium 75
$ engine publish --url company.com/blog/release

Scraping... analyzing... generating variants...
Quality scoring (100-point rubric)...

Platform  │ Score │ Status
──────────┼───────┼──────────
Dev.to    │    87 │ Published
LinkedIn  │    82 │ Published
X thread  │    79 │ Published
Medium    │    75 │ Published

Time: 12s | Dedup: clean | Platforms: 4/7
Every platform has different rules. Dev.to wants technical depth with code blocks. LinkedIn wants professional tone in 1,300 characters. X wants hooks in 280 characters. The system scores each variant on a 100-point rubric — and refuses to output anything below 70. If two variants are more than 85% similar, it regenerates without human intervention. No daily babysitting. No quality review loops.
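The two gates (score floor, near-duplicate check) can be sketched as a filter over generated variants. Here a simple character-level ratio stands in for the real similarity measure, and the sample texts and scores are invented:

```python
from difflib import SequenceMatcher
from itertools import combinations


def publishable(variants, min_score=70, max_sim=0.85):
    """variants: platform -> (text, rubric_score).
    Drop anything under the score floor, then flag near-duplicate
    pairs for regeneration instead of publishing them."""
    passed = {p: (t, s) for p, (t, s) in variants.items() if s >= min_score}
    regenerate = set()
    for (p1, (t1, _)), (p2, (t2, _)) in combinations(passed.items(), 2):
        if SequenceMatcher(None, t1, t2).ratio() > max_sim:
            regenerate.add(p2)  # keep the first, regenerate the copy
    publish = {p: v for p, v in passed.items() if p not in regenerate}
    return publish, regenerate


variants = {
    "devto": ("deep technical walkthrough with code blocks", 87),
    "linkedin": ("professional summary for the feed", 82),
    "medium": ("professional summary for the feed!", 75),  # near-copy
    "x": ("short hook", 55),  # fails the 70-point floor
}
publish, regenerate = publishable(variants)
```

In the real pipeline the `regenerate` set would loop back to the generation step automatically rather than surfacing to a human.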
Before Pieces, I was the first marketer at SuprSend — an infrastructure startup creating a new category: notification infrastructure.
No marketing ops. No RevOps. No data team. No one had done GTM for this category before — because the category didn't exist.
I ran campaigns. Content, SEO, guest posts, social distribution. The campaigns worked. But every workflow I needed, I had to build myself. Every report, every integration, every automation.
That's when I realized: the campaigns were never the bottleneck. The systems were.
“Resourceful and independent... built our entire inbound engine from scratch and turned it into a steady pipeline of leads that directly converted into demos. For any early-stage startup looking for someone who can own GTM end-to-end, Nikhil is the person you want in your corner.”— Nikita Navral, Co-founder, SuprSend
Five systems.
Still one person.
Still running.
How long can you afford to keep answering Monday's questions on Wednesday?
Every week you wait, your competitors are building systems.