diff --git a/src/content/blog/technical/documentation-metrics-and-analytics-a-complete-guide-for-dev.mdx b/src/content/blog/technical/documentation-metrics-and-analytics-a-complete-guide-for-dev.mdx
new file mode 100644
index 00000000..f5c81c8e
--- /dev/null
+++ b/src/content/blog/technical/documentation-metrics-and-analytics-a-complete-guide-for-dev.mdx
@@ -0,0 +1,193 @@
+
+---
+title: 'Documentation Metrics and Analytics: A Complete Guide for Developer-Facing Companies'
+subtitle: Published March 2026
+description: >-
+ Learn which documentation metrics actually matter for developer-facing companies, how to measure them, and how to connect doc quality to business outcomes.
+date: '2026-03-27T00:00:00.000Z'
+author: Frances
+section: Technical
+hidden: true
+---
+import BlogNewsletterCTA from '@components/site/BlogNewsletterCTA.astro';
+import BlogRequestDemo from '@components/site/BlogRequestDemo.astro';
+
+## Why Documentation Metrics Matter for Developer-Facing Companies
+
+Most developer-facing companies treat documentation as a cost center with no clear way to measure its impact. Unlike sales pipelines or marketing funnels, docs don't generate revenue in a way that's immediately visible on a dashboard. But the data tells a different story.
+
+
+The 2024 DORA State of DevOps Report, based on a decade of research across tens of thousands of professionals, found that teams with high-quality documentation were more than twice as likely to meet or exceed their performance targets. Documentation also improved performance against the other core DORA metrics.
+
+DORA estimates that a 25% increase in AI adoption could yield a 7.5% boost in documentation quality — "how reliable, up-to-date, findable, and accurate it is" — the highest of all factors in their prediction model.
+
+
+On the developer experience side, each one-point improvement in the documentation dimension of the Developer Experience Index (DXI), developed by DX from data across 40,000+ developers at 800 organizations, correlates with roughly 13 minutes of time savings per developer per week.
+
+For a 100-person engineering team, a 5-point DXI improvement driven by better documentation translates to roughly 5,000 hours annually — approximately $500K in productivity gains.
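As a sanity check, the arithmetic behind that estimate fits in a few lines. The 13 minutes per DXI point per developer per week is DX's published rate; the 48 working weeks per year and the $100 loaded hourly cost are illustrative assumptions, not figures from DX:

```python
def docs_dxi_savings(devs, dxi_points, min_per_point_per_week=13,
                     weeks_per_year=48, hourly_cost=100):
    """Estimate annual hours and dollars saved from a docs-driven DXI gain.

    min_per_point_per_week is DX's published correlation; weeks_per_year
    and hourly_cost are assumptions to replace with your own numbers.
    """
    hours = devs * dxi_points * min_per_point_per_week * weeks_per_year / 60
    return hours, hours * hourly_cost

# A 100-person team gaining 5 DXI points: about 5,200 hours, roughly $500K.
hours, dollars = docs_dxi_savings(devs=100, dxi_points=5)
```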
+
+
+These findings make one thing clear: documentation is measurable, and measuring it unlocks real business value. This guide covers what to track, how to track it, and how to connect documentation analytics to outcomes that engineering and business leaders care about.
+
+## The Metrics That Actually Matter
+
+Not all documentation metrics are created equal. A common mistake — flagged by Bob Watson, senior technical writer at Google, in an interview with Hackmamba — is working backward from numbers rather than starting with a clear purpose. Management pressures writers to provide measurable data, pushing them toward easy metrics like page views without considering relevance. The key is to start with what question you're trying to answer, then pick the metric that answers it.
+
+### Time to First Call (TTFC)
+
+For API-focused companies, no single metric captures documentation effectiveness like Time to First Call: the time a developer takes to go from signing up to making their first successful API call, which reflects both the quality of your getting-started guide and the usability of the product itself. Postman calls TTFC "the most important API metric" for public APIs, and the reasoning is straightforward: if you don't invest in TTFC, you limit the size of your potential developer base throughout the rest of your adoption funnel.
+
+
+
+Twilio has long pointed to its goal of letting developers "get up and running in 5 minutes or less."
+
+Stripe simplifies payments with clear APIs and instant integration — famously requiring just seven lines of code.
+
+According to benchmarks from Young Copy, champion-level TTFC scores attract developer adoption at rates three to four times higher than slow-onboarding APIs. Even moving from "needs work" to "competitive" TTFC often brings a 40–60% jump in conversion rates.
+
+
+**How to measure it:** Instrument your signup-to-first-successful-request flow with event tracking.
+While the average time to a first successful API call might be two hours, the 99th percentile could reveal outliers who take significantly longer — helping identify specific pain points not obvious from averages alone.
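A minimal sketch of that percentile analysis, assuming you can export (signup, first-call) timestamp pairs from your event pipeline; the function name and data shape are illustrative:

```python
from datetime import datetime, timedelta

def ttfc_percentiles(events, percentiles=(50, 90, 99)):
    """events: (signup_dt, first_call_dt) pairs for converted developers."""
    durations = sorted(
        (first_call - signup).total_seconds() / 60  # minutes
        for signup, first_call in events
    )
    results = {}
    for p in percentiles:
        # Nearest-rank style index, clamped to the last element.
        idx = min(len(durations) - 1, int(len(durations) * p / 100))
        results[f"p{p}"] = durations[idx]
    return results
```

Note that this only covers developers who eventually made a call; pair it with the drop-off rate of those who never do.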
+
+
+### Documentation Time to Value (DTTV)
+
+ReadMe introduced Documentation Time to Value (DTTV) as a broader cousin of TTFC: the time it takes a new developer to find and understand the relevant information in your documentation and achieve their first meaningful outcome. That makes it a useful gauge of both documentation effectiveness and the overall onboarding experience.
+
+
+
+A consistently high DTTV, or rising abandonment rates during the documentation exploration phase, is a signal that your documentation needs improvement. Unlike TTFC, DTTV captures the full journey — from landing on your docs to accomplishing something meaningful — making it applicable beyond API products.
+
+### Zero-Result Search Rate
+
+
+In developer experience, the goal is to offer developers an efficient, effective, even delightful experience, and none of those words describes hitting the wall of "zero results returned" when searching technical documentation. Zero-result searches are among the most actionable metrics available because they tell you directly what content is missing or misnamed.
+
+
+When developers search for content that doesn't exist, they tell you exactly what's missing, and frequency tells you how urgent it is: a query returning no results 200 times a month deserves immediate attention, while single-occurrence searches can wait.
+
+
+A real-world example demonstrates the impact: a B2B technology services company implemented search analytics monitoring across its internal knowledge base and customer-facing support content. In the first monthly review cycle, the team found that 23% of all queries were returning zero results. Within two quarters, the zero-result rate had dropped to 6%, support escalations had declined by 18%, and sales reported measurably faster access to competitive materials during deal cycles.
+
+
+**Tools:** You can capture search data with purpose-built tools like Algolia (DocSearch) or by configuring custom search events in Google Analytics. Fern, GitBook, and Document360 all include built-in search analytics dashboards that surface zero-result queries without additional setup.
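If your platform doesn't surface this out of the box, the core computation is simple. A sketch, assuming you can export a log of (query, result_count) pairs; the log shape is an assumption, so adapt it to whatever your search provider emits:

```python
from collections import Counter

def zero_result_report(log, top_n=20):
    """Return the zero-result rate and the most frequent missed queries."""
    total = 0
    misses = Counter()
    for query, result_count in log:
        total += 1
        if result_count == 0:
            misses[query.strip().lower()] += 1  # fold near-duplicate queries
    rate = sum(misses.values()) / total if total else 0.0
    return rate, misses.most_common(top_n)
```

The `most_common` list doubles as a prioritized backlog of content to create or rename.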
+
+
+
+## Support Ticket Deflection: The Business Case for Better Docs
+
+For most developer-facing companies, the single most compelling metric for executive buy-in is support ticket deflection — the rate at which documentation resolves issues before they become support tickets.
+
+
+Industry data consistently shows self-service interactions cost approximately $0.10 per interaction compared to $12 for live support — a 120x difference.
+
+The SaaS industry benchmark for average support ticket cost runs between $25 and $35 per ticket according to SaaS Capital.
+
+
+
+The most detailed ROI calculation comes from Zoomin Software's 2023 report featuring a composite B2B SaaS company handling 100,000 support tickets yearly. After integrating documentation within their support process, 30,000 tickets were resolved through customer self-service — a 30% case deflection rate — yielding $2,270,000 in annual savings across entry-level support, advanced support, and product/R&D escalations.
+
+
+
+In Zoomin's benchmark data, customer self-service rates soared to 97% for top performers, and companies achieved case deflection rates ranging from 60% to 70% by delivering comprehensive and easily accessible technical content.
+
+
+
+An industry benchmark for an effective case deflection rate is around 58%, but the right target depends on your product's complexity and your documentation maturity. Start by establishing your current baseline — the ratio of doc site sessions to support tickets submitted — and measure improvement over time.
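Once you have a baseline, a back-of-envelope savings model makes the business case concrete. This sketch uses the benchmark costs cited above (about $0.10 per self-service interaction, roughly $30 per ticket); substitute your own figures:

```python
def deflection_savings(total_issues, deflection_rate,
                       cost_per_ticket=30.0, cost_per_self_service=0.10):
    """Net annual savings from issues resolved by docs instead of support."""
    deflected = total_issues * deflection_rate
    gross_savings = deflected * cost_per_ticket
    self_service_cost = deflected * cost_per_self_service
    return deflected, gross_savings - self_service_cost

# 100,000 annual issues at a 30% deflection rate.
deflected, net = deflection_savings(100_000, 0.30)
```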
+
+## AI-Era Documentation Metrics
+
+As AI chatbots become the primary interface to documentation, a new layer of metrics is emerging. If your developer docs include an embedded AI assistant, you need to track its performance as carefully as the underlying docs it draws from.
+
+### Unanswered and Fallback Queries
+
+
+Keeping unanswered queries as low as possible is critical: they typically occur when relevant information is missing from the documentation, leaving the AI nothing to retrieve for that question, so keeping documentation up to date directly reduces such misses. Document360's Eddy AI analytics dashboard, for example, tracks answered vs. unanswered query ratios, likes and dislikes on AI responses, and conversation depth metrics.
+
+
+An embedded AI chatbot gives developers a faster way to find answers while giving you visibility into questions that traditional page analytics can't capture. Every chatbot interaction reveals what developers are struggling with and where documentation falls short.
+
+
+### Response Quality and Feedback Loops
+
+
+Analytics reveals which questions the AI struggles with (low satisfaction scores), identifies knowledge gaps (frequently asked but unanswered topics), and shows conversation drop-off points.
+ Key metrics to track include thumbs up/down ratios on AI responses, escalation rate to human agents, and fallback rate by topic.
+
+For LLM-powered doc assistants, Microsoft's Data Science team recommends tracking search relevance with Precision@K metrics and using LLM-as-a-judge approaches for automated quality evaluation at scale.
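Precision@K itself is a one-liner once you have human relevance judgments. A sketch — the `retrieved`/`relevant` representation is an assumption; any hashable doc IDs work:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved doc IDs judged relevant by a human."""
    if k <= 0:
        return 0.0
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / k
```

Average this over a fixed set of benchmark queries and re-run it after each docs or retrieval change to catch regressions.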
+
+### AI Crawler Traffic
+
+
+Fern recommends looking for analytics tools that identify AI crawlers by user agent strings. This lets you see which providers (OpenAI, Anthropic, Google, etc.) are indexing your documentation and track overall bot traffic volume.
+ This metric matters because AI agents are increasingly consuming your docs on behalf of developers — and you need to know whether your docs are structured for that reality.
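If your analytics tool doesn't do this for you, user-agent classification is straightforward to sketch. The tokens below are documented identifiers for well-known crawlers (GPTBot, ClaudeBot, Google-Extended, and so on), but crawler names change over time, so treat the list as illustrative rather than exhaustive:

```python
# Map user-agent substrings to the provider operating the crawler.
AI_CRAWLER_TOKENS = {
    "gptbot": "OpenAI",
    "chatgpt-user": "OpenAI",
    "claudebot": "Anthropic",
    "anthropic-ai": "Anthropic",
    "google-extended": "Google",
    "perplexitybot": "Perplexity",
}

def classify_ai_crawler(user_agent):
    """Return the AI provider for a crawler UA, or None for regular traffic."""
    ua = user_agent.lower()
    for token, provider in AI_CRAWLER_TOKENS.items():
        if token in ua:
            return provider
    return None
```

Run this over your access logs to get a per-provider breakdown of bot traffic volume.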
+
+## Building a Measurement Practice: From Ad Hoc to Systematic
+
+The difference between teams that improve their docs and teams that don't usually comes down to how systematically they review their metrics. One useful framework comes from content analytics firm Earley Information Science:
+
+
+At the weekly level, the focus is on dashboard monitoring — tracking zero-result rates, watching for new queries appearing in high volume, and flagging anomalies. Monthly analysis goes deeper: the top twenty zero-result queries tell you what content needs to be created, the bottom twenty satisfaction scores tell you what needs to be improved, and the output is a prioritized improvement list that informs the content team's work for the following month.
+
+
+### Choosing Your Tools
+
+The right tooling depends on your documentation platform and what you need to measure:
+
+- **Built-in platform analytics** (GitBook, Fern, Document360, ReadMe) provide doc-specific metrics like search queries, page-level engagement, and API Explorer errors without additional configuration. Purpose-built analytics connect metrics directly to improvements, whereas generic web analytics tools require custom event tracking, complex filtering, and manual correlation to understand developer behavior.
+
+- **General web analytics** (Google Analytics 4, Plausible, PostHog) give you traffic patterns, referral sources, and user flow — useful for understanding how developers arrive at your docs and where they drop off.
+- **Search analytics** (Algolia Analytics, AddSearch) provide dedicated dashboards for zero-result rates, click-through rates on search results, and vocabulary mismatch detection.
+- **Session replay tools** (Microsoft Clarity, Hotjar) show how developers actually navigate your docs — where they scroll, what they click, and where they get stuck.
+- **LLM observability platforms** (Langfuse, LangSmith) provide conversation tracing, automated evaluation, and cost tracking for AI-powered doc assistants.
+
+### Connecting Docs Metrics to Business Outcomes
+
+The most common failure in documentation measurement is collecting data without connecting it to outcomes that leadership cares about. Here's how to bridge that gap:
+
+| Documentation Metric | Business Outcome | How to Connect |
+|---|---|---|
+| Time to First Call (TTFC) | Developer conversion rate | Correlate TTFC with trial-to-paid conversion by cohort |
+| Zero-result search rate | Content coverage | Map zero-result queries to product features lacking docs |
+| Support ticket deflection | Cost savings | Multiply deflected tickets by average cost per ticket |
+| Doc satisfaction scores | Net Promoter Score (NPS) | Cross-reference doc feedback with customer health scores |
+| AI chatbot resolution rate | Support efficiency | Track ratio of AI-resolved queries to total queries |
+
+
+GetDX's research demonstrates how this works in practice: organizations can see how their documentation score compares to industry benchmarks and how it correlates with velocity, quality, and efficiency. Each one-point DXI improvement correlates with roughly 13 minutes of savings per developer per week, which creates a clear business case: invest $100K in documentation tooling and headcount, get $500K back in productivity.
+
+
+## What to Track First
+
+If you're starting from zero, don't try to track everything at once. Begin with three metrics:
+
+1. **Zero-result search rate** — gives you an immediate, actionable backlog of content to create or rename, plus a clean ratio to track as a KPI and a curated list of terms to iterate on.
+
+2. **Support ticket volume correlated with doc page views** — establishes the baseline for measuring deflection. HappyFox enterprise data indicates that up to 80% of support tickets address issues already covered in existing knowledge bases, suggesting the problem is often findability, not missing content.
+3. **Page-level feedback (thumbs up/down)** — the simplest qualitative signal, and the one most likely to reveal pages that exist but don't actually help.
+
+From there, layer in TTFC tracking if you have an API product, AI chatbot analytics if you have an embedded assistant, and developer satisfaction surveys on a quarterly cadence. GetDX recommends 5–10 focused questions per survey; a quarterly rhythm lets you track trends without survey fatigue.
+
+
+Documentation metrics aren't a one-time audit. They're an operating practice — one that, when done well, transforms docs from a static cost center into a measurable driver of developer adoption, retention, and support efficiency.
+
+