Pydantic Logfire

https://pydantic.dev/logfire

Complete observability for LLM applications

Monitor your entire AI application stack, not just the LLM calls. Logfire is a production-grade AI observability platform that also supports general observability. It helps you analyze, debug, and optimize AI systems faster. See LLM interactions and agent behavior alongside standard API requests and database queries in one unified view.

Companies that trust Pydantic Logfire

Understanding

What is an AI observability platform?

An AI observability platform is a tool that provides advanced features beyond traditional monitoring. While standard monitoring tells you that a system failed, an observability tool allows you to identify the underlying causes. In the era of Large Language Models (LLMs) and autonomous agents, this distinction is critical.

An effective AI observability platform allows engineering teams to trace the lifecycle of a prompt, analyze token usage and latency per step, and benchmark model responses against groundedness and toxicity metrics.

The Full Picture

Break down silos: one tool for both AI and general observability

Most engineering teams are forced to use one observability tool for their backend application and a completely separate one for their LLMs. However, problems in production AI applications rarely come from the LLM alone. They hide in the seams: slow database queries that delay context retrieval, API timeouts during agent tool calls, inefficient vector searches, or memory leaks in background tasks. You need visibility across your entire application stack, not just the LLM calls.

What Logfire shows you

  • Complete application traces from request to response
  • Database queries, API calls, and business logic
  • Dashboards and application metrics
  • One platform with first-class AI & general observability for your entire application

What others show you

  • LLM request/response only
  • Missing context on performance bottlenecks
  • No visibility into retrieval quality
  • Separate tools for app monitoring

The Pydantic Stack

From prompt to validated output in one trace

See how Pydantic AI, AI Gateway, and Logfire work together. Define your schema with Pydantic models, extract structured data with an AI agent, route through Gateway for model flexibility, and observe the entire flow in Logfire.
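As a sketch of that flow, assuming recent versions of pydantic-ai and the Logfire SDK (the Invoice schema, model string, and prompt are made-up examples, not a prescribed setup):

```python
import logfire
from pydantic import BaseModel
from pydantic_ai import Agent

logfire.configure()               # send traces to Logfire
logfire.instrument_pydantic_ai()  # capture agent runs, model requests, and tool calls


class Invoice(BaseModel):
    """The schema the agent must return: validated, structured output."""
    vendor: str
    total: float
    currency: str


# Routing the same agent through Pydantic AI Gateway would let you swap
# providers without changing this code; a direct provider string is used here.
agent = Agent('openai:gpt-4o', output_type=Invoice)

result = agent.run_sync('Extract the invoice details: ACME Corp, 1,200.50 USD')
print(result.output)  # e.g. Invoice(vendor='ACME Corp', total=1200.5, currency='USD')
```

The schema, the agent run, and every model call then show up as one nested trace in Logfire.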

Why Logfire for AI Observability?

OpenTelemetry-Native

Built on industry-standard OpenTelemetry. No vendor lock-in: export to any backend or use our hosted platform.

Complete Application Traces

See your entire application: LLM calls, agent reasoning, database queries, API requests, vector searches, business logic, JS/TS frontend.

Integrated Evaluation Framework

Use Pydantic Evals to continuously evaluate LLM outputs in production. Curate datasets from production traces and catch regressions before users do.
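A minimal sketch of what such a check can look like with pydantic_evals (the case, task stub, and ExactMatch evaluator are made up for illustration; in practice the task would call your agent, and cases can be curated from production traces):

```python
from dataclasses import dataclass

from pydantic_evals import Case, Dataset
from pydantic_evals.evaluators import Evaluator, EvaluatorContext


@dataclass
class ExactMatch(Evaluator):
    """Score 1.0 when the output matches the expected output exactly."""

    def evaluate(self, ctx: EvaluatorContext) -> float:
        return 1.0 if ctx.output == ctx.expected_output else 0.0


dataset = Dataset(
    cases=[Case(name='capital', inputs='What is the capital of France?', expected_output='Paris')],
    evaluators=[ExactMatch()],
)


async def answer(question: str) -> str:
    # Stand-in for the real agent call so the sketch stays self-contained.
    return 'Paris'


report = dataset.evaluate_sync(answer)
report.print()
```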

Real-Time Cost Tracking

Track LLM API costs in real-time. Identify expensive prompts, optimize model selection, and set budget alerts. See exactly where your AI spending goes.

Pydantic AI & AI Gateway Integration

Natively integrates with Pydantic AI and Pydantic AI Gateway for model routing & budget control across all major LLM providers.

From Local Dev to Production

See all app traces in real-time as you code. Catch bugs in development, carry the same observability through to production. No tool switching, no friction.

Query Your Data with SQL

Drill down into your traces with SQL and use Natural Language Processing (NLP) to auto-generate your SQL queries.

Need SSO, custom data retention, or self-hosting? Talk to our team

Open Standards

Monitor your stack with OpenTelemetry

Logfire is built on OpenTelemetry, giving you a unified view of logs, traces, and metrics with no vendor lock-in. Our SDKs for Python, Rust, and TypeScript make instrumentation simple, and power features like live spans that render before they complete.

Logs

Structured and automatically redacted, with every log (span) linked to its trace. Search instantly or query with SQL.

Traces

One end-to-end timeline that combines APIs, databases, third-party calls, LLMs, and AI agents in one view.

Metrics

Track what matters to you: latency, errors, performance, cost, or any trend across your system. Set custom SLOs and alerts to keep your application reliable.
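A minimal sketch of emitting all three from the Python SDK, assuming a configured Logfire project (the names and values are illustrative):

```python
import logfire

logfire.configure()  # picks up your project credentials / write token

# Logs: structured attributes, automatically linked to the enclosing span
logfire.info('checkout complete for user {user_id}', user_id=42)

# Traces: nested spans form the end-to-end timeline
with logfire.span('checkout'):
    with logfire.span('charge card'):
        ...  # payment provider call would go here

# Metrics: OpenTelemetry counters, histograms, and gauges
checkouts = logfire.metric_counter('checkouts')
checkouts.add(1)
```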

Integrations

Logfire works with your entire stack

Observability should not require a rewrite of your codebase. Built on open standards (OTel) with SDKs for Python, JavaScript/TypeScript, and Rust, Logfire supports auto-instrumentation for AI frameworks, web frameworks, databases, background workers, browsers, and more.
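For example, a FastAPI service with LLM, HTTP, and database auto-instrumentation might be wired up like this (a sketch; assumes the relevant optional dependencies are installed):

```python
import logfire
from fastapi import FastAPI

logfire.configure()

app = FastAPI()
logfire.instrument_fastapi(app)  # one trace per request, with route and status
logfire.instrument_openai()      # prompts, completions, and token usage
logfire.instrument_httpx()       # outbound HTTP calls (e.g. agent tool calls)
logfire.instrument_asyncpg()     # database queries with timings


@app.get('/health')
async def health() -> dict[str, bool]:
    return {'ok': True}
```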

Python

JavaScript / TypeScript

Rust

Built on the tracing + opentelemetry ecosystem

OpenTelemetry

Go, Java, .NET, Ruby, PHP, Erlang/Elixir, Swift, C++

Logfire is built on OpenTelemetry. Any language with an OpenTelemetry SDK can send traces, logs, and metrics to Logfire.

Insights

Query your data with full SQL

Query your data with full Postgres-flavored SQL: all the control and, for many, nothing new to learn. Even if you don't like writing SQL, LLMs do, so SQL plus an MCP server lets your IDE use Pydantic Logfire as a window into your app's execution. Search for obstacles and opportunities as you (or the AI) write code.

[Diagram: an IDE using an MCP server to query Logfire data]
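As a sketch of the kind of query this enables (table and column names follow the Logfire SQL reference; treat them as indicative and adjust if your schema differs):

```sql
-- Slowest operations over the last day, with error counts
SELECT
    span_name,
    count(*)                                       AS calls,
    avg(duration)                                  AS avg_duration_s,
    sum(CASE WHEN is_exception THEN 1 ELSE 0 END)  AS errors
FROM records
WHERE start_timestamp > now() - interval '1 day'
GROUP BY span_name
ORDER BY avg_duration_s DESC
LIMIT 10;
```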

Enterprise Ready

Enterprise-level AI observability

AI applications often process sensitive user data. As a result, enterprise-level AI observability platforms need to meet strict security, compliance, and data privacy standards. Pydantic Logfire is architected to meet the rigorous governance standards of enterprise engineering teams.

Data sovereignty & self-hosting

Industries with strict data residency requirements (Finance, Healthcare, Legal) can make use of our fully self-hosted enterprise plan.

SOC2 Type II certified

Logfire is SOC2 Type II certified. We did not receive any exceptions in our report. A copy is available upon request.

HIPAA compliant

Logfire is HIPAA compliant. We are able to offer Business Associate Agreements (BAAs) to customers on our enterprise plans.

GDPR compliance & EU data region

Pydantic is fully GDPR compliant. For customers who need data kept in the EU, we offer an EU Data Region.

Logfire is already making developers' lives easier

Ready to see your complete AI application?

Start monitoring your LLMs, agents, and entire application stack in minutes. 10 million free spans per month. No credit card required.

Frequently asked questions

FOR DEVELOPERS

Ready to start building?

Logfire has SDKs for Python, TypeScript/JavaScript, and Rust. The Python SDK is open source under the MIT license and wraps the OpenTelemetry Python package. By default it sends data to the Logfire platform, but you can send data to any OpenTelemetry Protocol (OTLP) compliant endpoint.
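A sketch of that alternative path, assuming current configure() options and the opentelemetry-exporter-otlp-proto-http package (the collector endpoint below is a placeholder):

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

import logfire

# Keep the same instrumentation, but export spans to your own OTLP-compliant
# collector instead of (or alongside) the hosted Logfire platform.
logfire.configure(
    send_to_logfire=False,
    additional_span_processors=[
        BatchSpanProcessor(OTLPSpanExporter(endpoint='http://localhost:4318/v1/traces')),
    ],
)

logfire.info('exported over plain OTLP')
```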
