Logfire.jl

Julia client for Pydantic Logfire - OpenTelemetry-based observability for LLM applications.

Features

  • OpenTelemetry Integration - Full OTEL support for tracing LLM calls
  • PromptingTools.jl Support - Automatic instrumentation of aigenerate, aitools, aiextract
  • GenAI Semantic Conventions - Compliant with OTEL GenAI specs
  • Query API - Download your telemetry data using SQL
  • Alternative Backends - Send data to Jaeger, Langfuse, or any OTEL-compatible backend
  • Exception Tracking - Automatic exception capture with full stacktraces

Quick Start

using DotEnv
DotEnv.load!()  # Load .env file (must call explicitly)

using Logfire
using PromptingTools

# Configure Logfire (uses LOGFIRE_TOKEN from environment)
Logfire.configure(service_name="my-app")

# Instrument PromptingTools
Logfire.instrument_promptingtools!()

# All LLM calls are now traced
response = aigenerate("What is 2+2?")

Manual Schema Wrapping (No Auto-Instrumentation)

If you prefer not to use auto-instrumentation, you can explicitly wrap any PromptingTools schema:

using Logfire, PromptingTools

Logfire.configure(service_name = "my-app")

# Wrap the schema you want to trace
schema = PromptingTools.OpenAISchema() |> Logfire.LogfireSchema

# Use it directly - no instrument_promptingtools!() needed
aigenerate(schema, "Hello!"; model = "gpt-5-mini")

This gives you fine-grained control over which calls are traced.

Authentication

Set your Logfire token via one of:

  • .env file with LOGFIRE_TOKEN=... (call DotEnv.load!() first)
  • Environment variable: ENV["LOGFIRE_TOKEN"] = "..."
  • Direct argument: Logfire.configure(token="...")

What Gets Captured

  • Request params: model, temperature, top_p, max_tokens, stop, penalties
  • Usage: input/output/total tokens, latency, cost
  • Provider metadata: model returned, status, finish_reason, response_id
  • Tool/function calls: count + full payload
  • Conversation: roles + content for all messages
  • Exceptions: type, message, and full stacktrace

Configuration Options

Logfire.configure(
    token = "...",                    # Logfire write token (or use LOGFIRE_TOKEN env)
    service_name = "my-app",          # Service name for telemetry
    service_version = "1.0.0",        # Service version
    environment = "production",       # Deployment environment
    send_to_logfire = :if_token_present,  # :always, :never, or :if_token_present
    endpoint = "...",                 # Custom OTLP endpoint
    auto_record_exceptions = true     # Automatic exception capture
)

Manual Spans

# Generic span
with_span("my-operation") do span
    set_span_attribute!(span, "custom.key", "value")
    # do work...
end

# LLM-specific span
with_llm_span("chat"; system="openai", model="gpt-4o") do span
    # do LLM work...
    record_token_usage!(span, 100, 50)
end

Exception Handling

# Automatic (default)
with_span("risky-operation") do span
    error("Oops!")  # Automatically captured
end

# Manual
try
    risky_operation()
catch e
    record_exception!(span, e; backtrace=catch_backtrace())
    rethrow()
end

Documentation

API Reference

Logfire.ERROR_TYPE_OTHER (Constant)

Error type for GenAI operations.

Well-known value from OTEL semantic conventions:

  • _OTHER: Fallback error value when no custom value is defined

Custom values may be used for specific error types.

Logfire.BlobPart (Type)
BlobPart

Represents blob binary data sent inline to the model.

Fields

  • modality::Modality: The general modality (image, video, audio)
  • content::String: Base64-encoded binary content
  • mime_type::Union{String,Nothing}: IANA MIME type
Logfire.FilePart (Type)
FilePart

Represents an external referenced file sent to the model by file ID.

Fields

  • modality::Modality: The general modality (image, video, audio)
  • file_id::String: Identifier referencing a pre-uploaded file
  • mime_type::Union{String,Nothing}: IANA MIME type
Logfire.GenericPart (Type)
GenericPart

Represents an arbitrary message part with custom type. Allows extensibility with custom message part types.

Fields

  • type::String: The type identifier
  • properties::Dict{String,Any}: Additional properties
Logfire.InputMessage (Type)
InputMessage

Represents an input message sent to the model.

Fields

  • role::Role: Role of the entity that created the message
  • parts::Vector{AbstractMessagePart}: List of message parts
  • name::Union{String,Nothing}: Optional participant name
Logfire.LogfireConfig (Type)
LogfireConfig

Configuration options for Logfire SDK.

Fields

  • token::Union{String, Nothing}: Logfire write token
  • service_name::String: Service name for telemetry
  • service_version::Union{String, Nothing}: Service version
  • environment::String: Deployment environment (development, staging, production)
  • send_to_logfire::Symbol: Export control (:if_token_present, :always, :never)
  • endpoint::String: OTLP endpoint URL
  • console::Bool: Print spans to console
  • scrubbing::Bool: Enable data scrubbing
  • auto_record_exceptions::Bool: Automatically record exceptions in spans (default: true)
Logfire.LogfireQueryClient (Type)
LogfireQueryClient

Client for querying Logfire data via the Query API.

Fields

  • read_token::String: Logfire read token for authentication
  • endpoint::String: Query API endpoint URL

Example

using Logfire

# Create client (uses LOGFIRE_READ_TOKEN from environment)
client = LogfireQueryClient()

# Or with explicit token
client = LogfireQueryClient(read_token="pylf_v1_us_...")
Logfire.LogfireQueryClient (Method)
LogfireQueryClient(; read_token=nothing, endpoint=QUERY_ENDPOINT_US)

Create a query client for downloading data from Logfire.

Keywords

  • read_token::String: Read token (or uses LOGFIRE_READ_TOKEN env var)
  • endpoint::String: Query API endpoint (default: US region)

Endpoints

  • US: https://logfire-us.pydantic.dev/v1/query
  • EU: https://logfire-eu.pydantic.dev/v1/query
Logfire.LogfireSchema (Type)
LogfireSchema(inner::PT.AbstractPromptSchema)

Tracer schema that wraps any PromptingTools prompt schema and emits OpenTelemetry GenAI spans. Works with all ai* APIs in PromptingTools.

Logfire.OperationName (Type)

GenAI operation type.

Well-known values from OTEL semantic conventions:

  • chat: Chat completion (e.g., OpenAI Chat API)
  • create_agent: Create GenAI agent
  • embeddings: Embeddings operation
  • execute_tool: Execute a tool
  • generate_content: Multimodal content generation (e.g., Gemini)
  • invoke_agent: Invoke GenAI agent
  • text_completion: Text completions (legacy)
Logfire.OutputMessage (Type)
OutputMessage

Represents an output message generated by the model.

Fields

  • role::Role: Role of the entity that created the message
  • parts::Vector{AbstractMessagePart}: List of message parts
  • finish_reason::FinishReason: Reason for finishing generation
  • name::Union{String,Nothing}: Optional participant name
Logfire.OutputType (Type)

GenAI output type.

Well-known values from OTEL semantic conventions:

  • text: Plain text
  • json: JSON object with known or unknown schema
  • image: Image
  • speech: Speech
Logfire.ReasoningPart (Type)
ReasoningPart

Represents reasoning/thinking content received from the model.

Fields

  • content::String: Reasoning/thinking content
Logfire.TextPart (Type)
TextPart

Represents text content sent to or received from the model.

Fields

  • content::String: Text content
Logfire.ToolCallRequestPart (Type)
ToolCallRequestPart

Represents a tool call requested by the model.

Fields

  • name::String: Name of the tool
  • id::Union{String,Nothing}: Unique identifier for the tool call
  • arguments::Any: Arguments for the tool call (Dict, String, or nothing)
Logfire.ToolCallResponsePart (Type)
ToolCallResponsePart

Represents a tool call result sent to the model.

Fields

  • response::Any: Tool call response
  • id::Union{String,Nothing}: Unique tool call identifier
  • name::Union{String,Nothing}: Name of the tool that was called
Logfire.ToolDefinition (Type)
ToolDefinition

Represents a tool definition in OpenAI function format.

Fields

  • name::String: Tool name
  • description::String: Tool description
  • parameters::Dict{String,Any}: JSON Schema for parameters
Logfire.UriPart (Type)
UriPart

Represents an external referenced file sent to the model by URI.

Fields

  • modality::Modality: The general modality (image, video, audio)
  • uri::String: URI referencing the data
  • mime_type::Union{String,Nothing}: IANA MIME type
Logfire._parse_query_response (Method)
_parse_query_response(raw, row_oriented) -> Vector{Dict} or Dict

Parse the Logfire API response format into user-friendly data structures.

Logfire._record_detailed_usage! (Method)
_record_detailed_usage!(span, ai_msg)

Record detailed usage statistics from extras to OTEL GenAI attributes.

Reads unified keys first, falls back to raw provider dicts for backwards compatibility with older PromptingTools versions.

Unified keys supported:

  • :cache_read_tokens, :cache_write_tokens - cache token usage
  • :cache_write_1h_tokens, :cache_write_5m_tokens - Anthropic ephemeral cache
  • :reasoning_tokens - chain-of-thought tokens
  • :audio_input_tokens, :audio_output_tokens - audio tokens
  • :accepted_prediction_tokens, :rejected_prediction_tokens - prediction tokens
  • :service_tier - provider service tier
  • :web_search_requests - Anthropic server tool usage

Fallback dicts:

  • :prompt_tokens_details - OpenAI prompt token details
  • :completion_tokens_details - OpenAI completion token details
  • :cache_read_input_tokens, :cache_creation_input_tokens - Anthropic legacy keys
Logfire._record_messages_as_attributes! (Method)

Record messages as span attributes using OTEL GenAI semantic conventions.

Sets the following attributes:

  • gen_ai.input.messages: Chat history (all messages except final response)
  • gen_ai.output.messages: Model response with finish_reason
  • gen_ai.system_instructions: System prompt (extracted from conversation)

Uses typed constructs from types.jl for proper JSON serialization.

Logfire._record_tool_calls! (Method)

Record tool calls from AIToolRequest or AIMessage.

Handles both:

  • AIToolRequest.tool_calls (direct field with Vector{ToolMessage})
  • extras[:tool_calls] (fallback for AIMessage with tool calls in extras)
Logfire.configure (Method)
configure(; kwargs...)

Initialize Logfire SDK with the specified options.

Keywords

  • token::String: Logfire write token (or use LOGFIRE_TOKEN env var)
  • service_name::String: Name of the service (default: "julia-app")
  • service_version::String: Version of the service
  • environment::String: Deployment environment (default: "development")
  • send_to_logfire::Symbol: Export control (:if_token_present, :always, :never)
  • endpoint::String: Custom OTLP endpoint (default: Logfire US)
  • scrubbing::Bool: Enable data scrubbing (default: false)
  • console::Bool: Print spans to console (default: false)
  • auto_record_exceptions::Bool: Automatically record exceptions in spans (default: true)

Example

using Logfire

Logfire.configure(
    service_name = "my-llm-app",
    environment = "production",
    auto_record_exceptions = true  # Automatically capture exceptions in spans
)
Logfire.create_resource (Method)
create_resource(cfg::LogfireConfig) -> Resource

Create an OpenTelemetry Resource with Logfire-compatible attributes.

Logfire.flush! (Method)
flush!()

Force flush any pending telemetry data. Note: With SimpleSpanProcessor, spans are exported immediately when they end. This function is mainly useful for BatchSpanProcessor, but OpenTelemetrySDK doesn't expose a direct flush API. Spans will be exported automatically.

Logfire.instrument_promptingtools! (Method)
instrument_promptingtools!(; models=nothing, base_schema=PromptingTools.OpenAISchema())

Register Logfire's LogfireSchema tracer for the given model names. If models is nothing, all models currently registered in PromptingTools are instrumented. Throws an error if no models are provided and none can be discovered.
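For instance, to instrument only a subset of registered models rather than everything (the model names here are illustrative; any alias registered in PromptingTools works):

```julia
using Logfire, PromptingTools

Logfire.configure(service_name = "my-app")

# Instrument only the listed model aliases; omit `models` to
# instrument every model currently registered in PromptingTools.
Logfire.instrument_promptingtools!(models = ["gpt-4o", "gpt-4o-mini"])
```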

Logfire.instrument_promptingtools_model! (Method)
instrument_promptingtools_model!(name; base_schema=PromptingTools.OpenAISchema())

Register Logfire tracing for a single model name or alias. Uses the already registered PromptingTools schema when available; otherwise falls back to base_schema. Safe to call multiple times.

Logfire.messages_to_json (Method)
messages_to_json(messages::Vector{InputMessage}) -> String

Serialize input messages to JSON string for gen_ai.input.messages attribute.

Logfire.messages_to_json (Method)
messages_to_json(messages::Vector{OutputMessage}) -> String

Serialize output messages to JSON string for gen_ai.output.messages attribute.

Logfire.pt_conversation_to_otel (Method)
pt_conversation_to_otel(conv; separate_system=true) -> NamedTuple

Convert a PromptingTools conversation to OTEL format.

Returns (; input_messages, output_messages, system_instructions).

If separate_system=true, system messages are extracted to system_instructions.
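A minimal sketch, assuming the standard PromptingTools message types (field names follow the NamedTuple documented above):

```julia
using Logfire, PromptingTools

conv = [PromptingTools.SystemMessage("You are terse."),
        PromptingTools.UserMessage("What is 2+2?"),
        PromptingTools.AIMessage("4")]

result = Logfire.pt_conversation_to_otel(conv)
# result.system_instructions: parts extracted from the SystemMessage
# result.input_messages:      the user turn(s)
# result.output_messages:     the final assistant message
```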

Logfire.pt_message_to_input (Method)
pt_message_to_input(msg) -> InputMessage

Convert a PromptingTools message to an OTEL InputMessage.

Handles:

  • SystemMessage → role=ROLE_SYSTEM
  • UserMessage → role=ROLE_USER
  • UserMessageWithImages → role=ROLE_USER with BlobPart/UriPart
  • AIMessage → role=ROLE_ASSISTANT
  • AIToolRequest → role=ROLE_ASSISTANT with ToolCallRequestPart
  • ToolMessage → role=ROLE_USER with ToolCallResponsePart
Logfire.pt_message_to_output (Method)
pt_message_to_output(msg; finish_reason=nothing) -> OutputMessage

Convert a PromptingTools message to an OTEL OutputMessage. Automatically detects finish_reason if not provided.

Logfire.query_csv (Method)
query_csv(client, sql; kwargs...) -> String

Execute a SQL query and return CSV data as a string.

Arguments

  • client::LogfireQueryClient: Query client instance
  • sql::String: SQL query to execute

Keywords

  • min_timestamp::String: ISO-8601 lower bound for filtering
  • max_timestamp::String: ISO-8601 upper bound for filtering
  • limit::Int: Maximum rows to return (default: 500, max: 10000)

Returns

  • String: CSV-formatted data

Example

client = LogfireQueryClient()
csv_data = query_csv(client, "SELECT span_name, duration FROM records LIMIT 100")
println(csv_data)

# Or write to file
open("export.csv", "w") do f
    write(f, csv_data)
end
Logfire.query_json (Method)
query_json(client, sql; row_oriented=true, kwargs...) -> Vector{Dict} or Dict

Execute a SQL query and return JSON data.

Arguments

  • client::LogfireQueryClient: Query client instance
  • sql::String: SQL query to execute

Keywords

  • row_oriented::Bool=true: If true, returns Vector{Dict} (each row is a dict). If false, returns Dict{String,Vector} (column-oriented)
  • min_timestamp::String: ISO-8601 lower bound for filtering (e.g., "2024-01-01T00:00:00Z")
  • max_timestamp::String: ISO-8601 upper bound for filtering
  • limit::Int: Maximum rows to return (default: 500, max: 10000)

Returns

  • Row-oriented (row_oriented=true): Vector{Dict} where each element is a row
  • Column-oriented (row_oriented=false): Dict{String,Vector} with column names as keys

Example

client = LogfireQueryClient()

# Get recent spans (row-oriented)
rows = query_json(client, "SELECT span_name, duration FROM records LIMIT 10")
for row in rows
    println("$(row["span_name"]): $(row["duration"])s")
end

# Get column-oriented data
cols = query_json(client, "SELECT span_name, duration FROM records LIMIT 10"; row_oriented=false)
println("Span names: ", cols["span_name"])
Logfire.record_exception! (Method)
record_exception!(span, exception; backtrace=nothing, escaped=false)

Record an exception on a span following OpenTelemetry semantic conventions.

This function sets the standard OpenTelemetry exception attributes that Logfire recognizes for its specialized exception view:

  • exception.type: The exception type name
  • exception.message: The exception message
  • exception.stacktrace: The full stack trace

It also sets the span status to error and the log level to 'error'.

Arguments

  • span: The span to record the exception on
  • exception: The exception object to record
  • backtrace: Optional backtrace. If not provided, attempts to use catch_backtrace() if called within a catch block. Pass the backtrace explicitly for best results.
  • escaped: Whether the exception escaped the span's scope, per OTEL conventions (default: false, reserved for future use)

Example

try
    error("Something went wrong")
catch e
    bt = catch_backtrace()
    record_exception!(span, e; backtrace=bt)
    rethrow()
end

Or more simply, if called within the catch block:

try
    error("Something went wrong")
catch e
    record_exception!(span, e)  # Will attempt to get backtrace automatically
    rethrow()
end

This is equivalent to Python's logfire.exception() or span.record_exception().

Logfire.record_token_usage! (Method)
record_token_usage!(span, input_tokens::Int, output_tokens::Int; model::String="")

Record token usage on a span following GenAI semantic conventions.
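A minimal sketch with illustrative token counts (in practice you would read them from the provider's usage response):

```julia
using Logfire

Logfire.configure(service_name = "my-app")

with_llm_span("chat"; system = "openai", model = "gpt-4o") do span
    # ... call the model here ...
    # Record 120 input tokens and 45 output tokens on the span
    record_token_usage!(span, 120, 45; model = "gpt-4o")
end
```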

Logfire.set_genai_messages! (Method)
set_genai_messages!(span, conv; separate_system=true)

Set gen_ai.input.messages, gen_ai.output.messages, and gen_ai.system_instructions on a span from a PromptingTools conversation.

Logfire.shutdown! (Method)
shutdown!()

Gracefully shutdown the Logfire SDK, flushing any pending telemetry. Note: With SimpleSpanProcessor, spans are exported immediately when they end, so shutdown is mainly for cleanup. For BatchSpanProcessor, this would flush pending spans.

Logfire.system_instructions_to_json (Method)
system_instructions_to_json(parts::Vector{<:AbstractMessagePart}) -> String

Serialize system instructions to JSON string for the gen_ai.system_instructions attribute.

Logfire.tool_definitions_from_pt (Method)
tool_definitions_from_pt(tool_map) -> Vector{ToolDefinition}

Convert PromptingTools tool signatures to OTEL tool definitions.

Example

get_weather(city::String) = "sunny"
tools = [get_weather]
tool_map = PT.tool_call_signature(tools)
defs = tool_definitions_from_pt(tool_map)
Logfire.tool_definitions_to_json (Method)
tool_definitions_to_json(tools::Vector{ToolDefinition}) -> String

Serialize tool definitions to JSON string for gen_ai.tool.definitions attribute.

Logfire.tracer (Function)
tracer(name::String = "logfire") -> Tracer

Get a tracer instance for creating spans.

Logfire.uninstrument_promptingtools! (Method)
uninstrument_promptingtools!()

Best-effort removal. Since PromptingTools does not expose a deregistration hook, this currently logs a warning and leaves registrations intact.

Logfire.with_llm_span (Method)
with_llm_span(f, operation::String; system="openai", model="", kwargs...)

Create a span for an LLM operation with GenAI semantic convention attributes.

Arguments

  • f: Function to execute within the span
  • operation: The operation name (e.g., "chat", "embed", "completion")
  • system: The AI system/provider (e.g., "openai", "anthropic")
  • model: The model name/ID
  • kwargs: Additional attributes to set on the span
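For example, passing an extra keyword to attach a custom attribute alongside the GenAI ones (the `temperature` kwarg is illustrative):

```julia
using Logfire

Logfire.configure(service_name = "my-app")

with_llm_span("chat"; system = "anthropic", model = "claude-sonnet",
              temperature = 0.2) do span
    # ... perform the LLM call here ...
    record_token_usage!(span, 100, 50)
end
```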
Logfire.with_span (Method)
with_span(f, name::String; attrs...)

Create a span with the given name and execute the function within it. The function f receives the span as an argument.

If auto_record_exceptions is enabled in configuration (default: true), exceptions thrown within the span will be automatically recorded using OpenTelemetry semantic conventions before being rethrown.

Example

# With auto_record_exceptions enabled (default)
Logfire.with_span("my-operation") do span
    error("This will be automatically recorded!")  # Exception automatically captured
end

# Manual exception handling (still works)
Logfire.with_span("my-operation") do span
    try
        risky_operation()
    catch e
        Logfire.record_exception!(span, e)
        # Handle exception...
    end
end
Logfire.wrap (Method)
wrap(schema::PT.AbstractPromptSchema) -> LogfireSchema

Convenience helper to wrap an existing PromptingTools schema.
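This is equivalent to piping the schema through the constructor, as in the Quick Start (model name here is illustrative):

```julia
using Logfire, PromptingTools

Logfire.configure(service_name = "my-app")

# Same as: PromptingTools.OpenAISchema() |> Logfire.LogfireSchema
schema = Logfire.wrap(PromptingTools.OpenAISchema())
aigenerate(schema, "Hello!"; model = "gpt-4o-mini")
```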
