Logfire.jl
Julia client for Pydantic Logfire - OpenTelemetry-based observability for LLM applications.
Features
- OpenTelemetry Integration - Full OTEL support for tracing LLM calls
- PromptingTools.jl Support - Automatic instrumentation of `aigenerate`, `aitools`, `aiextract`
- GenAI Semantic Conventions - Compliant with the OTEL GenAI specs
- Query API - Download your telemetry data using SQL
- Alternative Backends - Send data to Jaeger, Langfuse, or any OTEL-compatible backend
- Exception Tracking - Automatic exception capture with full stacktraces
Quick Start
```julia
using DotEnv
DotEnv.load!()  # Load .env file (must call explicitly)

using Logfire
using PromptingTools

# Configure Logfire (uses LOGFIRE_TOKEN from environment)
Logfire.configure(service_name = "my-app")

# Instrument PromptingTools
Logfire.instrument_promptingtools!()

# All LLM calls are now traced
response = aigenerate("What is 2+2?")
```

Manual Schema Wrapping (No Auto-Instrumentation)
If you prefer not to use auto-instrumentation, you can explicitly wrap any PromptingTools schema:
```julia
using Logfire, PromptingTools

Logfire.configure(service_name = "my-app")

# Wrap the schema you want to trace
schema = PromptingTools.OpenAISchema() |> Logfire.LogfireSchema

# Use it directly - no instrument_promptingtools!() needed
aigenerate(schema, "Hello!"; model = "gpt-5-mini")
```

This gives you fine-grained control over which calls are traced.
Authentication
Set your Logfire token via one of:
- `.env` file with `LOGFIRE_TOKEN=...` (call `DotEnv.load!()` first)
- Environment variable: `ENV["LOGFIRE_TOKEN"] = "..."`
- Direct argument: `Logfire.configure(token="...")`
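The usual convention is that an explicit argument wins over the environment variable. As an illustrative sketch (the `resolve_token` helper below is hypothetical, not part of Logfire.jl):

```julia
# Hypothetical helper illustrating a common precedence: an explicit
# `token` argument wins; otherwise fall back to the LOGFIRE_TOKEN env var.
function resolve_token(explicit_token::Union{String,Nothing}, env::AbstractDict)
    explicit_token !== nothing && return explicit_token
    return get(env, "LOGFIRE_TOKEN", nothing)
end

resolve_token("pylf_abc", Dict("LOGFIRE_TOKEN" => "pylf_env"))  # explicit wins
resolve_token(nothing, Dict("LOGFIRE_TOKEN" => "pylf_env"))     # env fallback
```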
What Gets Captured
- Request params: model, temperature, top_p, max_tokens, stop, penalties
- Usage: input/output/total tokens, latency, cost
- Provider metadata: model returned, status, finish_reason, response_id
- Tool/function calls: count + full payload
- Conversation: roles + content for all messages
- Exceptions: type, message, and full stacktrace
Configuration Options
```julia
Logfire.configure(
    token = "...",                        # Logfire write token (or use LOGFIRE_TOKEN env var)
    service_name = "my-app",              # Service name for telemetry
    service_version = "1.0.0",            # Service version
    environment = "production",           # Deployment environment
    send_to_logfire = :if_token_present,  # :always, :never, or :if_token_present
    endpoint = "...",                     # Custom OTLP endpoint
    auto_record_exceptions = true         # Automatic exception capture
)
```

Manual Spans
```julia
# Generic span
with_span("my-operation") do span
    set_span_attribute!(span, "custom.key", "value")
    # do work...
end

# LLM-specific span
with_llm_span("chat"; system = "openai", model = "gpt-4o") do span
    # do LLM work...
    record_token_usage!(span, 100, 50)
end
```

Exception Handling
```julia
# Automatic (default)
with_span("risky-operation") do span
    error("Oops!")  # Automatically captured
end

# Manual (inside a `with_span` block, which provides `span`)
with_span("risky-operation") do span
    try
        risky_operation()
    catch e
        record_exception!(span, e; backtrace = catch_backtrace())
        rethrow()
    end
end
```

Documentation
- Query API - Download telemetry data using SQL queries
- Alternative Backends - Use Jaeger, Langfuse, or other OTEL backends
- OpenTelemetry GenAI Semantic Conventions - Message formats and span attributes
API Reference
Logfire.ERROR_TYPE_OTHER — Constant
Error type for GenAI operations.
Well-known value from OTEL semantic conventions:
_OTHER: Fallback error value when no custom value is defined
Custom values may be used for specific error types.
Logfire.AbstractMessagePart — Type
Abstract base type for all message parts.
Logfire.BlobPart — Type
`BlobPart`

Represents blob binary data sent inline to the model.

Fields

- `modality::Modality`: The general modality (image, video, audio)
- `content::String`: Base64-encoded binary content
- `mime_type::Union{String,Nothing}`: IANA MIME type
Logfire.FilePart — Type
`FilePart`

Represents an externally referenced file sent to the model by file ID.

Fields

- `modality::Modality`: The general modality (image, video, audio)
- `file_id::String`: Identifier referencing a pre-uploaded file
- `mime_type::Union{String,Nothing}`: IANA MIME type
Logfire.FinishReason — Type
Reason for finishing the generation.
Logfire.GenericPart — Type
`GenericPart`

Represents an arbitrary message part with a custom type. Allows extensibility with custom message part types.

Fields

- `type::String`: The type identifier
- `properties::Dict{String,Any}`: Additional properties
Logfire.InputMessage — Type
`InputMessage`

Represents an input message sent to the model.

Fields

- `role::Role`: Role of the entity that created the message
- `parts::Vector{AbstractMessagePart}`: List of message parts
- `name::Union{String,Nothing}`: Optional participant name
Logfire.LogfireConfig — Type
`LogfireConfig`

Configuration options for the Logfire SDK.

Fields

- `token::Union{String, Nothing}`: Logfire write token
- `service_name::String`: Service name for telemetry
- `service_version::Union{String, Nothing}`: Service version
- `environment::String`: Deployment environment (development, staging, production)
- `send_to_logfire::Symbol`: Export control (`:if_token_present`, `:always`, `:never`)
- `endpoint::String`: OTLP endpoint URL
- `console::Bool`: Print spans to console
- `scrubbing::Bool`: Enable data scrubbing
- `auto_record_exceptions::Bool`: Automatically record exceptions in spans (default: true)
Logfire.LogfireQueryClient — Type
`LogfireQueryClient`

Client for querying Logfire data via the Query API.

Fields

- `read_token::String`: Logfire read token for authentication
- `endpoint::String`: Query API endpoint URL

Example

```julia
using Logfire

# Create client (uses LOGFIRE_READ_TOKEN from environment)
client = LogfireQueryClient()

# Or with explicit token
client = LogfireQueryClient(read_token = "pylf_v1_us_...")
```

Logfire.LogfireQueryClient — Method
LogfireQueryClient(; read_token=nothing, endpoint=QUERY_ENDPOINT_US)

Create a query client for downloading data from Logfire.

Keywords

- `read_token::String`: Read token (or uses the `LOGFIRE_READ_TOKEN` env var)
- `endpoint::String`: Query API endpoint (default: US region)

Endpoints

- US: https://logfire-us.pydantic.dev/v1/query
- EU: https://logfire-eu.pydantic.dev/v1/query
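Selecting a region can be reduced to a small lookup; the `query_endpoint` helper below is hypothetical, and only the two URLs come from the list above:

```julia
# Hypothetical region -> Query API endpoint lookup (URLs from the docs above).
const QUERY_ENDPOINTS = Dict(
    :us => "https://logfire-us.pydantic.dev/v1/query",
    :eu => "https://logfire-eu.pydantic.dev/v1/query",
)

query_endpoint(region::Symbol) = QUERY_ENDPOINTS[region]

# An EU-region client would then be constructed as:
# client = LogfireQueryClient(read_token = "pylf_v1_eu_...", endpoint = query_endpoint(:eu))
```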
Logfire.LogfireSchema — Type
LogfireSchema(inner::PT.AbstractPromptSchema)

Tracer schema that wraps any PromptingTools prompt schema and emits OpenTelemetry GenAI spans. Works with all `ai*` APIs in PromptingTools.
Logfire.Modality — Type
Modality of media content.
Logfire.OperationName — Type
GenAI operation type.
Well-known values from OTEL semantic conventions:
- `chat`: Chat completion (e.g., OpenAI Chat API)
- `create_agent`: Create a GenAI agent
- `embeddings`: Embeddings operation
- `execute_tool`: Execute a tool
- `generate_content`: Multimodal content generation (e.g., Gemini)
- `invoke_agent`: Invoke a GenAI agent
- `text_completion`: Text completions (legacy)
Logfire.OutputMessage — Type
`OutputMessage`

Represents an output message generated by the model.

Fields

- `role::Role`: Role of the entity that created the message
- `parts::Vector{AbstractMessagePart}`: List of message parts
- `finish_reason::FinishReason`: Reason for finishing generation
- `name::Union{String,Nothing}`: Optional participant name
Logfire.OutputType — Type
GenAI output type.
Well-known values from OTEL semantic conventions:
- `text`: Plain text
- `json`: JSON object with known or unknown schema
- `image`: Image
- `speech`: Speech
Logfire.ReasoningPart — Type
`ReasoningPart`

Represents reasoning/thinking content received from the model.

Fields

- `content::String`: Reasoning/thinking content
Logfire.Role — Type
Role of the entity that created the message.
Logfire.TextPart — Type
`TextPart`

Represents text content sent to or received from the model.

Fields

- `content::String`: Text content
Logfire.ToolCallRequestPart — Type
`ToolCallRequestPart`

Represents a tool call requested by the model.

Fields

- `name::String`: Name of the tool
- `id::Union{String,Nothing}`: Unique identifier for the tool call
- `arguments::Any`: Arguments for the tool call (Dict, String, or nothing)
Logfire.ToolCallResponsePart — Type
`ToolCallResponsePart`

Represents a tool call result sent to the model.

Fields

- `response::Any`: Tool call response
- `id::Union{String,Nothing}`: Unique tool call identifier
- `name::Union{String,Nothing}`: Name of the tool that was called
Logfire.ToolDefinition — Type
`ToolDefinition`

Represents a tool definition in OpenAI function format.

Fields

- `name::String`: Tool name
- `description::String`: Tool description
- `parameters::Dict{String,Any}`: JSON Schema for parameters
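To make "OpenAI function format" concrete, here is a sketch of how such a definition might be rendered as a Dict before JSON serialization. The struct and helper are illustrative stand-ins, not the package's internals, and the exact output layout is an assumption:

```julia
# Illustrative stand-in mirroring the documented ToolDefinition fields.
struct MyToolDefinition
    name::String
    description::String
    parameters::Dict{String,Any}
end

# Render in OpenAI function format (assumed target layout).
to_openai_function(t::MyToolDefinition) = Dict(
    "type" => "function",
    "function" => Dict(
        "name" => t.name,
        "description" => t.description,
        "parameters" => t.parameters,
    ),
)

weather = MyToolDefinition(
    "get_weather",
    "Return the weather for a city",
    Dict{String,Any}(
        "type" => "object",
        "properties" => Dict("city" => Dict("type" => "string")),
    ),
)
to_openai_function(weather)  # nested Dict, ready for JSON serialization
```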
Logfire.UriPart — Type
`UriPart`

Represents an externally referenced file sent to the model by URI.

Fields

- `modality::Modality`: The general modality (image, video, audio)
- `uri::String`: URI referencing the data
- `mime_type::Union{String,Nothing}`: IANA MIME type
Logfire._extract_tool_call_data — Method
Extract structured tool call data from a vector of tool calls. Handles both ToolMessage objects and Dict representations.
Logfire._message_role — Method
Get the role of a message, using PT's role4render when available.
Logfire._parse_query_response — Method
_parse_query_response(raw, row_oriented) -> Vector{Dict} or Dict

Parse the Logfire API response format into user-friendly data structures.
Logfire._record_detailed_usage! — Method
_record_detailed_usage!(span, ai_msg)

Record detailed usage statistics from extras to OTEL GenAI attributes.
Reads unified keys first, falls back to raw provider dicts for backwards compatibility with older PromptingTools versions.
Unified keys supported:
- `:cache_read_tokens`, `:cache_write_tokens` - cache token usage
- `:cache_write_1h_tokens`, `:cache_write_5m_tokens` - Anthropic ephemeral cache
- `:reasoning_tokens` - chain-of-thought tokens
- `:audio_input_tokens`, `:audio_output_tokens` - audio tokens
- `:accepted_prediction_tokens`, `:rejected_prediction_tokens` - prediction tokens
- `:service_tier` - provider service tier
- `:web_search_requests` - Anthropic server tool usage
Fallback dicts:
- `:prompt_tokens_details` - OpenAI prompt token details
- `:completion_tokens_details` - OpenAI completion token details
- `:cache_read_input_tokens`, `:cache_creation_input_tokens` - Anthropic legacy keys
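The "unified keys first, provider dicts second" lookup can be sketched over plain Dicts. The `cache_read_tokens` helper and the nested `:cached_tokens` field are illustrative assumptions; the top-level key names are the documented ones:

```julia
# Illustrative: read a unified key, then fall back to provider-specific
# locations, mirroring the documented lookup order for cache-read tokens.
function cache_read_tokens(extras::Dict)
    haskey(extras, :cache_read_tokens) && return extras[:cache_read_tokens]
    # Anthropic legacy key
    haskey(extras, :cache_read_input_tokens) && return extras[:cache_read_input_tokens]
    # OpenAI nested details dict (field name assumed)
    details = get(extras, :prompt_tokens_details, nothing)
    details !== nothing && return get(details, :cached_tokens, 0)
    return 0
end

cache_read_tokens(Dict(:cache_read_tokens => 42))       # unified key wins
cache_read_tokens(Dict(:cache_read_input_tokens => 7))  # Anthropic fallback
```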
Logfire._record_messages_as_attributes! — Method
Record messages as span attributes using OTEL GenAI semantic conventions.
Sets the following attributes:
- `gen_ai.input.messages`: Chat history (all messages except the final response)
- `gen_ai.output.messages`: Model response with finish_reason
- `gen_ai.system_instructions`: System prompt (extracted from the conversation)
Uses typed constructs from types.jl for proper JSON serialization.
Logfire._record_tool_calls! — Method
Record tool calls from AIToolRequest or AIMessage.
Handles both:
- `AIToolRequest.tool_calls` (direct field with `Vector{ToolMessage}`)
- `extras[:tool_calls]` (fallback for AIMessage with tool calls in extras)
Logfire._setup_providers! — Method
_setup_providers!(cfg::LogfireConfig)

Initialize OpenTelemetry tracer and meter providers based on configuration.
Logfire.add_prompt_attribute! — Method
add_prompt_attribute!(span, messages::Vector)

Add prompt messages as span attributes.
Logfire.add_response_attribute! — Method
add_response_attribute!(span, content::AbstractString)

Add response content as a span attribute.
Logfire.configure — Method
configure(; kwargs...)

Initialize the Logfire SDK with the specified options.
Keywords
- `token::String`: Logfire write token (or use the LOGFIRE_TOKEN env var)
- `service_name::String`: Name of the service (default: "julia-app")
- `service_version::String`: Version of the service
- `environment::String`: Deployment environment (default: "development")
- `send_to_logfire::Symbol`: Export control (`:if_token_present`, `:always`, `:never`)
- `endpoint::String`: Custom OTLP endpoint (default: Logfire US)
- `scrubbing::Bool`: Enable data scrubbing (default: false)
- `console::Bool`: Print spans to console (default: false)
- `auto_record_exceptions::Bool`: Automatically record exceptions in spans (default: true)
Example
```julia
using Logfire

Logfire.configure(
    service_name = "my-llm-app",
    environment = "production",
    auto_record_exceptions = true  # Automatically capture exceptions in spans
)
```

Logfire.create_logfire_exporter — Method
create_logfire_exporter(cfg::LogfireConfig) -> OtlpHttpTracesExporter

Create an OTLP HTTP exporter configured for the Logfire backend.
Logfire.create_resource — Method
create_resource(cfg::LogfireConfig) -> Resource

Create an OpenTelemetry Resource with Logfire-compatible attributes.
Logfire.flush! — Method
flush!()

Force flush any pending telemetry data. Note: with SimpleSpanProcessor, spans are exported immediately when they end. This function is mainly useful for BatchSpanProcessor, but OpenTelemetrySDK doesn't expose a direct flush API. Spans will be exported automatically.
Logfire.get_config — Method
get_config() -> LogfireConfig

Get the current global configuration.
Logfire.instrument_promptingtools! — Method
instrument_promptingtools!(; models=nothing, base_schema=PromptingTools.OpenAISchema())

Register Logfire's LogfireSchema tracer for the given model names. If `models` is `nothing`, all models currently registered in PromptingTools are instrumented. Throws an error if no models are provided and none can be discovered.
Logfire.instrument_promptingtools_model! — Method
instrument_promptingtools_model!(name; base_schema=PromptingTools.OpenAISchema())

Register Logfire tracing for a single model name or alias. Uses the already registered PromptingTools schema when available; otherwise falls back to `base_schema`. Safe to call multiple times.
Logfire.is_configured — Method
is_configured() -> Bool

Check if Logfire has been configured.
Logfire.messages_to_json — Method
messages_to_json(messages::Vector{InputMessage}) -> String

Serialize input messages to a JSON string for the gen_ai.input.messages attribute.
Logfire.messages_to_json — Method
messages_to_json(messages::Vector{OutputMessage}) -> String

Serialize output messages to a JSON string for the gen_ai.output.messages attribute.
Logfire.part_to_dict — Method
Convert any message part to a Dict for JSON serialization.
Logfire.pt_conversation_to_otel — Method
pt_conversation_to_otel(conv; separate_system=true) -> NamedTuple

Convert a PromptingTools conversation to OTEL format. Returns `(; input_messages, output_messages, system_instructions)`. If `separate_system=true`, system messages are extracted to `system_instructions`.
Logfire.pt_message_to_input — Method
pt_message_to_input(msg) -> InputMessage

Convert a PromptingTools message to an OTEL InputMessage.
Handles:
- SystemMessage → role=ROLE_SYSTEM
- UserMessage → role=ROLE_USER
- UserMessageWithImages → role=ROLE_USER with BlobPart/UriPart
- AIMessage → role=ROLE_ASSISTANT
- AIToolRequest → role=ROLE_ASSISTANT with ToolCallRequestPart
- ToolMessage → role=ROLE_USER with ToolCallResponsePart
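The message-type-to-role mapping above can be summarized as a simple lookup table (a sketch only; the real function also builds the appropriate message parts):

```julia
# Sketch of the documented PromptingTools-message-type -> OTEL role mapping.
const PT_ROLE_MAP = Dict(
    "SystemMessage"         => "system",
    "UserMessage"           => "user",
    "UserMessageWithImages" => "user",
    "AIMessage"             => "assistant",
    "AIToolRequest"         => "assistant",
    "ToolMessage"           => "user",
)

PT_ROLE_MAP["AIToolRequest"]  # "assistant"
```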
Logfire.pt_message_to_output — Method
pt_message_to_output(msg; finish_reason=nothing) -> OutputMessage

Convert a PromptingTools message to an OTEL OutputMessage. Automatically detects `finish_reason` if not provided.
Logfire.query_csv — Method
query_csv(client, sql; kwargs...) -> String

Execute a SQL query and return CSV data as a string.
Arguments
- `client::LogfireQueryClient`: Query client instance
- `sql::String`: SQL query to execute
Keywords
- `min_timestamp::String`: ISO-8601 lower bound for filtering
- `max_timestamp::String`: ISO-8601 upper bound for filtering
- `limit::Int`: Maximum rows to return (default: 500, max: 10000)
Returns
String: CSV-formatted data
Example
```julia
client = LogfireQueryClient()
csv_data = query_csv(client, "SELECT span_name, duration FROM records LIMIT 100")
println(csv_data)

# Or write to file
open("export.csv", "w") do f
    write(f, csv_data)
end
```

Logfire.query_json — Method
query_json(client, sql; row_oriented=true, kwargs...) -> Vector{Dict} or Dict

Execute a SQL query and return JSON data.
Arguments
- `client::LogfireQueryClient`: Query client instance
- `sql::String`: SQL query to execute
Keywords
- `row_oriented::Bool=true`: If true, returns `Vector{Dict}` (each row is a dict). If false, returns `Dict{String,Vector}` (column-oriented)
- `min_timestamp::String`: ISO-8601 lower bound for filtering (e.g., "2024-01-01T00:00:00Z")
- `max_timestamp::String`: ISO-8601 upper bound for filtering
- `limit::Int`: Maximum rows to return (default: 500, max: 10000)
Returns
- Row-oriented (`row_oriented=true`): `Vector{Dict}` where each element is a row
- Column-oriented (`row_oriented=false`): `Dict{String,Vector}` with column names as keys
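The two orientations carry the same data, so converting between them is a small transform. A sketch, assuming every row has the same keys (the `rows_to_columns` helper is illustrative, not part of the package):

```julia
# Convert row-oriented results (Vector of Dicts) to column-oriented
# (Dict of column-name => Vector), assuming uniform keys per row.
function rows_to_columns(rows::Vector{<:Dict})
    isempty(rows) && return Dict{String,Vector}()
    return Dict(k => [row[k] for row in rows] for k in keys(first(rows)))
end

rows = [Dict("span_name" => "chat", "duration" => 0.4),
        Dict("span_name" => "embed", "duration" => 0.1)]
rows_to_columns(rows)["span_name"]  # ["chat", "embed"]
```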
Example
```julia
client = LogfireQueryClient()

# Get recent spans (row-oriented)
rows = query_json(client, "SELECT span_name, duration FROM records LIMIT 10")
for row in rows
    println("$(row["span_name"]): $(row["duration"])s")
end

# Get column-oriented data
cols = query_json(client, "SELECT span_name, duration FROM records LIMIT 10"; row_oriented=false)
println("Span names: ", cols["span_name"])
```

Logfire.record_exception! — Method
record_exception!(span, exception; backtrace=nothing, escaped=false)

Record an exception on a span following OpenTelemetry semantic conventions.
This function sets the standard OpenTelemetry exception attributes that Logfire recognizes for its specialized exception view:
- `exception.type`: The exception type name
- `exception.message`: The exception message
- `exception.stacktrace`: The full stack trace
It also sets the span status to error and the log level to 'error'.
Arguments
- `span`: The span to record the exception on
- `exception`: The exception object to record
- `backtrace`: Optional backtrace. If not provided, attempts to use `catch_backtrace()` if called within a catch block. Pass the backtrace explicitly for best results.
- `escaped`: Whether the exception message should be escaped (default: false, reserved for future use)
Example
```julia
try
    error("Something went wrong")
catch e
    bt = catch_backtrace()
    record_exception!(span, e; backtrace = bt)
    rethrow()
end
```

Or more simply, if called within the catch block:

```julia
try
    error("Something went wrong")
catch e
    record_exception!(span, e)  # Will attempt to get the backtrace automatically
    rethrow()
end
```

This is equivalent to Python's logfire.exception() or span.record_exception().
Logfire.record_token_usage! — Method
record_token_usage!(span, input_tokens::Int, output_tokens::Int; model::String="")

Record token usage on a span following GenAI semantic conventions.
Logfire.set_genai_messages! — Method
set_genai_messages!(span, conv; separate_system=true)

Set gen_ai.input.messages, gen_ai.output.messages, and gen_ai.system_instructions on a span from a PromptingTools conversation.
Logfire.set_span_attribute! — Method
set_span_attribute!(span, key::String, value)

Set an attribute on a span.
Logfire.set_span_status_error! — Method
set_span_status_error!(span, message::String)

Set span status to error with a message.
Logfire.set_tool_definitions! — Method
set_tool_definitions!(span, tool_map)

Set gen_ai.tool.definitions on a span from PromptingTools tool signatures.
Logfire.set_tool_definitions! — Method
set_tool_definitions!(span, tools::Vector)

Set gen_ai.tool.definitions on a span from a vector of functions.
Logfire.should_send_to_logfire — Function
should_send_to_logfire(cfg::LogfireConfig) -> Bool

Determine if telemetry should be sent to Logfire based on configuration.
Logfire.shutdown! — Method
shutdown!()

Gracefully shut down the Logfire SDK, flushing any pending telemetry. Note: with SimpleSpanProcessor, spans are exported immediately when they end, so shutdown is mainly for cleanup. For BatchSpanProcessor, this would flush pending spans.
Logfire.system_instructions_to_json — Method
system_instructions_to_json(parts::Vector{<:AbstractMessagePart}) -> String

Serialize system instructions to a JSON string for the gen_ai.system_instructions attribute.
Logfire.tool_definitions_from_functions — Method
tool_definitions_from_functions(tools::Vector) -> Vector{ToolDefinition}

Create tool definitions from a vector of functions using PT.tool_call_signature.
Logfire.tool_definitions_from_pt — Method
tool_definitions_from_pt(tool_map) -> Vector{ToolDefinition}

Convert PromptingTools tool signatures to OTEL tool definitions.
Example
```julia
get_weather(city::String) = "sunny"
tools = [get_weather]
tool_map = PT.tool_call_signature(tools)
defs = tool_definitions_from_pt(tool_map)
```

Logfire.tool_definitions_to_json — Method
tool_definitions_to_json(tools::Vector{ToolDefinition}) -> String

Serialize tool definitions to a JSON string for the gen_ai.tool.definitions attribute.
Logfire.tracer — Function
tracer(name::String = "logfire") -> Tracer

Get a tracer instance for creating spans.
Logfire.uninstrument_promptingtools! — Method
uninstrument_promptingtools!()

Best-effort removal. Since PromptingTools does not expose a deregistration hook, this currently logs a warning and leaves registrations intact.
Logfire.with_llm_span — Method
with_llm_span(f, operation::String; system="openai", model="", kwargs...)

Create a span for an LLM operation with GenAI semantic convention attributes.
Arguments
- `f`: Function to execute within the span
- `operation`: The operation name (e.g., "chat", "embed", "completion")
- `system`: The AI system/provider (e.g., "openai", "anthropic")
- `model`: The model name/ID
- `kwargs`: Additional attributes to set on the span
Logfire.with_span — Method
with_span(f, name::String; attrs...)

Create a span with the given name and execute the function within it. The function `f` receives the span as an argument.
If auto_record_exceptions is enabled in configuration (default: true), exceptions thrown within the span will be automatically recorded using OpenTelemetry semantic conventions before being rethrown.
Example
```julia
# With auto_record_exceptions enabled (default)
Logfire.with_span("my-operation") do span
    error("This will be automatically recorded!")  # Exception automatically captured
end

# Manual exception handling (still works)
Logfire.with_span("my-operation") do span
    try
        risky_operation()
    catch e
        Logfire.record_exception!(span, e)
        # Handle exception...
    end
end
```

Logfire.wrap — Method

wrap(schema::PT.AbstractPromptSchema) -> LogfireSchema

Convenience helper to wrap an existing PromptingTools schema.