OpenTelemetry GenAI Semantic Conventions

Logfire.jl implements the OpenTelemetry GenAI Semantic Conventions for tracing LLM operations, with specific adaptations for Logfire compatibility. This document describes the attributes and formats used.

Logfire-Specific Deviations from OTEL Standard

While Logfire.jl follows the OTEL GenAI semantic conventions, it uses Logfire's message format which differs from the standard in several ways. These deviations ensure proper rendering in the Logfire UI.

Tool Call Responses

The OTEL specification defines ToolCallResponsePart with a response field and role: "tool" for tool result messages. However, Logfire expects:

Aspect        OTEL Standard    Logfire Format
Field name    response         result
Message role  tool             user
Tool name     not specified    name field included

OTEL Standard format:

{
  "role": "tool",
  "parts": [{"type": "tool_call_response", "id": "call_123", "response": "22°C"}]
}

Logfire format (what this library produces):

{
  "role": "user",
  "parts": [{"type": "tool_call_response", "id": "call_123", "name": "get_weather", "result": "22°C"}]
}

Why These Deviations?

Logfire's UI has specific expectations for how tool results are displayed. Using the standard OTEL format results in tool responses being marked as "Unrecognised" in the Logfire dashboard. The Logfire format ensures:

  1. Proper visualization - Tool results render correctly in the conversation view
  2. Tool identification - The name field allows Logfire to associate results with their corresponding tool calls
  3. Role consistency - Using role: "user" matches how Logfire processes tool results internally
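
The renaming above can be sketched as a small transformation over plain Dicts. This is a hypothetical helper for illustration (to_logfire_tool_message is not part of the Logfire.jl API); the library performs the equivalent conversion internally:

```julia
# Hypothetical helper: rewrite a standard OTEL tool-result message into the
# Logfire shape (role "tool" -> "user", field "response" -> "result",
# plus an inline "name" for the tool).
function to_logfire_tool_message(msg::Dict, tool_name::String)
    parts = map(msg["parts"]) do part
        if get(part, "type", "") == "tool_call_response"
            Dict("type" => "tool_call_response",
                 "id" => part["id"],
                 "name" => tool_name,           # Logfire wants the tool name inline
                 "result" => part["response"])  # "response" becomes "result"
        else
            part
        end
    end
    Dict("role" => "user", "parts" => parts)    # role "tool" becomes "user"
end

otel_msg = Dict("role" => "tool",
                "parts" => [Dict("type" => "tool_call_response",
                                 "id" => "call_123",
                                 "response" => "22°C")])
logfire_msg = to_logfire_tool_message(otel_msg, "get_weather")
```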

Operation Types

The gen_ai.operation.name attribute identifies the type of GenAI operation:

Value              Description                      Example
chat               Chat completion                  OpenAI Chat API, aigenerate
create_agent       Create GenAI agent               Agent initialization
embeddings         Embeddings operation             aiembed
execute_tool       Execute a tool                   Tool execution spans
generate_content   Multimodal content generation    Gemini Generate Content
invoke_agent       Invoke GenAI agent               Agent invocation
text_completion    Text completions (legacy)        Legacy completions API
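
The OTEL GenAI conventions also use the operation name when naming spans: the recommended span name is "{gen_ai.operation.name} {gen_ai.request.model}". A minimal sketch (span_name here is a hypothetical helper, not a Logfire.jl export):

```julia
# Recommended OTEL GenAI span name: "{operation} {model}",
# e.g. "chat gpt-4o-mini" for a chat completion call.
span_name(operation::AbstractString, model::AbstractString) = "$operation $model"

span_name("chat", "gpt-4o-mini")  # "chat gpt-4o-mini"
```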

Span Attributes

Required Attributes

Attribute               Type     Description
gen_ai.operation.name   string   Operation type (see above)
gen_ai.provider.name    string   Provider identifier (e.g., "openai", "anthropic")

Request Attributes

Attribute                          Type       Description
gen_ai.request.model               string     Model requested (e.g., "gpt-4o-mini")
gen_ai.request.temperature         double     Temperature setting
gen_ai.request.max_tokens          int        Maximum tokens for response
gen_ai.request.top_p               double     Top-p sampling setting
gen_ai.request.frequency_penalty   double     Frequency penalty
gen_ai.request.presence_penalty    double     Presence penalty
gen_ai.request.stop_sequences      string[]   Stop sequences
gen_ai.request.seed                int        Random seed, if used

Response Attributes

Attribute                        Type       Description
gen_ai.response.model            string     Model that generated the response
gen_ai.response.id               string     Completion identifier
gen_ai.response.finish_reasons   string[]   Why the model stopped generating
gen_ai.usage.input_tokens        int        Tokens in the prompt
gen_ai.usage.output_tokens       int        Tokens in the response
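
Put together, a single chat span might carry an attribute set like the following. The values are illustrative only, and how attributes are attached depends on the OpenTelemetry API in use:

```julia
# Representative attribute set for one chat-completion span
# (every value below is an example, not a fixed default).
attrs = Dict{String,Any}(
    "gen_ai.operation.name"          => "chat",
    "gen_ai.provider.name"           => "openai",
    "gen_ai.request.model"           => "gpt-4o-mini",
    "gen_ai.request.temperature"     => 0.7,
    "gen_ai.request.max_tokens"      => 256,
    "gen_ai.response.model"          => "gpt-4o-mini-2024-07-18",
    "gen_ai.response.finish_reasons" => ["stop"],
    "gen_ai.usage.input_tokens"      => 12,
    "gen_ai.usage.output_tokens"     => 4,
)
```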

Output Type

The gen_ai.output.type attribute describes the output format:

Value    Description
text     Plain text
json     JSON object (known or unknown schema)
image    Image
speech   Speech

Message Formats

Messages follow a parts-based format with the following structure.

Input Messages (gen_ai.input.messages)

Array of messages sent to the model:

[
  {
    "role": "user",
    "parts": [{"type": "text", "content": "What's the weather?"}]
  },
  {
    "role": "assistant",
    "parts": [
      {"type": "tool_call", "id": "call_123", "name": "get_weather", "arguments": {"city": "Paris"}}
    ]
  },
  {
    "role": "user",
    "parts": [
      {"type": "tool_call_response", "id": "call_123", "name": "get_weather", "result": "22°C, sunny"}
    ]
  }
]

Note: Tool call responses use role: "user" and result field for Logfire compatibility. See Logfire-Specific Deviations for details.

Output Messages (gen_ai.output.messages)

Array of messages returned by the model (includes finish_reason):

[
  {
    "role": "assistant",
    "parts": [{"type": "text", "content": "The weather in Paris is 22°C and sunny."}],
    "finish_reason": "stop"
  }
]

System Instructions (gen_ai.system_instructions)

System prompt separate from chat history:

[
  {"type": "text", "content": "You are a helpful weather assistant."}
]

Message Roles

Role        Description
system      System instructions
user        User input (also used for tool execution results in Logfire format)
assistant   Model response

Note: The OTEL standard defines a tool role, but Logfire expects tool results to use role: "user". See Logfire-Specific Deviations.

Message Part Types

TextPart

Plain text content:

{"type": "text", "content": "Hello, world!"}

ToolCallRequestPart

Tool call requested by the model:

{
  "type": "tool_call",
  "id": "call_abc123",
  "name": "get_weather",
  "arguments": {"city": "Paris", "unit": "celsius"}
}

ToolCallResponsePart

Tool execution result (Logfire format):

{
  "type": "tool_call_response",
  "id": "call_abc123",
  "name": "get_weather",
  "result": "22°C, sunny"
}

Note: Uses result (not response) and includes name for Logfire compatibility.

BlobPart

Inline binary data (base64-encoded):

{
  "type": "blob",
  "modality": "image",
  "mime_type": "image/png",
  "content": "iVBORw0KGgoAAAANSUhEUgAA..."
}

UriPart

External file by URI:

{
  "type": "uri",
  "modality": "image",
  "uri": "https://example.com/image.png"
}

FilePart

Pre-uploaded file by ID:

{
  "type": "file",
  "modality": "image",
  "file_id": "file-abc123"
}

ReasoningPart

Model reasoning/thinking content:

{"type": "reasoning", "content": "Let me think about this..."}

Finish Reasons

Value            Description
stop             Natural completion
length           Max tokens reached
content_filter   Content filtered
tool_call        Model requested tool execution
error            Error occurred
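
Provider APIs spell these slightly differently (for example, OpenAI reports tool_calls and Anthropic reports end_turn or max_tokens), so a normalization step is needed before emitting the attribute. A hypothetical normalizer, not part of the Logfire.jl API:

```julia
# Hypothetical mapping from provider-specific finish reasons onto the
# values in the table above; unknown values pass through unchanged.
const FINISH_REASON_MAP = Dict(
    "tool_calls" => "tool_call",   # OpenAI's plural spelling
    "max_tokens" => "length",      # Anthropic-style token cap
    "end_turn"   => "stop",        # Anthropic natural stop
)

normalize_finish_reason(r::AbstractString) = get(FINISH_REASON_MAP, r, String(r))
```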

Tool Definitions (gen_ai.tool.definitions)

Array of available tools in OpenAI function format:

[
  {
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {"type": "string", "description": "City name"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
      },
      "required": ["city"]
    }
  }
]

Error Handling

When an error occurs, set error.type:

Value    Description
_OTHER   Fallback value for unclassified errors
Custom   Specific error type (e.g., "rate_limit", "invalid_api_key")
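
A sketch of deriving error.type from a caught exception, falling back to the spec's _OTHER. Both error_type and the custom values it returns are illustrative assumptions, not a fixed vocabulary:

```julia
# Hypothetical mapping from a caught exception to an error.type value;
# anything unrecognized falls back to the spec's "_OTHER" sentinel.
function error_type(e::Exception)
    e isa ArgumentError && return "invalid_request"
    e isa Base.IOError  && return "connection_error"
    return "_OTHER"
end
```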

Julia Types

Logfire.jl provides Julia types for all message formats in src/types.jl:

using Logfire: TextPart, ToolCallRequestPart, ToolCallResponsePart
using Logfire: InputMessage, OutputMessage
using Logfire: ROLE_USER, ROLE_ASSISTANT, FINISH_STOP

# Create a text message
msg = InputMessage(ROLE_USER, [TextPart("Hello!")])

# Create a tool call response
response = OutputMessage(
    ROLE_ASSISTANT,
    [TextPart("The weather is sunny.")],
    FINISH_STOP
)

Usage with PromptingTools

When using the LogfireSchema wrapper, messages are automatically converted:

using DotEnv
DotEnv.load!()  # Load .env file (must call explicitly)

using Logfire
using PromptingTools

Logfire.configure()
Logfire.instrument_promptingtools!()

# Messages are automatically traced with OTEL GenAI attributes
response = aigenerate("What is 2+2?")

The tracer extracts:

  • System messages → gen_ai.system_instructions
  • Conversation history → gen_ai.input.messages
  • Model response → gen_ai.output.messages
  • Tool definitions (if using aitools) → gen_ai.tool.definitions

Tool Calls Example

using DotEnv
DotEnv.load!()

using Logfire
using PromptingTools
import PromptingTools as PT

Logfire.configure()
Logfire.instrument_promptingtools!()

# Define tools
"Get weather for a city"
get_weather(city::String) = "22°C, sunny"

"Get current time for a city"
get_time(city::String) = "3:45 PM"

tools = [get_weather, get_time]
tool_map = PT.tool_call_signature(tools)

# Multi-turn conversation with tools
conv = aitools("What's the weather in Paris?"; tools, model="gpt4om", return_all=true)

# Execute tool calls
if conv[end] isa PT.AIToolRequest
    for tc in conv[end].tool_calls
        tc.content = string(PT.execute_tool(tool_map, tc))
        push!(conv, tc)
    end
end

# Get final response
resp = aigenerate(conv; model="gpt4om")
push!(conv, resp)

Julia API Reference

Creating Messages Manually

using Logfire

# Create a user message with text
user_msg = InputMessage(ROLE_USER, [TextPart("What's the weather?")])

# Create an assistant response with tool call
assistant_msg = InputMessage(ROLE_ASSISTANT, [
    ToolCallRequestPart("get_weather"; id="call_123", arguments=Dict("city" => "Paris"))
])

# Create a tool response (uses ROLE_USER for Logfire compatibility)
tool_msg = InputMessage(ROLE_USER, [
    ToolCallResponsePart("22°C, sunny"; id="call_123", name="get_weather")
])

# Create output message with finish reason
output = OutputMessage(ROLE_ASSISTANT, [TextPart("The weather is sunny.")], FINISH_STOP)

# Serialize to JSON
json_input = messages_to_json([user_msg, assistant_msg, tool_msg])
json_output = messages_to_json([output])

Creating Tool Definitions

using Logfire

# Create tool definition manually
tool = ToolDefinition(
    "get_weather";
    description="Get current weather for a city",
    parameters=Dict{String,Any}(
        "type" => "object",
        "properties" => Dict(
            "city" => Dict("type" => "string", "description" => "City name"),
            "unit" => Dict("type" => "string", "enum" => ["celsius", "fahrenheit"])
        ),
        "required" => ["city"]
    )
)

# Serialize to JSON
json = tool_definitions_to_json([tool])

Converting PromptingTools Messages

using Logfire
import PromptingTools as PT

# Convert a PT conversation to OTEL format
conv = [
    PT.SystemMessage("You are helpful"),
    PT.UserMessage("Hello"),
    PT.AIMessage("Hi there!")
]

# Convert with system message extraction
result = pt_conversation_to_otel(conv; separate_system=true)

# Access converted messages
println(result.system_instructions)  # Array of TextPart
println(result.input_messages)       # Array of InputMessage
println(result.output_messages)      # Array of OutputMessage

Setting Span Attributes

using Logfire
using OpenTelemetryAPI

# Create a span
span = create_span("gen_ai.chat", tracer("myapp"))

# Set messages from a PT conversation
set_genai_messages!(span, conv)

# Set tool definitions
set_tool_definitions!(span, tool_map)

# End span
end_span!(span)