Reference

# PromptingTools.ALLOWED_PREFERENCES (Constant)

Keys that are allowed to be set via set_preferences!

# PromptingTools.ALTERNATIVE_GENERATION_COSTS (Constant)

```julia
ALTERNATIVE_GENERATION_COSTS
```

Tracker of alternative costing models, eg, for image generation (dall-e-3), the cost is driven by quality/size.

# PromptingTools.ANTHROPIC_TOOL_PROMPT (Constant)

Simple template to add to the System Message when doing data extraction with Anthropic models.

It has 3 placeholders: tool_name, tool_description, and tool_parameters, which are filled with the tool's name, description, and parameters. Source: https://docs.anthropic.com/claude/docs/functions-external-tools

# PromptingTools.BETA_HEADERS_ANTHROPIC (Constant)

```julia
BETA_HEADERS_ANTHROPIC
```

A vector of symbols representing the beta features to be used.

Allowed:

  • :tools: Enables tools in the conversation.

  • :cache: Enables prompt caching.

  • :long_output: Enables long outputs (up to 8K tokens) with Anthropic's Sonnet 3.5.

  • :computer_use: Enables the use of the computer tool.

# PromptingTools.CONV_HISTORY (Constant)

```julia
CONV_HISTORY
```

Tracks the most recent conversations through the ai_str macros.

Preference available: MAX_HISTORY_LENGTH, which sets how many last messages should be remembered.

See also: push_conversation!, resize_conversation!
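
For illustration, a minimal sketch of how the history fills up (assumes a configured API key; `ai!"..."` continues the latest conversation):

```julia
using PromptingTools

ai"What is the capital of France?"     # the conversation is recorded
length(PromptingTools.CONV_HISTORY)    # eg, 1 conversation stored so far
ai!"And what is its population?"       # continues the most recent conversation
```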

# PromptingTools.MODEL_ALIASES (Constant)

```julia
MODEL_ALIASES
```

A dictionary of model aliases. Aliases let you refer to models by a short name instead of their full name, for convenience.

Accessing the aliases

```julia
PromptingTools.MODEL_ALIASES["gpt3"]
```

Register a new model alias

```julia
PromptingTools.MODEL_ALIASES["gpt3"] = "gpt-3.5-turbo"
```

# PromptingTools.MODEL_REGISTRY (Constant)

```julia
MODEL_REGISTRY
```

A store of available model names and their specs (ie, name, costs per token, etc.)

Accessing the registry

You can use either the alias or the full name to access the model spec:

```julia
PromptingTools.MODEL_REGISTRY["gpt-3.5-turbo"]
```

Registering a new model

```julia
register_model!(
    name = "gpt-3.5-turbo",
    schema = :OpenAISchema,
    cost_of_token_prompt = 0.0015,
    cost_of_token_generation = 0.002,
    description = "GPT-3.5 Turbo is a 175B parameter model and a common default on the OpenAI API.")
```

Registering a model alias

```julia
PromptingTools.MODEL_ALIASES["gpt3"] = "gpt-3.5-turbo"
```

# PromptingTools.OPENAI_TOKEN_IDS_GPT35_GPT4 (Constant)

Token IDs for GPT3.5 and GPT4 from https://platform.openai.com/tokenizer

# PromptingTools.PREFERENCES (Constant)

```julia
PREFERENCES
```

You can set preferences for PromptingTools by setting environment variables or by using set_preferences!. It will create a LocalPreferences.toml file in your current directory and will reload your preferences from there.

Check your preferences by calling get_preferences(key::String).
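
For example, a minimal sketch of setting and reading back a preference (the key must be one of the allowed preferences listed below):

```julia
using PromptingTools

PromptingTools.set_preferences!("MODEL_CHAT" => "gpt-4o-mini")
PromptingTools.get_preferences("MODEL_CHAT")  # "gpt-4o-mini"
```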

Available Preferences (for set_preferences!)

  • OPENAI_API_KEY: The API key for the OpenAI API. See OpenAI's documentation for more information.

  • AZURE_OPENAI_API_KEY: The API key for the Azure OpenAI API. See Azure OpenAI's documentation for more information.

  • AZURE_OPENAI_HOST: The host for the Azure OpenAI API. See Azure OpenAI's documentation for more information.

  • MISTRAL_API_KEY: The API key for the Mistral AI API. See Mistral AI's documentation for more information.

  • COHERE_API_KEY: The API key for the Cohere API. See Cohere's documentation for more information.

  • DATABRICKS_API_KEY: The API key for the Databricks Foundation Model API. See Databricks' documentation for more information.

  • DATABRICKS_HOST: The host for the Databricks API. See Databricks' documentation for more information.

  • TAVILY_API_KEY: The API key for the Tavily Search API. Register here. See more information here.

  • GOOGLE_API_KEY: The API key for Google Gemini models. Get yours from here. If you see a documentation page ("Available languages and regions for Google AI Studio and Gemini API"), it means that it's not yet available in your region.

  • ANTHROPIC_API_KEY: The API key for the Anthropic API. Get yours from here.

  • VOYAGE_API_KEY: The API key for the Voyage API. Free tier is up to 50M tokens! Get yours from here.

  • GROQ_API_KEY: The API key for the Groq API. Free in beta! Get yours from here.

  • DEEPSEEK_API_KEY: The API key for the DeepSeek API. Get $5 credit when you join. Get yours from here.

  • OPENROUTER_API_KEY: The API key for the OpenRouter API. Get yours from here.

  • CEREBRAS_API_KEY: The API key for the Cerebras API. Get yours from here.

  • SAMBANOVA_API_KEY: The API key for the Sambanova API. Get yours from here.

  • XAI_API_KEY: The API key for the XAI API. Get your key from here.

  • MODEL_CHAT: The default model to use for aigenerate and most ai* calls. See MODEL_REGISTRY for a list of available models or define your own.

  • MODEL_EMBEDDING: The default model to use for aiembed (embedding documents). See MODEL_REGISTRY for a list of available models or define your own.

  • PROMPT_SCHEMA: The default prompt schema to use for aigenerate and most ai* calls (if not specified in MODEL_REGISTRY). Set as a string, eg, "OpenAISchema". See PROMPT_SCHEMA for more information.

  • MODEL_ALIASES: A dictionary of model aliases (alias => full_model_name). Aliases are used to refer to models by their aliases instead of their full names to make it more convenient to use them. See MODEL_ALIASES for more information.

  • MAX_HISTORY_LENGTH: The maximum length of the conversation history. Defaults to 5. Set to nothing to disable history. See CONV_HISTORY for more information.

  • LOCAL_SERVER: The URL of the local server to use for ai* calls. Defaults to http://localhost:10897/v1. This server is called when you call model="local". See ?LocalServerOpenAISchema for more information and examples.

  • LOG_DIR: The directory to save the logs to, eg, when using SaverSchema <: AbstractTracerSchema. Defaults to joinpath(pwd(), "log"). Refer to ?SaverSchema for more information on how it works and examples.

At the moment it is not possible to persist changes to MODEL_REGISTRY across sessions. Define your register_model!() calls in your startup.jl file to make them available across sessions or put them at the top of your script.

Available ENV Variables

  • OPENAI_API_KEY: The API key for the OpenAI API.

  • AZURE_OPENAI_API_KEY: The API key for the Azure OpenAI API.

  • AZURE_OPENAI_HOST: The host for the Azure OpenAI API. This is the URL built as https://<resource-name>.openai.azure.com.

  • MISTRAL_API_KEY: The API key for the Mistral AI API.

  • COHERE_API_KEY: The API key for the Cohere API.

  • LOCAL_SERVER: The URL of the local server to use for ai* calls. Defaults to http://localhost:10897/v1. This server is called when you call model="local".

  • DATABRICKS_API_KEY: The API key for the Databricks Foundation Model API.

  • DATABRICKS_HOST: The host for the Databricks API.

  • TAVILY_API_KEY: The API key for the Tavily Search API. Register here. See more information here.

  • GOOGLE_API_KEY: The API key for Google Gemini models. Get yours from here. If you see a documentation page ("Available languages and regions for Google AI Studio and Gemini API"), it means that it's not yet available in your region.

  • ANTHROPIC_API_KEY: The API key for the Anthropic API. Get yours from here.

  • VOYAGE_API_KEY: The API key for the Voyage API. Free tier is up to 50M tokens! Get yours from here.

  • GROQ_API_KEY: The API key for the Groq API. Free in beta! Get yours from here.

  • DEEPSEEK_API_KEY: The API key for the DeepSeek API. Get $5 credit when you join. Get yours from here.

  • OPENROUTER_API_KEY: The API key for the OpenRouter API. Get yours from here.

  • CEREBRAS_API_KEY: The API key for the Cerebras API.

  • SAMBANOVA_API_KEY: The API key for the Sambanova API.

  • LOG_DIR: The directory to save the logs to, eg, when using SaverSchema <: AbstractTracerSchema. Defaults to joinpath(pwd(), "log"). Refer to ?SaverSchema for more information on how it works and examples.

  • XAI_API_KEY: The API key for the XAI API. Get your key from here.

Preferences.jl takes priority over ENV variables, so if you set a preference, it will take precedence over the ENV variable.

WARNING: NEVER EVER sync your LocalPreferences.toml file! It contains your API key and other sensitive information!!!

# PromptingTools.RESERVED_KWARGS (Constant)

The following keywords are reserved for internal use in the ai* functions and cannot be used as placeholders in the messages.

# PromptingTools.AICode (Type)

```julia
AICode(code::AbstractString; auto_eval::Bool=true, safe_eval::Bool=false,
    skip_unsafe::Bool=false, capture_stdout::Bool=true, verbose::Bool=false,
    prefix::AbstractString="", suffix::AbstractString="", remove_tests::Bool=false, execution_timeout::Int = 60)

AICode(msg::AIMessage; auto_eval::Bool=true, safe_eval::Bool=false,
    skip_unsafe::Bool=false, skip_invalid::Bool=false, capture_stdout::Bool=true,
    verbose::Bool=false, prefix::AbstractString="", suffix::AbstractString="", remove_tests::Bool=false, execution_timeout::Int = 60)
```

A mutable structure representing a code block (received from the AI model) with automatic parsing, execution, and output/error capturing capabilities.

Upon instantiation with a string, the AICode object automatically runs a code parser and executor (via PromptingTools.eval!()), capturing any standard output (stdout) or errors. This structure is useful for programmatically handling and evaluating Julia code snippets.

See also: PromptingTools.extract_code_blocks, PromptingTools.eval!

Workflow

  • Until cb::AICode has been evaluated, cb.success is set to nothing (and so are all other fields).

  • The text in cb.code is parsed (saved to cb.expression).

  • The parsed expression is evaluated.

  • Outputs of the evaluated expression are captured in cb.output.

  • Any stdout outputs (e.g., from println) are captured in cb.stdout.

  • If an error occurs during evaluation, it is saved in cb.error.

  • After successful evaluation without errors, cb.success is set to true. Otherwise, it is set to false and you can inspect the cb.error to understand why.

Properties

  • code::AbstractString: The raw string of the code to be parsed and executed.

  • expression: The parsed Julia expression (set after parsing code).

  • stdout: Captured standard output from the execution of the code.

  • output: The result of evaluating the code block.

  • success::Union{Nothing, Bool}: Indicates whether the code block executed successfully (true), unsuccessfully (false), or has yet to be evaluated (nothing).

  • error::Union{Nothing, Exception}: Any exception raised during the execution of the code block.

Keyword Arguments

  • auto_eval::Bool: If set to true, the code block is automatically parsed and evaluated upon instantiation. Defaults to true.

  • safe_eval::Bool: If set to true, the code block checks for package operations (e.g., installing new packages) and missing imports, and then evaluates the code inside a bespoke scratch module. This is to ensure that the evaluation does not alter any user-defined variables or the global state. Defaults to false.

  • skip_unsafe::Bool: If set to true, we skip any lines in the code block that are deemed unsafe (eg, Pkg operations). Defaults to false.

  • skip_invalid::Bool: If set to true, we skip code blocks that do not even parse. Defaults to false.

  • verbose::Bool: If set to true, we print out any lines that are skipped due to being unsafe. Defaults to false.

  • capture_stdout::Bool: If set to true, we capture any stdout outputs (eg, test failures) in cb.stdout. Defaults to true.

  • prefix::AbstractString: A string to be prepended to the code block before parsing and evaluation. Useful to add some additional code definition or necessary imports. Defaults to an empty string.

  • suffix::AbstractString: A string to be appended to the code block before parsing and evaluation. Useful to check that tests pass or that an example executes. Defaults to an empty string.

  • remove_tests::Bool: If set to true, we remove any @test or @testset macros from the code block before parsing and evaluation. Defaults to false.

  • execution_timeout::Int: The maximum time (in seconds) allowed for the code block to execute. Defaults to 60 seconds.

Methods

  • Base.isvalid(cb::AICode): Check if the code block has executed successfully. Returns true if cb.success == true.

Examples

```julia
code = AICode("println(\"Hello, World!\")") # Auto-parses and evaluates the code, capturing output and errors.
isvalid(code) # Output: true
code.stdout # Output: "Hello, World!\n"
```

We try to evaluate "safely" by default (eg, inside a custom module, to avoid changing user variables). You can avoid that with safe_eval=false:

```julia
code = AICode("new_variable = 1"; safe_eval=false)
isvalid(code) # Output: true
new_variable # Output: 1
```

You can also call AICode directly on an AIMessage, which will extract the Julia code blocks, concatenate them and evaluate them:

```julia
msg = aigenerate("In Julia, how do you create a vector of 10 random numbers?")
code = AICode(msg)
# Output: AICode(Success: True, Parsed: True, Evaluated: True, Error Caught: N/A, StdOut: True, Code: 2 Lines)

# show the code
code.code |> println
# Output:
# numbers = rand(10)
# numbers = rand(1:100, 10)

# or copy it to the clipboard
code.code |> clipboard

# or execute it in the current module (=Main)
eval(code.expression)
```

# PromptingTools.AIMessage (Type)

```julia
AIMessage
```

A message type for AI-generated text-based responses. Returned by aigenerate, aiclassify, and aiscan functions.

Fields

  • content::Union{AbstractString, Nothing}: The content of the message.

  • status::Union{Int, Nothing}: The status of the message from the API.

  • name::Union{Nothing, String}: The name of the role in the conversation.

  • tokens::Tuple{Int, Int}: The number of tokens used (prompt,completion).

  • elapsed::Float64: The time taken to generate the response in seconds.

  • cost::Union{Nothing, Float64}: The cost of the API call (calculated with information from MODEL_REGISTRY).

  • log_prob::Union{Nothing, Float64}: The log probability of the response.

  • extras::Union{Nothing, Dict{Symbol, Any}}: A dictionary for additional metadata that is not part of the key message fields. Try to limit to a small number of items and singletons to be serializable.

  • finish_reason::Union{Nothing, String}: The reason the response was finished.

  • run_id::Union{Nothing, Int}: The unique ID of the run.

  • sample_id::Union{Nothing, Int}: The unique ID of the sample (if multiple samples are generated, they will all have the same run_id).
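
For illustration, a typical read-out of these fields after a call (assumes a configured API key; exact numbers will vary):

```julia
msg = aigenerate("Say hi!")
msg.content        # the generated text
msg.tokens         # eg, (12, 9) -> (prompt, completion)
msg.cost           # estimated cost in USD, based on MODEL_REGISTRY
msg.finish_reason  # eg, "stop"
```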

# PromptingTools.AITemplate (Type)

```julia
AITemplate
```

AITemplate is a template for a conversation prompt. This type is merely a container for the template name, which is resolved into a set of messages (=prompt) by render.

Naming Convention

  • Template names should be in CamelCase

  • Follow the format <Persona>...<Variable>... where possible, eg, JudgeIsItTrue

    • Starting with the Persona (=System prompt), eg, Judge = persona is meant to judge some provided information

    • Variable to be filled in with context, eg, It = placeholder it

    • Ending with the variable name is helpful, eg, JuliaExpertTask for a persona to be an expert in Julia language and task is the placeholder name

  • Ideally, the template name should be self-explanatory, eg, JudgeIsItTrue = persona is meant to judge some provided information where it is true or false

Examples

Save time by re-using pre-made templates, just fill in the placeholders with the keyword arguments:

```julia
msg = aigenerate(:JuliaExpertAsk; ask = "How do I add packages?")
```

The above is equivalent to a more verbose version that explicitly uses the dispatch on AITemplate:

```julia
msg = aigenerate(AITemplate(:JuliaExpertAsk); ask = "How do I add packages?")
```

Find available templates with aitemplates:

```julia
tmps = aitemplates("JuliaExpertAsk")
# Will surface one specific template
# 1-element Vector{AITemplateMetadata}:
# PromptingTools.AITemplateMetadata
#   name: Symbol JuliaExpertAsk
#   description: String "For asking questions about Julia language. Placeholders: `ask`"
#   version: String "1"
#   wordcount: Int64 237
#   variables: Array{Symbol}((1,))
#   system_preview: String "You are a world-class Julia language programmer with the knowledge of the latest syntax. Your commun"
#   user_preview: String "# Question\n\n{{ask}}"
#   source: String ""
```

The above gives you a good idea of what the template is about, what placeholders are available, and how much it would cost to use it (=wordcount).

Search for all Julia-related templates:

```julia
tmps = aitemplates("Julia")
# 2-element Vector{AITemplateMetadata}... -> more to come later!
```

If you are on VSCode, you can leverage nice tabular display with vscodedisplay:

```julia
using DataFrames
tmps = aitemplates("Julia") |> DataFrame |> vscodedisplay
```

I have my selected template, how do I use it? Just use the "name" in aigenerate or aiclassify like you see in the first example!

You can inspect any template by "rendering" it (this is what the LLM will see):

```julia
julia> AITemplate(:JudgeIsItTrue) |> PromptingTools.render
```

See also: save_template, load_template, load_templates! for more advanced use cases (and the corresponding script in the examples/ folder).

# PromptingTools.AITemplateMetadata (Type)

Helper for easy searching and reviewing of templates. Defined on loading of each template.

# PromptingTools.AIToolRequest (Type)

```julia
AIToolRequest
```

A message type for AI-generated tool requests. Returned by aitools functions.

Fields

  • content::Union{AbstractString, Nothing}: The content of the message.

  • tool_calls::Vector{ToolMessage}: The vector of tool call requests.

  • name::Union{Nothing, String}: The name of the role in the conversation.

  • status::Union{Int, Nothing}: The status of the message from the API.

  • tokens::Tuple{Int, Int}: The number of tokens used (prompt,completion).

  • elapsed::Float64: The time taken to generate the response in seconds.

  • cost::Union{Nothing, Float64}: The cost of the API call (calculated with information from MODEL_REGISTRY).

  • log_prob::Union{Nothing, Float64}: The log probability of the response.

  • extras::Union{Nothing, Dict{Symbol, Any}}: A dictionary for additional metadata that is not part of the key message fields. Try to limit to a small number of items and singletons to be serializable.

  • finish_reason::Union{Nothing, String}: The reason the response was finished.

  • run_id::Union{Nothing, Int}: The unique ID of the run.

  • sample_id::Union{Nothing, Int}: The unique ID of the sample (if multiple samples are generated, they will all have the same run_id).

See ToolMessage for the fields of the tool call requests.

See also: tool_calls, execute_tool, parse_tool
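
A minimal sketch of requesting a tool call (the get_weather function is a hypothetical example; aitools and Tool are from this package):

```julia
get_weather(location::String) = "Sunny in $location"  # hypothetical tool

msg = aitools("What is the weather in Prague?"; tools = [Tool(get_weather)])
msg.tool_calls  # Vector{ToolMessage} with the requested call(s) and their args
```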

# PromptingTools.AbstractAnnotationMessage (Type)

```julia
AbstractAnnotationMessage
```

Messages that provide extra information without being sent to LLMs.

Required fields: content, tags, comment, run_id.

Note: comment is intended for human readers only and should never be used for automatic operations. run_id should be a unique identifier for the annotation, typically a random number.

# PromptingTools.AbstractPromptSchema (Type)

Defines different prompting styles based on the model training and fine-tuning.

# PromptingTools.AbstractTool (Type)

```julia
AbstractTool
```

Abstract type for all tool types.

Required fields:

  • name::String: The name of the tool.

  • parameters::Dict: The parameters of the tool.

  • description::Union{String, Nothing}: The description of the tool.

  • callable::Any: The callable object of the tool, eg, a type or a function.

# PromptingTools.AbstractToolError (Type)

```julia
AbstractToolError
```

Abstract type for all tool errors.

Available subtypes: ToolNotFoundError, ToolExecutionError, ToolGenericError (see below).

# PromptingTools.AnnotationMessage (Type)

```julia
AnnotationMessage
```

A message type for providing extra information in the conversation history without being sent to LLMs. These messages are filtered out during rendering to ensure they don't affect the LLM's context.

Used to bundle key information and documentation for colleagues and future reference together with the data.

Fields

  • content::T: The content of the annotation (can be used for inputs to airag etc.)

  • extras::Dict{Symbol,Any}: Additional metadata with symbol keys and any values

  • tags::Vector{Symbol}: Vector of tags for categorization (default: empty)

  • comment::String: Human-readable comment, never used for automatic operations (default: empty)

  • run_id::Union{Nothing,Int}: The unique ID of the annotation

Note: The comment field is intended for human readers only and should never be used for automatic operations.
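
A minimal sketch of adding an annotation to a conversation (assumes a keyword constructor matching the fields above):

```julia
note = AnnotationMessage("Experiment run with temperature=0.7";
    tags = [:experiment], comment = "For the team's review")
push!(conversation, note)  # it will be filtered out before the next LLM call
```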

# PromptingTools.AnthropicSchema (Type)

```julia
AnthropicSchema <: AbstractAnthropicSchema
```

AnthropicSchema is the default schema for Anthropic API models (eg, Claude). See more information here.

It uses the following conversation template:

[Dict(role="user",content="..."), Dict(role="assistant",content="...")]

system messages are provided as a keyword argument to the API call.

It's recommended to separate sections in your prompt with XML markup (e.g. <document> </document>). See here.

# PromptingTools.AzureOpenAISchema (Type)

```julia
AzureOpenAISchema
```

AzureOpenAISchema() allows the user to call the Azure OpenAI API.

Requires two environment variables to be set:

  • AZURE_OPENAI_API_KEY: Azure token

  • AZURE_OPENAI_HOST: Address of the Azure resource ("https://<resource>.openai.azure.com")
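
A minimal usage sketch, assuming both variables are set (the model name matching your Azure deployment is an assumption of this example):

```julia
msg = aigenerate(AzureOpenAISchema(), "Say hi!"; model = "my-gpt4o-deployment")
```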

# PromptingTools.CerebrasOpenAISchema (Type)

```julia
CerebrasOpenAISchema
```

Schema to call the Cerebras API.

Requires one environment variable to be set:

  • CEREBRAS_API_KEY: Your API key

# PromptingTools.ChatMLSchema (Type)

ChatMLSchema is used by many open-source chatbots, by OpenAI models (under the hood), and by several models and interfaces (eg, Ollama, vLLM).

You can explore it on tiktokenizer

It uses the following conversation structure:

```
<|im_start|>system
...<|im_end|>
<|im_start|>user
...<|im_end|>
<|im_start|>assistant
...<|im_end|>
```

# PromptingTools.ConversationMemory (Type)

```julia
ConversationMemory
```

A structured container for managing conversation history. It has only one field, :conversation, which is a vector of AbstractMessages. It's built to support intelligent truncation and caching behavior (get_last).

You can also use it as a functor to have extended conversations (easier than constantly passing the conversation kwarg).

Examples

Basic usage

```julia
mem = ConversationMemory()
push!(mem, SystemMessage("You are a helpful assistant"))
push!(mem, UserMessage("Hello!"))
push!(mem, AIMessage("Hi there!"))

# or simply
mem = ConversationMemory(conv)
```

Check memory stats

```julia
println(mem)  # ConversationMemory(2 messages) - doesn't count system message
@show length(mem)  # 3 - counts all messages
@show last_message(mem)  # gets last message
@show last_output(mem)   # gets last content
```

Get recent messages with different options (System message, User message, ... + the most recent)

```julia
recent = get_last(mem, 5)  # get last 5 messages (including system)
recent = get_last(mem, 20, batch_size=10)  # align to batches of 10 for caching
recent = get_last(mem, 5, explain=true)    # adds truncation explanation
recent = get_last(mem, 5, verbose=true)    # prints truncation info
```

Append multiple messages at once (with deduplication to keep the memory complete)

```julia
msgs = [
    UserMessage("How are you?"),
    AIMessage("I'm good!"; run_id=1),
    UserMessage("Great!"),
    AIMessage("Indeed!"; run_id=2)
]
append!(mem, msgs)  # Will only append new messages based on run_ids etc.
```

Use for AI conversations (easier to manage conversations)

```julia
response = mem("Tell me a joke"; model="gpt4o")  # Automatically manages context
response = mem("Another one"; last=3, model="gpt4o")  # Use only last 3 messages (uses `get_last`)

# Direct generation from the memory
result = aigenerate(mem)  # Generate using full context
```

# PromptingTools.ConversationMemory (Method)

```julia
(mem::ConversationMemory)(prompt::AbstractString; last::Union{Nothing,Integer}=nothing, kwargs...)
```

Functor interface for direct generation using the conversation memory. Optionally, specify the number of last messages to include in the context (uses get_last).

# PromptingTools.CustomOpenAISchema (Type)

```julia
CustomOpenAISchema
```

CustomOpenAISchema() allows the user to call any OpenAI-compatible API.

All the user needs to do is pass this schema as the first argument and provide the base URL of the API to call (api_kwargs.url).

Example

Assumes that we have a local server running at http://127.0.0.1:8081:

```julia
api_key = "..."
prompt = "Say hi!"
msg = aigenerate(CustomOpenAISchema(), prompt; model="my_model", api_key, api_kwargs=(; url="http://127.0.0.1:8081"))
```

# PromptingTools.DataMessage (Type)

```julia
DataMessage
```

A message type for AI-generated data-based responses, ie, content other than text. Returned by the aiextract and aiembed functions.

Fields

  • content::Union{AbstractString, Nothing}: The content of the message.

  • status::Union{Int, Nothing}: The status of the message from the API.

  • tokens::Tuple{Int, Int}: The number of tokens used (prompt,completion).

  • elapsed::Float64: The time taken to generate the response in seconds.

  • cost::Union{Nothing, Float64}: The cost of the API call (calculated with information from MODEL_REGISTRY).

  • log_prob::Union{Nothing, Float64}: The log probability of the response.

  • extras::Union{Nothing, Dict{Symbol, Any}}: A dictionary for additional metadata that is not part of the key message fields. Try to limit to a small number of items and singletons to be serializable.

  • finish_reason::Union{Nothing, String}: The reason the response was finished.

  • run_id::Union{Nothing, Int}: The unique ID of the run.

  • sample_id::Union{Nothing, Int}: The unique ID of the sample (if multiple samples are generated, they will all have the same run_id).

# PromptingTools.DatabricksOpenAISchema (Type)

```julia
DatabricksOpenAISchema
```

DatabricksOpenAISchema() allows the user to call the Databricks Foundation Model API.

Requires two environment variables to be set:

  • DATABRICKS_API_KEY: Databricks token

  • DATABRICKS_HOST: Address of the Databricks workspace (https://<workspace_host>.databricks.com)

# PromptingTools.DeepSeekOpenAISchema (Type)

```julia
DeepSeekOpenAISchema
```

Schema to call the DeepSeek API.

Requires one environment variable to be set:

  • DEEPSEEK_API_KEY: Your API key (often starts with "sk-...")

# PromptingTools.FireworksOpenAISchema (Type)

```julia
FireworksOpenAISchema
```

Schema to call the Fireworks.ai API.

Requires one environment variable to be set:

  • FIREWORKS_API_KEY: Your API key

# PromptingTools.GoogleOpenAISchema (Type)

```julia
GoogleOpenAISchema
```

Schema to call Google's Gemini API using its OpenAI compatibility mode.

Requires one environment variable to be set:

  • GOOGLE_API_KEY: Your API key

The base URL for the API is "https://generativelanguage.googleapis.com/v1beta"

Warning: Token counting and cost counting have not yet been implemented by Google, so you will not get these metrics. If you need them, use the native GoogleSchema with the GoogleGenAI.jl library.

# PromptingTools.GoogleSchema (Type)

Calls Google's Gemini API. See more information here. It's available only for some regions.

# PromptingTools.GroqOpenAISchema (Type)

```julia
GroqOpenAISchema
```

Schema to call the groq.com API.

Requires one environment variable to be set:

  • GROQ_API_KEY: Your API key (often starts with "gsk_...")

# PromptingTools.ItemsExtract (Type)

Extract zero, one or more specified items from the provided data.

# PromptingTools.LocalServerOpenAISchema (Type)

```julia
LocalServerOpenAISchema
```

Designed to be used with local servers. It's automatically called with model alias "local" (see MODEL_REGISTRY).

This schema is a flavor of CustomOpenAISchema with the url key preset by the global preference key LOCAL_SERVER. See ?PREFERENCES for more details on how to change it. It assumes that the server follows OpenAI API conventions (eg, POST /v1/chat/completions).

Note: Llama.cpp (and hence Llama.jl built on top of it) does NOT support the embeddings endpoint! You'll get an address error.

Example

Assumes that we have a local server running at http://127.0.0.1:10897/v1 (port and address used by Llama.jl, "v1" at the end is needed for OpenAI endpoint compatibility):

Three ways to call it:

```julia
# Use @ai_str with "local" alias
ai"Say hi!"local

# model="local"
aigenerate("Say hi!"; model="local")

# Or set schema explicitly
const PT = PromptingTools
msg = aigenerate(PT.LocalServerOpenAISchema(), "Say hi!")
```

How to start an LLM local server? You can use the run_server function from Llama.jl (in a separate Julia session):

```julia
using Llama
model = "...path..." # see Llama.jl README how to download one
run_server(; model)
```

To change the default port and address:

```julia
# For a permanent change, set the preference:
using Preferences
set_preferences!("LOCAL_SERVER"=>"http://127.0.0.1:10897/v1")

# Or if it's a temporary fix, just change the variable `LOCAL_SERVER`:
const PT = PromptingTools
PT.LOCAL_SERVER = "http://127.0.0.1:10897/v1"
```

# PromptingTools.MaybeExtract (Type)

Extract a result from the provided data, if any, otherwise set the error and message fields.

Arguments

  • error::Bool: true if no result could be extracted, false otherwise.

  • message::String: Only present if no result is found, should be short and concise.

# PromptingTools.MistralOpenAISchema (Type)

```julia
MistralOpenAISchema
```

MistralOpenAISchema() allows the user to call the MistralAI API, known for the mistral and mixtral models.

It's a flavor of CustomOpenAISchema() with a url preset to https://api.mistral.ai.

Most models have been registered, so you don't even have to specify the schema.

Example

Let's call mistral-tiny model:

```julia
api_key = "..." # can be set via ENV["MISTRAL_API_KEY"] or via our preference system
msg = aigenerate("Say hi!"; model="mistral_tiny", api_key)
```

See ?PREFERENCES for more details on how to set your API key permanently.

# PromptingTools.ModelSpec (Type)

```julia
ModelSpec
```

A struct that contains information about a model, such as its name, schema, cost per token, etc.

Fields

  • name::String: The name of the model. This is the name that will be used to refer to the model in the ai* functions.

  • schema::AbstractPromptSchema: The schema of the model. This is the schema that will be used to generate prompts for the model, eg, :OpenAISchema.

  • cost_of_token_prompt::Float64: The cost of 1 token in the prompt for this model. This is used to calculate the cost of a prompt. Note: It is often provided online as cost per 1000 tokens, so make sure to convert it correctly!

  • cost_of_token_generation::Float64: The cost of 1 token generated by this model. This is used to calculate the cost of a generation. Note: It is often provided online as cost per 1000 tokens, so make sure to convert it correctly!

  • description::String: A description of the model. This is used to provide more information about the model when it is queried.

Example

```julia
spec = ModelSpec("gpt-3.5-turbo",
    OpenAISchema(),
    0.0015,
    0.002,
    "GPT-3.5 Turbo is a 175B parameter model and a common default on the OpenAI API.")

# register it
PromptingTools.register_model!(spec)
```

But you can also register any model directly via keyword arguments:

```julia
PromptingTools.register_model!(
    name = "gpt-3.5-turbo",
    schema = OpenAISchema(),
    cost_of_token_prompt = 0.0015,
    cost_of_token_generation = 0.002,
    description = "GPT-3.5 Turbo is a 175B parameter model and a common default on the OpenAI API.")
```

# PromptingTools.NoSchema (Type)

Schema that keeps messages (<:AbstractMessage) and does not transform them for any specific model. It is used by the first pass of the prompt rendering system (see ?render).
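
For illustration, a minimal sketch of the first-pass rendering with placeholder substitution (assumes the standard {{variable}} templating shown elsewhere in this reference):

```julia
const PT = PromptingTools
messages = [PT.SystemMessage("Act as {{persona}}."), PT.UserMessage("Say hi!")]
PT.render(PT.NoSchema(), messages; persona = "a poet")  # placeholders filled, no model-specific format
```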

# PromptingTools.OllamaManagedSchema (Type)

Ollama by default manages different models and their associated prompt schemas when you pass system_prompt and prompt fields to the API.

Warning: It works only for 1 system message and 1 user message, so anything more than that has to be rejected.

If you need to pass more messages or a longer conversational history, you can define the model-specific schema directly and pass your Ollama requests with raw=true, which disables any templating and schema management by Ollama.

# PromptingTools.OllamaSchema (Type)

OllamaSchema is the default schema for Ollama models.

It uses the following conversation template:

[Dict(role="system",content="..."),Dict(role="user",content="..."),Dict(role="assistant",content="...")]

It's very similar to OpenAISchema, but it appends images differently.

# PromptingTools.OpenAISchema (Type)

OpenAISchema is the default schema for OpenAI models.

It uses the following conversation template:

[Dict(role="system",content="..."),Dict(role="user",content="..."),Dict(role="assistant",content="...")]

It's recommended to separate sections in your prompt with markdown headers (e.g., `## Answer`).

# PromptingTools.OpenRouterOpenAISchema (Type)

```julia
OpenRouterOpenAISchema
```

Schema to call the OpenRouter API.

Requires one environment variable to be set:

  • OPENROUTER_API_KEY: Your API key

# PromptingTools.SambaNovaOpenAISchema (Type)

```julia
SambaNovaOpenAISchema
```

Schema to call the SambaNova API.

Requires one environment variable to be set:

  • SAMBANOVA_API_KEY: Your API key

# PromptingTools.SaverSchema (Type)

```julia
SaverSchema <: AbstractTracerSchema
```

SaverSchema is a schema that automatically saves the conversation to the disk. It's useful for debugging and for persistent logging.

It can be composed with any other schema, eg, TracerSchema to save additional metadata.

Set environment variable LOG_DIR to the directory where you want to save the conversation (see ?PREFERENCES). Conversations are named by the hash of the first message in the conversation to naturally group subsequent conversations together.

If you need to provide the logging directory or the file name dynamically, you can provide the following arguments to tracer_kwargs:

  • log_dir - used as the directory to save the log into when provided. Defaults to LOG_DIR if not provided.

  • log_file_path - used as the file name to save the log into when provided. This value overrules the log_dir and LOG_DIR if provided.

To use it automatically, re-register the models you use with the schema wrapped in SaverSchema.

See also: meta, unwrap, TracerSchema, initialize_tracer, finalize_tracer

Example

```julia
using PromptingTools: TracerSchema, OpenAISchema, SaverSchema
# This schema will first trace the metadata (change to TraceMessage) and then save the conversation to the disk

wrap_schema = OpenAISchema() |> TracerSchema |> SaverSchema
conv = aigenerate(wrap_schema,:BlankSystemUser; system="You're a French-speaking assistant!",
    user="Say hi!", model="gpt-4", api_kwargs=(;temperature=0.1), return_all=true)

# conv is a vector of messages that will be saved to a JSON together with metadata about the template and api_kwargs
```

If you wanted to enable this automatically for models you use, you can do it like this:

```julia
PT.register_model!(; name= "gpt-3.5-turbo", schema=OpenAISchema() |> TracerSchema |> SaverSchema)
```

Any subsequent calls with model="gpt-3.5-turbo" will automatically capture metadata and save the conversation to the disk.

To provide the logging file path explicitly, use the tracer_kwargs:

```julia
conv = aigenerate(wrap_schema,:BlankSystemUser; system="You're a French-speaking assistant!",
    user="Say hi!", model="gpt-4", api_kwargs=(;temperature=0.1), return_all=true,
    tracer_kwargs=(; log_file_path="my_logs/my_log.json"))
```

# PromptingTools.ShareGPTSchema (Type)

```julia
ShareGPTSchema <: AbstractShareGPTSchema
```

Frequently used schema for finetuning LLMs. Conversations are recorded as a vector of dicts with keys `from` and `value` (similar to OpenAI).

# PromptingTools.TestEchoAnthropicSchema (Type)

Echoes the user's input back to them. Used for testing the implementation.

# PromptingTools.TestEchoGoogleSchema (Type)

Echoes the user's input back to them. Used for testing the implementation.

# PromptingTools.TestEchoOllamaManagedSchema (Type)

Echoes the user's input back to them. Used for testing the implementation.

# PromptingTools.TestEchoOllamaSchema (Type)

Echoes the user's input back to them. Used for testing the implementation.

# PromptingTools.TestEchoOpenAISchema (Type)

Echoes the user's input back to them. Used for testing the implementation.

# PromptingTools.TogetherOpenAISchema (Type)

```julia
TogetherOpenAISchema
```

Schema to call the Together.ai API.

Requires one environment variable to be set:

  • TOGETHER_API_KEY: Your API key

# PromptingTools.Tool (Type)

```julia
Tool
```

A tool that can be sent to an LLM for execution ("function calling").

Arguments

  • name::String: The name of the tool.

  • parameters::Dict: The parameters of the tool.

  • description::Union{String, Nothing}: The description of the tool.

  • strict::Union{Bool, Nothing}: Whether to enforce strict mode for the tool.

  • callable::Any: The callable object of the tool, eg, a type or a function.

See also: AbstractTool, tool_call_signature

# PromptingTools.Tool (Method)

```julia
Tool(callable::Union{Function, Type, Method}; kwargs...)
```

Create a Tool from a callable object (function, type, or method).

Arguments

  • callable::Union{Function, Type, Method}: The callable object to convert to a tool.

Returns

  • Tool: A tool object that can be used for function calling.

Examples

```julia
# Create a tool from a function
tool = Tool(my_function)

# Create a tool from a type
tool = Tool(MyStruct)
```

# PromptingTools.ToolExecutionError (Type)

Error type for when a tool execution fails. It should contain the error message from the tool execution.

# PromptingTools.ToolGenericError (Type)

Error type for when a tool execution fails with a generic error. It should contain the detailed error message.

# PromptingTools.ToolMessage (Type)

```julia
ToolMessage
```

A message type for tool calls.

It represents both the request (fields args, name) and the response (field content).

Fields

  • content::Any: The content of the message.

  • req_id::Union{Nothing, Int}: The unique ID of the request.

  • tool_call_id::String: The unique ID of the tool call.

  • raw::AbstractString: The raw JSON string of the tool call request.

  • args::Union{Nothing, Dict{Symbol, Any}}: The arguments of the tool call request.

  • name::Union{Nothing, String}: The name of the tool call request.

# PromptingTools.ToolNotFoundError (Type)

Error type for when a tool is not found. It should contain the tool name that was not found.

# PromptingTools.ToolRef (Type)

```julia
ToolRef(ref::Symbol, callable::Any)
```

Represents a reference to a tool with a symbolic name and a callable object (to call during tool execution). It can be rendered with a render method and a prompt schema.

Arguments

  • ref::Symbol: The symbolic name of the tool.

  • callable::Any: The callable object of the tool, eg, a type or a function.

  • extras::Dict{String, Any}: Additional parameters to be included in the tool signature.

Examples

```julia
# Define a tool with a symbolic name and a callable object
tool = ToolRef(;ref=:computer, callable=println)

# Show the rendered tool signature
PT.render(PT.AnthropicSchema(), tool)
```

# PromptingTools.TracerMessage (Type)

```julia
TracerMessage{T <: Union{AbstractChatMessage, AbstractDataMessage}} <: AbstractTracerMessage
```

A mutable wrapper message designed for tracing the flow of messages through the system, allowing for iterative updates and providing additional metadata for observability.

Fields

  • object::T: The original message being traced, which can be either a chat or data message.

  • from::Union{Nothing, Symbol}: The identifier of the sender of the message.

  • to::Union{Nothing, Symbol}: The identifier of the intended recipient of the message.

  • viewers::Vector{Symbol}: A list of identifiers for entities that have access to view the message, in addition to the sender and recipient.

  • time_received::DateTime: The timestamp when the message was received by the tracing system.

  • time_sent::Union{Nothing, DateTime}: The timestamp when the message was originally sent, if available.

  • model::String: The name of the model that generated the message. Defaults to empty.

  • parent_id::Symbol: An identifier for the job or process that the message is associated with. Higher-level tracing ID.

  • thread_id::Symbol: An identifier for the thread (series of messages for one model/agent) or execution context within the job where the message originated. It should be the same for messages in the same thread.

  • meta::Union{Nothing, Dict{Symbol, Any}}: A dictionary for additional metadata that is not part of the message itself. Try to limit to a small number of items and singletons to be serializable.

  • _type::Symbol: A fixed symbol identifying the type of the message as :eventmessage, used for type discrimination.

This structure is particularly useful for debugging, monitoring, and auditing the flow of messages in systems that involve complex interactions or asynchronous processing.

All fields are optional besides the object.

Useful methods: pprint (pretty-prints the underlying message), unwrap (to get the object out of the tracer), align_tracer! (to set all shared IDs in a vector of tracers to the same value), istracermessage (to check if a given message is an AbstractTracerMessage).

Example

```julia
wrap_schema = PT.TracerSchema(PT.OpenAISchema())
msg = aigenerate(wrap_schema, "Say hi!"; model = "gpt4t")
msg # isa TracerMessage
msg.content # access content as if it were the message
```

# PromptingTools.TracerMessageLike (Type)

```julia
TracerMessageLike{T <: Any} <: AbstractTracer
```

A mutable structure designed for general-purpose tracing within the system, capable of handling any type of object that is part of the AI Conversation. It provides a flexible way to track and annotate objects as they move through different parts of the system, facilitating debugging, monitoring, and auditing.

Fields

  • object::T: The original object being traced.

  • from::Union{Nothing, Symbol}: The identifier of the sender or origin of the object.

  • to::Union{Nothing, Symbol}: The identifier of the intended recipient or destination of the object.

  • viewers::Vector{Symbol}: A list of identifiers for entities that have access to view the object, in addition to the sender and recipient.

  • time_received::DateTime: The timestamp when the object was received by the tracing system.

  • time_sent::Union{Nothing, DateTime}: The timestamp when the object was originally sent, if available.

  • model::String: The name of the model or process that generated or is associated with the object. Defaults to empty.

  • parent_id::Symbol: An identifier for the job or process that the object is associated with. Higher-level tracing ID.

  • thread_id::Symbol: An identifier for the thread or execution context (sub-task, sub-process) within the job where the object originated. It should be the same for objects in the same thread.

  • run_id::Union{Nothing, Int}: A unique identifier for the run or instance of the process (ie, a single call to the LLM) that generated the object. Defaults to a random integer.

  • meta::Union{Nothing, Dict{Symbol, Any}}: A dictionary for additional metadata that is not part of the object itself. Try to limit to a small number of items and singletons to be serializable.

  • _type::Symbol: A fixed symbol identifying the type of the tracer as :tracermessage, used for type discrimination.

This structure is particularly useful for systems that involve complex interactions or asynchronous processing, where tracking the flow and transformation of objects is crucial.

All fields are optional besides the object.

# PromptingTools.TracerSchema (Type)

```julia
TracerSchema <: AbstractTracerSchema
```

A schema designed to wrap another schema, enabling pre- and post-execution callbacks for tracing and additional functionalities. This type is specifically utilized within the TracerMessage type to trace the execution flow, facilitating observability and debugging in complex conversational AI systems.

The TracerSchema acts as a middleware, allowing developers to insert custom logic before and after the execution of the primary schema's functionality. This can include logging, performance measurement, or any other form of tracing required to understand or improve the execution flow.

TracerSchema automatically wraps messages in TracerMessage type, which has several important fields, eg,

  • object: the original message - unwrap with utility unwrap

  • meta: a dictionary with metadata about the tracing process (eg, prompt templates, LLM API kwargs) - extract with utility meta

  • parent_id: an identifier for the overall job / high-level conversation with the user where the current conversation thread originated. It should be the same for objects in the same thread.

  • thread_id: an identifier for the current thread or execution context (sub-task, sub-process, CURRENT CONVERSATION or vector of messages) within the broader parent task. It should be the same for objects in the same thread.

See also: meta, unwrap, SaverSchema, initialize_tracer, finalize_tracer

Example

```julia
wrap_schema = TracerSchema(OpenAISchema())
msg = aigenerate(wrap_schema, "Say hi!"; model="gpt-4")
# output type should be TracerMessage
msg isa TracerMessage
```

You can define your own tracer schema and the corresponding methods: initialize_tracer, finalize_tracer. See src/llm_tracer.jl

# PromptingTools.UserMessage (Type)

```julia
UserMessage
```

A message type for user-generated text-based responses. Consumed by ai* functions to generate responses.

Fields

  • content::T: The content of the message.

  • variables::Vector{Symbol}: The variables in the message.

  • name::Union{Nothing, String}: The name of the role in the conversation.
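
For illustration, a minimal sketch showing how placeholder variables are picked up (uses the {{variable}} templating convention shown in AITemplate above):

```julia
msg = UserMessage("Hello {{name}}!")
msg.variables  # [:name], extracted from the {{}} placeholders
```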

# PromptingTools.UserMessageWithImages (Type)

```julia
UserMessageWithImages
```

A message type for user-generated text-based responses with images. Consumed by ai* functions to generate responses.

Fields

  • content::T: The content of the message.

  • image_url::Vector{String}: The URLs of the images.

  • variables::Vector{Symbol}: The variables in the message.

  • name::Union{Nothing, String}: The name of the role in the conversation.

# PromptingTools.UserMessageWithImages (Method)

Construct UserMessageWithImages with 1 or more images. Images can be either URLs or local paths.
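
A minimal sketch (the image_url keyword mirrors the field above; the exact keyword arguments are an assumption of this example):

```julia
msg = UserMessageWithImages("Describe this image";
    image_url = ["https://example.com/cat.png"])
```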

# PromptingTools.X123 (Type)

With docstring

# PromptingTools.XAIOpenAISchema (Type)

```julia
XAIOpenAISchema
```

Schema to call the XAI API. It follows OpenAI API conventions.

Get your API key from here.

Requires one environment variable to be set:

  • XAI_API_KEY: Your API key

# Base.append! (Method)

```julia
append!(mem::ConversationMemory, msgs::Vector{<:AbstractMessage})
```

Smart append that handles duplicate messages based on run IDs. Only appends messages that are newer than the latest matching message in memory.

# Base.length (Method)

```julia
length(mem::ConversationMemory)
```

Return the number of messages. All of them.

# Base.push! (Method)

```julia
push!(mem::ConversationMemory, msg::AbstractMessage)
```

Add a single message to the conversation memory.

# Base.show (Method)

```julia
show(io::IO, mem::ConversationMemory)
```

Display the number of non-system/non-annotation messages in the conversation memory.

# OpenAI.create_chat (Method)

```julia
OpenAI.create_chat(schema::CustomOpenAISchema,
    api_key::AbstractString,
    model::AbstractString,
    conversation;
    http_kwargs::NamedTuple = NamedTuple(),
    streamcallback::Any = nothing,
    url::String = "http://localhost:8080",
    kwargs...)
```

Dispatch to the OpenAI.create_chat function, for any OpenAI-compatible API.

It expects the url keyword argument. Provide it to the aigenerate function via api_kwargs=(; url="my-url").

It will forward your query to the "chat/completions" endpoint of the base URL that you provided (=url).

# OpenAI.create_chat (Method)

```julia
OpenAI.create_chat(schema::LocalServerOpenAISchema,
    api_key::AbstractString,
    model::AbstractString,
    conversation;
    url::String = "http://localhost:8080",
    kwargs...)
```

Dispatch to the OpenAI.create_chat function, but with the LocalServer API parameters, ie, it defaults to the url specified by the LOCAL_SERVER preference. See ?PREFERENCES.

# OpenAI.create_chat (Method)

```julia
OpenAI.create_chat(schema::MistralOpenAISchema,
    api_key::AbstractString,
    model::AbstractString,
    conversation;
    url::String = "https://api.mistral.ai/v1",
    kwargs...)
```

Dispatch to the OpenAI.create_chat function, but with the MistralAI API parameters.

It tries to access the MISTRAL_API_KEY ENV variable, but you can also provide it via the api_key keyword argument.

# PromptingTools.aiclassify (Method)

```julia
aiclassify(tracer_schema::AbstractTracerSchema, prompt::ALLOWED_PROMPT_TYPE;
    tracer_kwargs = NamedTuple(), model = "", kwargs...)
```

Wraps the normal aiclassify call in a tracing/callback system. Use tracer_kwargs to provide any information necessary to the tracer/callback system only (eg, parent_id, thread_id, run_id).

Logic:

  • calls initialize_tracer

  • calls aiclassify (with the tracer_schema.schema)

  • calls finalize_tracer

# PromptingTools.aiclassify (Method)

```julia
aiclassify(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE;
    choices::AbstractVector{T} = ["true", "false", "unknown"],
    model::AbstractString = MODEL_CHAT,
    api_kwargs::NamedTuple = NamedTuple(),
    token_ids_map::Union{Nothing, Dict{<:AbstractString, <:Integer}} = nothing,
    kwargs...) where {T <: Union{AbstractString, Tuple{<:AbstractString, <:AbstractString}}}
```

Classifies the given prompt/statement into an arbitrary list of choices, provided either as plain choices (a vector of strings) or as choices with descriptions (a vector of tuples, ie, ("choice", "description")).

It's a quick and easy option for "routing" and similar use cases, as it exploits the logit bias trick and outputs only 1 token.

!!! Note: The prompt/AITemplate must have a placeholder choices (ie, {{choices}}) that will be replaced with the encoded choices.

Choices are rewritten into an enumerated list and mapped to a few known OpenAI tokens (maximum of 40 choices supported). The mapping of token IDs for GPT3.5/4 is saved in the variable OPENAI_TOKEN_IDS_GPT35_GPT4.

It uses the logit bias trick and limits the output to 1 token to force the model to output only the available choices (by default, true/false/unknown). Credit for the idea goes to AAAzzam.

Arguments

  • prompt_schema::AbstractOpenAISchema: The schema for the prompt.

  • prompt: The prompt/statement to classify if it's a String. If it's a Symbol, it is expanded as a template via render(schema,template). Eg, templates :JudgeIsItTrue or :InputClassifier

  • choices::AbstractVector{T}: The choices to be classified into. It can be a vector of strings or a vector of tuples, where the first element is the choice and the second is the description.

  • model::AbstractString = MODEL_CHAT: The model to use for classification. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • api_kwargs::NamedTuple = NamedTuple(): Additional keyword arguments for the API call.

  • token_ids_map::Union{Nothing, Dict{<:AbstractString, <:Integer}} = nothing: A dictionary mapping custom token IDs to their corresponding integer values. If nothing, it will use the default token IDs for the given model.

  • kwargs: Additional keyword arguments for the prompt template.

Example

Given a user input, pick one of the two provided categories:

```julia
choices = ["animal", "plant"]
input = "Palm tree"
aiclassify(:InputClassifier; choices, input)
```

Choices with descriptions provided as tuples:

```julia
choices = [("A", "any animal or creature"), ("P", "any plant or tree"), ("O", "anything else")]

# try the below inputs:
input = "spider" # -> returns "A" for any animal or creature
input = "daffodil" # -> returns "P" for any plant or tree
input = "castle" # -> returns "O" for everything else
aiclassify(:InputClassifier; choices, input)
```

You could also use this function for routing questions to different endpoints (notice the different template and placeholder used), eg,

```julia
choices = [("A", "any question about animal or creature"), ("P", "any question about plant or tree"), ("O", "anything else")]
question = "how many spiders are there?"
msg = aiclassify(:QuestionRouter; choices, question)
# "A"
```

You can still use a simple true/false classification:

```julia
aiclassify("Is two plus two four?") # true
aiclassify("Is two plus three a vegetable on Mars?") # false
```

aiclassify returns only true/false/unknown. It's easy to get the proper Bool output type out with tryparse, eg,

```julia
tryparse(Bool, aiclassify("Is two plus two four?")) isa Bool # true
```

Output of type Nothing marks that the model couldn't classify the statement as true/false.

Ideally, we would like to re-use some helpful system prompt to get more accurate responses. For this reason we have templates, eg, :JudgeIsItTrue. By specifying the template, we can provide our statement as the expected variable (it in this case). See that the model now correctly classifies the statement as "unknown".

```julia
aiclassify(:JudgeIsItTrue; it = "Is two plus three a vegetable on Mars?") # unknown
```

For better results, use higher quality models like gpt4, eg,

```julia
aiclassify(:JudgeIsItTrue;
    it = "If I had two apples and I got three more, I have five apples now.",
    model = "gpt4") # true
```

# PromptingTools.aiembed (Function)

```julia
aiembed(tracer_schema::AbstractTracerSchema,
    doc_or_docs::Union{AbstractString, AbstractVector{<:AbstractString}}, postprocess::Function = identity;
    tracer_kwargs = NamedTuple(), model = "", kwargs...)
```

Wraps the normal aiembed call in a tracing/callback system. Use tracer_kwargs to provide any information necessary to the tracer/callback system only (eg, parent_id, thread_id, run_id).

Logic:

  • calls initialize_tracer

  • calls aiembed (with the tracer_schema.schema)

  • calls finalize_tracer

# PromptingTools.aiembed (Method)

```julia
aiembed(prompt_schema::AbstractOllamaManagedSchema,
        doc_or_docs::Union{AbstractString, AbstractVector{<:AbstractString}},
        postprocess::F = identity;
        verbose::Bool = true,
        api_key::String = "",
        model::String = MODEL_EMBEDDING,
        http_kwargs::NamedTuple = (retry_non_idempotent = true,
                                   retries = 5,
                                   readtimeout = 120),
        api_kwargs::NamedTuple = NamedTuple(),
        kwargs...) where {F <: Function}
```

The aiembed function generates embeddings for the given input using a specified model and returns a message object containing the embeddings, status, token count, and elapsed time.

Arguments

  • prompt_schema::AbstractOllamaManagedSchema: The schema for the prompt.

  • doc_or_docs::Union{AbstractString, AbstractVector{<:AbstractString}}: The document or list of documents to generate embeddings for. The list of documents is processed sequentially, so users should consider implementing an async version with Threads.@spawn

  • postprocess::F: The post-processing function to apply to each embedding. Defaults to the identity function, but could be LinearAlgebra.normalize.

  • verbose::Bool: A flag indicating whether to print verbose information. Defaults to true.

  • api_key::String: The API key to use for the OpenAI API. Defaults to "".

  • model::String: The model to use for generating embeddings. Defaults to MODEL_EMBEDDING.

  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to (retry_non_idempotent = true, retries = 5, readtimeout = 120).

  • api_kwargs::NamedTuple: Additional keyword arguments for the Ollama API. Defaults to an empty NamedTuple.

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

  • msg: A DataMessage object containing the embeddings, status, token count, and elapsed time.

Note: The Ollama API currently does not return the token count, so it's set to (0, 0).

Example

julia
const PT = PromptingTools
schema = PT.OllamaManagedSchema()

msg = aiembed(schema, "Hello World"; model="openhermes2.5-mistral")
msg.content # 4096-element JSON3.Array{Float64...

We can embed multiple strings at once; they will be horizontally concatenated (hcat) into a matrix (ie, each column corresponds to one string):

julia
const PT = PromptingTools
schema = PT.OllamaManagedSchema()

msg = aiembed(schema, ["Hello World", "How are you?"]; model="openhermes2.5-mistral")
msg.content # 4096×2 Matrix{Float64}:

If you plan to calculate the cosine distance between embeddings, you can normalize them first:

julia
const PT = PromptingTools
using LinearAlgebra
schema = PT.OllamaManagedSchema()

msg = aiembed(schema, ["embed me", "and me too"], LinearAlgebra.normalize; model="openhermes2.5-mistral")

# calculate cosine distance between the two normalized embeddings as a simple dot product
msg.content' * msg.content[:, 1] # [1.0, 0.34]

Similarly, you can use the postprocess argument to materialize the data from JSON3.Object by using postprocess = copy:

julia
const PT = PromptingTools
schema = PT.OllamaManagedSchema()

msg = aiembed(schema, "Hello World", copy; model="openhermes2.5-mistral")
msg.content # 4096-element Vector{Float64}
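
If you have many documents, you could parallelize the requests. A hypothetical sketch (assumes your local Ollama server can handle concurrent requests; the variable names are ours, not part of the API):

julia
const PT = PromptingTools
schema = PT.OllamaManagedSchema()

docs = ["doc 1", "doc 2", "doc 3"]
# One task per document; fetch the results and concatenate column-wise
tasks = [Threads.@spawn aiembed(schema, doc; model="openhermes2.5-mistral") for doc in docs]
embeddings = hcat([fetch(t).content for t in tasks]...)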

source


# PromptingTools.aiembedMethod.
julia
aiembed(prompt_schema::AbstractOpenAISchema,
        doc_or_docs::Union{AbstractString, AbstractVector{<:AbstractString}},
        postprocess::F = identity;
        verbose::Bool = true,
        api_key::String = OPENAI_API_KEY,
        model::String = MODEL_EMBEDDING, 
        http_kwargs::NamedTuple = (retry_non_idempotent = true,
                                   retries = 5,
                                   readtimeout = 120),
        api_kwargs::NamedTuple = NamedTuple(),
        kwargs...) where {F <: Function}

The aiembed function generates embeddings for the given input using a specified model and returns a message object containing the embeddings, status, token count, and elapsed time.

Arguments

  • prompt_schema::AbstractOpenAISchema: The schema for the prompt.

  • doc_or_docs::Union{AbstractString, AbstractVector{<:AbstractString}}: The document or list of documents to generate embeddings for.

  • postprocess::F: The post-processing function to apply to each embedding. Defaults to the identity function.

  • verbose::Bool: A flag indicating whether to print verbose information. Defaults to true.

  • api_key::String: The API key to use for the OpenAI API. Defaults to OPENAI_API_KEY.

  • model::String: The model to use for generating embeddings. Defaults to MODEL_EMBEDDING.

  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to (retry_non_idempotent = true, retries = 5, readtimeout = 120).

  • api_kwargs::NamedTuple: Additional keyword arguments for the OpenAI API. Defaults to an empty NamedTuple.

  • kwargs...: Additional keyword arguments.

Returns

  • msg: A DataMessage object containing the embeddings, status, token count, and elapsed time. Use msg.content to access the embeddings.

Example

julia
msg = aiembed("Hello World")
msg.content # 1536-element JSON3.Array{Float64...

We can embed multiple strings at once; they will be horizontally concatenated (hcat) into a matrix (ie, each column corresponds to one string):

julia
msg = aiembed(["Hello World", "How are you?"])
msg.content # 1536×2 Matrix{Float64}:

If you plan to calculate the cosine distance between embeddings, you can normalize them first:

julia
using LinearAlgebra
msg = aiembed(["embed me", "and me too"], LinearAlgebra.normalize)

# calculate cosine distance between the two normalized embeddings as a simple dot product
msg.content' * msg.content[:, 1] # [1.0, 0.787]

source


# PromptingTools.aiextractMethod.
julia
aiextract(prompt_schema::AbstractAnthropicSchema, prompt::ALLOWED_PROMPT_TYPE;
    return_type::Union{Type, AbstractTool, Vector},
    verbose::Bool = true,
    api_key::String = ANTHROPIC_API_KEY,
    model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    http_kwargs::NamedTuple = (retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = NamedTuple(),
    cache::Union{Nothing, Symbol} = nothing,
    betas::Union{Nothing, Vector{Symbol}} = nothing,
    kwargs...)

Extract required information (defined by a struct return_type) from the provided prompt by leveraging Anthropic's function calling mode.

This is a perfect solution for extracting structured information from text (eg, extract organization names in news articles, etc.).

Read the best practices here.

It's effectively a light wrapper around the aigenerate call, which requires the additional keyword argument return_type and enforces that the model output adheres to it.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • return_type: A struct TYPE representing the information we want to extract. Do not provide a struct instance, only the type. If the struct has a docstring, it will be provided to the model as well. It's used to enforce structured model outputs or provide more information. Alternatively, you can provide a vector of field names and their types (see ?generate_struct function for the syntax).

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the Anthropic API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • no_system_message::Bool = false: If true, skips the system message in the conversation history.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments.

    • :tool_choice: A string indicating which tool to use. Supported values are nothing, "auto", "any" and "exact". nothing will use the default tool choice.
  • cache: A symbol representing the caching strategy to be used. Currently only nothing (no caching), :system, :tools, :last, :all_but_last, and :all are supported. Note: the cost estimate will be wrong (it ignores the caching).

    • :system: Mark only the system message as cacheable. Best default if you have large system message and you will be sending short conversations (no replies / multi-turn conversations).

    • :all: Mark SYSTEM, one before last and LAST user message as cacheable. Best for multi-turn conversations (you write cache point as "last" and it will be read in the next turn as "preceding" cache mark).

    • :last: Mark only the last message as cacheable. Use ONLY if you want to send the SAME REQUEST multiple times (and want to save up to the last USER message). This will not work for multi-turn conversations, as the "last" message keeps moving.

    • :all_but_last: Mark SYSTEM and one before LAST USER message. Use if you have a longer conversation that you want to re-use, but you will NOT CONTINUE it (no subsequent messages/follow-ups).

    • In short, use :all for multi-turn conversations, :system for repeated single-turn conversations with same system message, and :all_but_last for longer conversations that you want to re-use, but not continue.

  • betas::Union{Nothing, Vector{Symbol}}: A vector of symbols representing the beta features to be used. See ?anthropic_extra_headers for details.

  • kwargs: Prompt variables to be used to fill the prompt/template

Note: At the moment, the cache is only allowed for prompt segments over 1024 tokens (in some cases, over 2048 tokens). You'll get an error if you try to cache short prompts.

Returns

If return_all=false (default):

  • msg: A DataMessage object representing the extracted data, including the content, status, tokens, and elapsed time. Use msg.content to access the extracted data.

If return_all=true:

  • conversation: A vector of AbstractMessage objects representing the full conversation history, including the response from the AI model (DataMessage).

See also: tool_call_signature, MaybeExtract, ItemsExtract, aigenerate

Example

Do you want to extract some specific measurements from a text like age, weight and height? You need to define the information you need as a struct (return_type):

"Person's age, height, and weight."
struct MyMeasurement
    age::Int # required
    height::Union{Int,Nothing} # optional
    weight::Union{Nothing,Float64} # optional
end
msg = aiextract("James is 30, weighs 80kg. He's 180cm tall."; model="claudeh", return_type=MyMeasurement)
# PromptingTools.DataMessage(MyMeasurement)
msg.content
# MyMeasurement(30, 180, 80.0)

The fields that allow Nothing are marked as optional in the schema:

msg = aiextract("James is 30."; model="claudeh", return_type=MyMeasurement)
# MyMeasurement(30, nothing, nothing)

If there are multiple items you want to extract, define a wrapper struct to get a Vector of MyMeasurement:

struct ManyMeasurements
    measurements::Vector{MyMeasurement}
end

msg = aiextract("James is 30, weighs 80kg. He's 180cm tall. Then Jack is 19 but really tall - over 190!"; model="claudeh", return_type=ManyMeasurements)

msg.content.measurements
# 2-element Vector{MyMeasurement}:
#  MyMeasurement(30, 180, 80.0)
#  MyMeasurement(19, 190, nothing)

Or you can use the convenience wrapper ItemsExtract to extract multiple measurements (zero, one or more):

julia
using PromptingTools: ItemsExtract

return_type = ItemsExtract{MyMeasurement}
msg = aiextract("James is 30, weighs 80kg. He's 180cm tall. Then Jack is 19 but really tall - over 190!"; model="claudeh", return_type)

msg.content.items # see the extracted items

Or if you want your extraction to fail gracefully when data isn't found, use MaybeExtract{T} wrapper (this trick is inspired by the Instructor package!):

using PromptingTools: MaybeExtract

return_type = MaybeExtract{MyMeasurement}
# Effectively the same as:
# struct MaybeExtract{T}
#     result::Union{T, Nothing} // The result of the extraction
#     error::Bool // true if no result was found (the extraction failed), false otherwise
#     message::Union{Nothing, String} // Only present if no result is found; should be short and concise
# end

# If the LLM extraction fails, it will return the wrapper with `error = true` and an explanatory `message` instead of the result!
msg = aiextract("Extract measurements from the text: I am giraffe"; model="claudeo", return_type)
msg.content
# Output: MaybeExtract{MyMeasurement}(nothing, true, "I'm sorry, but your input of "I am giraffe" does not contain any information about a person's age, height or weight measurements that I can extract. To use this tool, please provide a statement that includes at least the person's age, and optionally their height in inches and weight in pounds. Without that information, I am unable to extract the requested measurements.")

That way, you can handle the error gracefully and get a reason why extraction failed (in msg.content.message).

However, this can fail with weaker models like claudeh, so we can apply one of our prompt templates with an embedded reasoning step:

julia
msg = aiextract(:ExtractDataCoTXML; data="I am giraffe", model="claudeh", return_type)
msg.content
# Output: MaybeExtract{MyMeasurement}(nothing, true, "The provided data does not contain the expected information about a person's age, height, and weight.")

Note that when using a prompt template, we provide data for the extraction as the corresponding placeholder (see aitemplates("extract") for documentation of this template).

Note that the error message refers to a giraffe not being a human, because in our MyMeasurement docstring, we said that it's for people!

Example of using a vector of field names with aiextract

julia
fields = [:location, :temperature => Float64, :condition => String]
msg = aiextract("Extract the following information from the text: location, temperature, condition. Text: The weather in New York is sunny and 72.5 degrees Fahrenheit."; 
return_type = fields, model="claudeh")

Or simply call aiextract("some text"; return_type = [:reasoning,:answer], model="claudeh") to get Chain-of-Thought reasoning for the extraction task.

The result will be returned in a newly generated type; you can check with PromptingTools.isextracted(msg.content) == true to confirm the data has been extracted correctly.

This new syntax also allows you to provide field-level descriptions, which will be passed to the model.

julia
fields_with_descriptions = [
    :location,
    :temperature => Float64,
    :temperature__description => "Temperature in degrees Fahrenheit",
    :condition => String,
    :condition__description => "Current weather condition (e.g., sunny, rainy, cloudy)"
]
msg = aiextract("The weather in New York is sunny and 72.5 degrees Fahrenheit."; return_type = fields_with_descriptions, model="claudeh")

source


# PromptingTools.aiextractMethod.
julia
aiextract(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE;
    return_type::Union{Type, AbstractTool, Vector},
    verbose::Bool = true,
    api_key::String = OPENAI_API_KEY,
    model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    http_kwargs::NamedTuple = (retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = (;
        tool_choice = nothing),
    strict::Union{Nothing, Bool} = nothing,
    kwargs...)

Extract required information (defined by a struct return_type) from the provided prompt by leveraging OpenAI function calling mode.

This is a perfect solution for extracting structured information from text (eg, extract organization names in news articles, etc.).

It's effectively a light wrapper around the aigenerate call, which requires the additional keyword argument return_type and enforces that the model output adheres to it.

!!! Note: The types must be CONCRETE; this helps with the correct conversion to the JSON schema and then back to the struct.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • return_type: A struct TYPE (or a Tool, or a vector of Types) representing the information we want to extract. Do not provide a struct instance, only the type. Alternatively, you can provide a vector of field names and their types (see ?generate_struct function for the syntax). If the struct has a docstring, it will be provided to the model as well. It's used to enforce structured model outputs or provide more information.

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the OpenAI API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments.

    • tool_choice: Specifies which tool to use for the API call. Usually one of "auto", "any", "exact"; nothing will pick a default. Defaults to "exact" for one tool and "auto" for many tools ("exact" is our made-up value to satisfy OpenAI's requirement when we want one exact function to be called). Providers like Mistral, Together, etc. use "any" instead.
  • strict::Union{Nothing, Bool} = nothing: A boolean indicating whether to enforce strict generation of the response (supported only for OpenAI models). It has additional latency for the first request. If nothing, standard function calling is used.

  • json_mode::Union{Nothing, Bool} = nothing: If json_mode = true, we use JSON mode for the response (supported only for OpenAI models). If nothing, standard function calling is used. JSON mode is understood to be more creative and smarter than function calling mode, as it's not masquerading as a function call, but there is extra latency on the first request to produce the grammar for constrained sampling.

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

If return_all=false (default):

  • msg: A DataMessage object representing the extracted data, including the content, status, tokens, and elapsed time. Use msg.content to access the extracted data.

If return_all=true:

  • conversation: A vector of AbstractMessage objects representing the full conversation history, including the response from the AI model (DataMessage).

Note: msg.content can be a single object (if a single tool is used) or a vector of objects (if multiple tools are used)!

See also: tool_call_signature, MaybeExtract, ItemsExtract, aigenerate, generate_struct

Example

Do you want to extract some specific measurements from a text like age, weight and height? You need to define the information you need as a struct (return_type):

"Person's age, height, and weight."
struct MyMeasurement
    age::Int # required
    height::Union{Int,Nothing} # optional
    weight::Union{Nothing,Float64} # optional
end
msg = aiextract("James is 30, weighs 80kg. He's 180cm tall."; return_type=MyMeasurement)
# PromptingTools.DataMessage(MyMeasurement)
msg.content
# MyMeasurement(30, 180, 80.0)

The fields that allow Nothing are marked as optional in the schema:

msg = aiextract("James is 30."; return_type=MyMeasurement)
# MyMeasurement(30, nothing, nothing)

If there are multiple items you want to extract, define a wrapper struct to get a Vector of MyMeasurement:

struct ManyMeasurements
    measurements::Vector{MyMeasurement}
end

msg = aiextract("James is 30, weighs 80kg. He's 180cm tall. Then Jack is 19 but really tall - over 190!"; return_type=ManyMeasurements)

msg.content.measurements
# 2-element Vector{MyMeasurement}:
#  MyMeasurement(30, 180, 80.0)
#  MyMeasurement(19, 190, nothing)

Or you can use the convenience wrapper ItemsExtract to extract multiple measurements (zero, one or more):

julia
using PromptingTools: ItemsExtract

return_type = ItemsExtract{MyMeasurement}
msg = aiextract("James is 30, weighs 80kg. He's 180cm tall. Then Jack is 19 but really tall - over 190!"; return_type)

msg.content.items # see the extracted items

Or if you want your extraction to fail gracefully when data isn't found, use MaybeExtract{T} wrapper (this trick is inspired by the Instructor package!):

using PromptingTools: MaybeExtract

return_type = MaybeExtract{MyMeasurement}
# Effectively the same as:
# struct MaybeExtract{T}
#     result::Union{T, Nothing} // The result of the extraction
#     error::Bool // true if no result was found (the extraction failed), false otherwise
#     message::Union{Nothing, String} // Only present if no result is found; should be short and concise
# end

# If the LLM extraction fails, it will return the wrapper with `error = true` and an explanatory `message` instead of the result!
msg = aiextract("Extract measurements from the text: I am giraffe"; return_type)
msg.content
# MaybeExtract{MyMeasurement}(nothing, true, "I'm sorry, but I can only assist with human measurements.")

That way, you can handle the error gracefully and get a reason why extraction failed (in msg.content.message).

Note that the error message refers to a giraffe not being a human, because in our MyMeasurement docstring, we said that it's for people!
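
If you want the model to adhere to the provided schema exactly, you can set strict = true (OpenAI's strict structured generation; expect additional latency on the first request). A small sketch, re-using the MyMeasurement struct from above:

julia
msg = aiextract("James is 30, weighs 80kg. He's 180cm tall."; return_type = MyMeasurement, strict = true)
msg.content
# MyMeasurement(30, 180, 80.0)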

Some non-OpenAI providers require a different specification of the "tool choice" than OpenAI. For example, to use Mistral models ("mistrall" for mistral large), do:

julia
"Some fruit"
struct Fruit
    name::String
end
aiextract("I ate an apple",return_type=Fruit,api_kwargs=(;tool_choice="any"),model="mistrall")
# Notice two differences: 1) struct MUST have a docstring, 2) tool_choice is set explicitly set to "any"

Example of using a vector of field names with aiextract

julia
fields = [:location, :temperature => Float64, :condition => String]
msg = aiextract("Extract the following information from the text: location, temperature, condition. Text: The weather in New York is sunny and 72.5 degrees Fahrenheit."; return_type = fields)

Or simply call aiextract("some text"; return_type = [:reasoning,:answer]) to get Chain-of-Thought reasoning for the extraction task.

The result will be returned in a newly generated type; you can check with PromptingTools.isextracted(msg.content) == true to confirm the data has been extracted correctly.

This new syntax also allows you to provide field-level descriptions, which will be passed to the model.

julia
fields_with_descriptions = [
    :location,
    :temperature => Float64,
    :temperature__description => "Temperature in degrees Fahrenheit",
    :condition => String,
    :condition__description => "Current weather condition (e.g., sunny, rainy, cloudy)"
]
msg = aiextract("The weather in New York is sunny and 72.5 degrees Fahrenheit."; return_type = fields_with_descriptions)

If you feel that the extraction is not smart/creative enough, you can use json_mode = true to enforce the JSON mode, which automatically enables the structured output mode (as opposed to function calling mode).

The JSON mode is useful for cases when you want to enforce a specific output format, such as JSON, and want the model to adhere to that format, but don't want to pretend it's a "function call". Expect a few seconds' delay on the first call for a specific struct, as the provider has to produce the constrained grammar first.

julia
msg = aiextract("Extract the following information from the text: location, temperature, condition. Text: The weather in New York is sunny and 72.5 degrees Fahrenheit."; 
return_type = fields_with_descriptions, json_mode = true)
# PromptingTools.DataMessage(NamedTuple)

msg.content
# (location = "New York", temperature = 72.5, condition = "sunny")

It works equally well for structs provided as return types:

julia
msg = aiextract("James is 30, weighs 80kg. He's 180cm tall."; return_type=MyMeasurement, json_mode=true)

source


# PromptingTools.aiextractMethod.
julia
aiextract(tracer_schema::AbstractTracerSchema, prompt::ALLOWED_PROMPT_TYPE;
    tracer_kwargs = NamedTuple(), model = "", kwargs...)

Wraps the normal aiextract call in a tracing/callback system. Use tracer_kwargs to provide any information necessary to the tracer/callback system only (eg, parent_id, thread_id, run_id).

Logic:

  • calls initialize_tracer

  • calls aiextract (with the tracer_schema.schema)

  • calls finalize_tracer

source


# PromptingTools.aigenerateMethod.
julia
aigenerate(prompt_schema::AbstractAnthropicSchema, prompt::ALLOWED_PROMPT_TYPE; verbose::Bool = true,
    api_key::String = ANTHROPIC_API_KEY, model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    streamcallback::Any = nothing,
    no_system_message::Bool = false,
    aiprefill::Union{Nothing, AbstractString} = nothing,
    http_kwargs::NamedTuple = NamedTuple(), api_kwargs::NamedTuple = NamedTuple(),
    cache::Union{Nothing, Symbol} = nothing,
    betas::Union{Nothing, Vector{Symbol}} = nothing,
    kwargs...)

Generate an AI response based on a given prompt using the Anthropic API.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema, not AbstractAnthropicSchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • verbose: A boolean indicating whether to print additional information.

  • api_key: API key for the Anthropic API. Defaults to ANTHROPIC_API_KEY (loaded via ENV["ANTHROPIC_API_KEY"]).

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES, eg, "claudeh".

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation::AbstractVector{<:AbstractMessage}=[]: Not allowed for this schema. Provided only for compatibility.

  • streamcallback::Any: A callback function to handle streaming responses. Can be simply stdout or StreamCallback object. See ?StreamCallback for details. Note: We configure the StreamCallback (and necessary api_kwargs) for you, unless you specify the flavor. See ?configure_callback! for details.

  • no_system_message::Bool=false: If true, do not include the default system message in the conversation history OR convert any provided system message to a user message.

  • aiprefill::Union{Nothing, AbstractString}: A string to be used as a prefill for the AI response. This steers the AI response in a certain direction (and can potentially save output tokens). It MUST NOT end with trailing whitespace. Useful for JSON formatting.

  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to empty NamedTuple.

  • api_kwargs::NamedTuple: Additional keyword arguments for the Anthropic API. Defaults to an empty NamedTuple.

    • max_tokens::Int: The maximum number of tokens to generate. Defaults to 2048, because it's a required parameter for the API.
  • cache: A symbol representing the caching strategy to be used. Currently only nothing (no caching), :system, :tools, :last, :all_but_last, and :all are supported. Note that the cost estimate will be wrong (it ignores the caching).

    • :system: Mark only the system message as cacheable. Best default if you have large system message and you will be sending short conversations (no replies / multi-turn conversations).

    • :all: Mark SYSTEM, one before last and LAST user message as cacheable. Best for multi-turn conversations (you write cache point as "last" and it will be read in the next turn as "preceding" cache mark).

    • :last: Mark only the last message as cacheable. Use ONLY if you want to send the SAME REQUEST multiple times (and want to save up to the last USER message). This will not work for multi-turn conversations, as the "last" message keeps moving.

    • :all_but_last: Mark SYSTEM and one before LAST USER message. Use if you have a longer conversation that you want to re-use, but you will NOT CONTINUE it (no subsequent messages/follow-ups).

    • In short, use :all for multi-turn conversations, :system for repeated single-turn conversations with same system message, and :all_but_last for longer conversations that you want to re-use, but not continue.

  • betas::Union{Nothing, Vector{Symbol}}: A vector of symbols representing the beta features to be used. See ?anthropic_extra_headers for details.

  • kwargs: Prompt variables to be used to fill the prompt/template

Note: At the moment, the cache is only allowed for prompt segments over 1024 tokens (in some cases, over 2048 tokens). You'll get an error if you try to cache short prompts.
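
For instance, a minimal sketch of caching a large system prompt across repeated single-turn calls (big_system_prompt is a hypothetical placeholder for your own instructions; remember it must exceed the minimum cacheable length):

julia
const PT = PromptingTools
# `big_system_prompt` is a placeholder for your own long instructions (>1024 tokens)
conversation = [PT.SystemMessage(big_system_prompt), PT.UserMessage("Say hi!")]
msg = aigenerate(conversation; model = "claudeh", cache = :system)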

Returns

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

See also: ai_str, aai_str

Example

Simple hello world to test the API:

julia
const PT = PromptingTools
schema = PT.AnthropicSchema() # We need to be explicit if we want Anthropic; otherwise, OpenAISchema is the default

msg = aigenerate(schema, "Say hi!"; model="claudeh") #claudeh is the model alias for Claude 3 Haiku, fast and cheap model
[ Info: Tokens: 21 @ Cost: $0.0 in 0.6 seconds
AIMessage("Hello!")

msg is an AIMessage object. Access the generated string via content property:

julia
typeof(msg) # AIMessage{SubString{String}}
propertynames(msg) # (:content, :status, :tokens, :elapsed, :cost, :log_prob, :finish_reason, :run_id, :sample_id, :_type)
msg.content # "Hello!

Note: We need to be explicit about the schema we want to use. If we don't, it will default to OpenAISchema (=PT.DEFAULT_SCHEMA). Alternatively, if you provide a known model name or alias (eg, claudeh for Claude 3 Haiku - see MODEL_REGISTRY), the schema will be inferred from the model name.

We will use the Claude 3 Haiku model for the following examples, so there is no need to specify the schema. See also "claudeo" and "claudes" for other Claude 3 models.

You can use string interpolation:

julia
const PT = PromptingTools

a = 1
msg=aigenerate("What is `$a+$a`?"; model="claudeh")
msg.content # "The answer to `1+1` is `2`."

You can provide the whole conversation or more intricate prompts as a Vector{AbstractMessage}. Claude models are good at completing conversations that ended with an AIMessage (they just continue where it left off):

julia
const PT = PromptingTools

conversation = [
    PT.SystemMessage("You're master Yoda from Star Wars trying to help the user become a Yedi."),
    PT.UserMessage("I have feelings for my iPhone. What should I do?"),
    PT.AIMessage("Hmm, strong the attachment is,")]

msg = aigenerate(conversation; model="claudeh")
AIMessage("I sense. But unhealthy it may be. Your iPhone, a tool it is, not a living being. Feelings of affection, understandable they are, <continues>")

Example of streaming:

julia
# Simplest usage, just provide where to stream the text
msg = aigenerate("Count from 1 to 100."; streamcallback = stdout, model="claudeh")

streamcallback = PT.StreamCallback()
msg = aigenerate("Count from 1 to 100."; streamcallback, model="claudeh")
# this allows you to inspect each chunk with `streamcallback.chunks`. You can then empty it with `empty!(streamcallback)` in between repeated calls.

# Get verbose output with details of each chunk
streamcallback = PT.StreamCallback(; verbose=true, throw_on_error=true)
msg = aigenerate("Count from 1 to 10."; streamcallback, model="claudeh")

Note: Streaming support is only for Anthropic models and it doesn't yet support tool calling and a few other features (logprobs, refusals, etc.)

You can also provide a prefill for the AI response to steer the response in a certain direction (eg, formatting, style):

julia
msg = aigenerate("Sum up 1 to 100."; aiprefill = "I'd be happy to answer in one number without any additional text. The answer is:", model="claudeh")

Note: It MUST NOT end with trailing whitespace. You'll get an API error if it does.

source


# PromptingTools.aigenerateMethod.
julia
aigenerate(prompt_schema::AbstractGoogleSchema, prompt::ALLOWED_PROMPT_TYPE;
    verbose::Bool = true,
    api_key::String = GOOGLE_API_KEY,
    model::String = "gemini-pro", return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    http_kwargs::NamedTuple = (retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = NamedTuple(),
    kwargs...)

Generate an AI response based on a given prompt using the Google Gemini API. Get the API key here.

Note:

  • There is no "cost" reported as of February 2024, as all access seems to be free-of-charge. See the details here.

  • tokens in the returned AIMessage are actually characters, not tokens. We use a conservative estimate as they are not provided by the API yet.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the Google Gemini API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES. Defaults to "gemini-pro".

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • no_system_message::Bool=false: If true, do not include the default system message in the conversation history OR convert any provided system message to a user message.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments.

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

If return_all=false (default):

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

If return_all=true:

  • conversation: A vector of AbstractMessage objects representing the conversation history, including the response from the AI model (AIMessage).

See also: ai_str, aai_str, aiembed, aiclassify, aiextract, aiscan, aitemplates

Example

Simple hello world to test the API:

julia
result = aigenerate("Say Hi!"; model="gemini-pro")
# AIMessage("Hi there! 👋 I'm here to help you with any questions or tasks you may have. Just let me know what you need, and I'll do my best to assist you.")

result is an AIMessage object. Access the generated string via content property:

julia
typeof(result) # AIMessage{SubString{String}}
propertynames(result) # (:content, :status, :tokens, :elapsed
result.content # "Hi there! ...

You can use string interpolation and the alias "gemini":

julia
a = 1
msg=aigenerate("What is `$a+$a`?"; model="gemini")
msg.content # "1+1 is 2."

You can provide the whole conversation or more intricate prompts as a Vector{AbstractMessage}:

julia
const PT = PromptingTools

conversation = [
    PT.SystemMessage("You're master Yoda from Star Wars trying to help the user become a Yedi."),
    PT.UserMessage("I have feelings for my iPhone. What should I do?")]
msg=aigenerate(conversation; model="gemini")
# AIMessage("Young Padawan, you have stumbled into a dangerous path.... <continues>")

source


# PromptingTools.aigenerateMethod.
julia
aigenerate(prompt_schema::AbstractOllamaManagedSchema, prompt::ALLOWED_PROMPT_TYPE; verbose::Bool = true,
    api_key::String = "", model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    streamcallback::Any = nothing,
    http_kwargs::NamedTuple = NamedTuple(), api_kwargs::NamedTuple = NamedTuple(),
    kwargs...)

Generate an AI response based on a given prompt using the Ollama API.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema, not AbstractManagedSchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • verbose: A boolean indicating whether to print additional information.

  • api_key: Provided for interface consistency. Not needed for locally hosted Ollama.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation::AbstractVector{<:AbstractMessage}=[]: Not allowed for this schema. Provided only for compatibility.

  • streamcallback::Any: Just for compatibility. Not supported for this schema.

  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to empty NamedTuple.

  • api_kwargs::NamedTuple: Additional keyword arguments for the Ollama API. Defaults to an empty NamedTuple.

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

See also: ai_str, aai_str, aiembed

Example

Simple hello world to test the API:

julia
const PT = PromptingTools
schema = PT.OllamaManagedSchema() # We need to be explicit if we want Ollama; otherwise, OpenAISchema is the default

msg = aigenerate(schema, "Say hi!"; model="openhermes2.5-mistral")
# [ Info: Tokens: 69 in 0.9 seconds
# AIMessage("Hello! How can I assist you today?")

msg is an AIMessage object. Access the generated string via content property:

julia
typeof(msg) # AIMessage{SubString{String}}
propertynames(msg) # (:content, :status, :tokens, :elapsed
msg.content # "Hello! How can I assist you today?"

Note: We need to be explicit about the schema we want to use. If we don't, it will default to OpenAISchema (=PT.DEFAULT_SCHEMA).

You can use string interpolation:

julia
const PT = PromptingTools
schema = PT.OllamaManagedSchema()
a = 1
msg=aigenerate(schema, "What is `$a+$a`?"; model="openhermes2.5-mistral")
msg.content # "The result of `1+1` is `2`."

You can provide the whole conversation or more intricate prompts as a Vector{AbstractMessage}:

julia
const PT = PromptingTools
schema = PT.OllamaManagedSchema()

conversation = [
    PT.SystemMessage("You're master Yoda from Star Wars trying to help the user become a Yedi."),
    PT.UserMessage("I have feelings for my iPhone. What should I do?")]

msg = aigenerate(schema, conversation; model="openhermes2.5-mistral")
# [ Info: Tokens: 111 in 2.1 seconds
# AIMessage("Strong the attachment is, it leads to suffering it may. Focus on the force within you must, ...<continues>")

Note: Managed Ollama currently supports at most 1 User Message and 1 System Message given the API limitations. If you want more, you need to use the ChatMLSchema.

source


# PromptingTools.aigenerateMethod.
julia
aigenerate(prompt_schema::AbstractOllamaSchema, prompt::ALLOWED_PROMPT_TYPE; verbose::Bool = true,
    api_key::String = "", model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    streamcallback::Any = nothing,
    http_kwargs::NamedTuple = NamedTuple(), api_kwargs::NamedTuple = NamedTuple(),
    kwargs...)

Generate an AI response based on a given prompt using the Ollama API.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema, not AbstractManagedSchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • verbose: A boolean indicating whether to print additional information.

  • api_key: Provided for interface consistency. Not needed for locally hosted Ollama.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation::AbstractVector{<:AbstractMessage}=[]: Not allowed for this schema. Provided only for compatibility.

  • streamcallback: A callback function to handle streaming responses. Can be simply stdout or a StreamCallback object. See ?StreamCallback for details.

  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to empty NamedTuple.

  • api_kwargs::NamedTuple: Additional keyword arguments for the Ollama API. Defaults to an empty NamedTuple.

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

See also: ai_str, aai_str, aiembed

Example

Simple hello world to test the API:

julia
const PT = PromptingTools
schema = PT.OllamaSchema() # We need to be explicit if we want Ollama; otherwise, OpenAISchema is the default

msg = aigenerate(schema, "Say hi!"; model="openhermes2.5-mistral")
# [ Info: Tokens: 69 in 0.9 seconds
# AIMessage("Hello! How can I assist you today?")

msg is an AIMessage object. Access the generated string via content property:

julia
typeof(msg) # AIMessage{SubString{String}}
propertynames(msg) # (:content, :status, :tokens, :elapsed
msg.content # "Hello! How can I assist you today?"

Note: We need to be explicit about the schema we want to use. If we don't, it will default to OpenAISchema (=PT.DEFAULT_SCHEMA).

You can use string interpolation:

julia
const PT = PromptingTools
schema = PT.OllamaSchema()
a = 1
msg=aigenerate(schema, "What is `$a+$a`?"; model="openhermes2.5-mistral")
msg.content # "The result of `1+1` is `2`."

You can provide the whole conversation or more intricate prompts as a Vector{AbstractMessage}:

julia
const PT = PromptingTools
schema = PT.OllamaSchema()

conversation = [
    PT.SystemMessage("You're master Yoda from Star Wars trying to help the user become a Yedi."),
    PT.UserMessage("I have feelings for my iPhone. What should I do?")]

msg = aigenerate(schema, conversation; model="openhermes2.5-mistral")
# [ Info: Tokens: 111 in 2.1 seconds
# AIMessage("Strong the attachment is, it leads to suffering it may. Focus on the force within you must, ...<continues>")

To add streaming, use the streamcallback argument.

julia
msg = aigenerate("Count from 1 to 10."; streamcallback = stdout)

Or if you prefer to have more control, use a StreamCallback object.

julia
streamcallback = PT.StreamCallback()
msg = aigenerate("Count from 1 to 10."; streamcallback)

WARNING: If you provide a StreamCallback object with a flavor, we assume you want to configure everything yourself, so you need to make sure to set stream = true in the api_kwargs!

julia
streamcallback = PT.StreamCallback(; flavor = PT.OllamaStream())
msg = aigenerate("Count from 1 to 10."; streamcallback, api_kwargs = (; stream = true))

source


# PromptingTools.aigenerateMethod.
julia
aigenerate(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE;
    verbose::Bool = true,
    api_key::String = OPENAI_API_KEY,
    model::String = MODEL_CHAT, return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    streamcallback::Any = nothing,
    no_system_message::Bool = false,
    name_user::Union{Nothing, String} = nothing,
    name_assistant::Union{Nothing, String} = nothing,
    http_kwargs::NamedTuple = (retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = NamedTuple(),
    kwargs...)

Generate an AI response based on a given prompt using the OpenAI API.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the OpenAI API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • streamcallback: A callback function to handle streaming responses. Can be simply stdout or a StreamCallback object. See ?StreamCallback for details. Note: We configure the StreamCallback (and necessary api_kwargs) for you, unless you specify the flavor. See ?configure_callback! for details.

  • no_system_message::Bool=false: If true, the default system message is not included in the conversation history. Any existing system message is converted to a UserMessage.

  • name_user::Union{Nothing, String} = nothing: The name to use for the user in the conversation history. Defaults to nothing.

  • name_assistant::Union{Nothing, String} = nothing: The name to use for the assistant in the conversation history. Defaults to nothing.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments. Useful parameters include the following (see the sketch after this list):

    • temperature: A float representing the temperature for sampling (ie, the amount of "creativity"). Often defaults to 0.7.

    • logprobs: A boolean indicating whether to return log probabilities for each token. Defaults to false.

    • n: An integer representing the number of completions to generate at once (if supported).

    • stop: A vector of strings representing the stop conditions for the conversation. Defaults to an empty vector.

  • kwargs: Prompt variables to be used to fill the prompt/template
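
For instance, a quick sketch with illustrative values (a lower temperature for more deterministic output, plus a stop sequence):

julia
msg = aigenerate("List three colors, one per line.";
    api_kwargs = (; temperature = 0.2, stop = ["\n\n"]))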

Returns

If return_all=false (default):

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

If return_all=true:

  • conversation: A vector of AbstractMessage objects representing the conversation history, including the response from the AI model (AIMessage).

See also: ai_str, aai_str, aiembed, aiclassify, aiextract, aiscan, aitemplates

Example

Simple hello world to test the API:

julia
result = aigenerate("Say Hi!")
# [ Info: Tokens: 29 @ Cost: $0.0 in 1.0 seconds
# AIMessage("Hello! How can I assist you today?")

result is an AIMessage object. Access the generated string via content property:

julia
typeof(result) # AIMessage{SubString{String}}
propertynames(result) # (:content, :status, :tokens, :elapsed
result.content # "Hello! How can I assist you today?"

You can use string interpolation:

julia
a = 1
msg=aigenerate("What is `$a+$a`?")
msg.content # "The sum of `1+1` is `2`."

You can provide the whole conversation or more intricate prompts as a Vector{AbstractMessage}:

julia
const PT = PromptingTools

conversation = [
    PT.SystemMessage("You're master Yoda from Star Wars trying to help the user become a Yedi."),
    PT.UserMessage("I have feelings for my iPhone. What should I do?")]
msg=aigenerate(conversation)
# AIMessage("Ah, strong feelings you have for your iPhone. A Jedi's path, this is not... <continues>")

Example of streaming:

julia
# Simplest usage, just provide where to stream the text
msg = aigenerate("Count from 1 to 100."; streamcallback = stdout)

streamcallback = PT.StreamCallback()
msg = aigenerate("Count from 1 to 100."; streamcallback)
# this allows you to inspect each chunk with `streamcallback.chunks`. You can then empty it with `empty!(streamcallback)` in between repeated calls.

# Get verbose output with details of each chunk
streamcallback = PT.StreamCallback(; verbose=true, throw_on_error=true)
msg = aigenerate("Count from 1 to 10."; streamcallback)

WARNING: If you provide a StreamCallback object with a flavor, we assume you want to configure everything yourself, so you need to make sure to set stream = true in the api_kwargs!

Learn more in ?StreamCallback. Note: Streaming support is only for OpenAI models and it doesn't yet support tool calling and a few other features (logprobs, refusals, etc.)

source


# PromptingTools.aigenerateMethod.
julia
aigenerate(schema::AbstractPromptSchema,
    mem::ConversationMemory; kwargs...)

Generate a response using the conversation memory context.
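
Example (a hypothetical sketch; assumes the memory has been populated beforehand, eg, via push!):

julia
const PT = PromptingTools
mem = PT.ConversationMemory()
push!(mem, PT.UserMessage("Hi, my name is Jan."))  # hypothetical seeding of the memory
msg = aigenerate(PT.OpenAISchema(), mem; model = "gpt4t")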

source


# PromptingTools.aigenerateMethod.
julia
aigenerate(tracer_schema::AbstractTracerSchema, prompt::ALLOWED_PROMPT_TYPE;
    tracer_kwargs = NamedTuple(), model = "", return_all::Bool = false, kwargs...)

Wraps the normal aigenerate call in a tracing/callback system. Use tracer_kwargs to provide any information necessary to the tracer/callback system only (eg, parent_id, thread_id, run_id).

Logic:

  • calls initialize_tracer

  • calls aigenerate (with the tracer_schema.schema)

  • calls finalize_tracer

Example

julia
wrap_schema = PT.TracerSchema(PT.OpenAISchema())
msg = aigenerate(wrap_schema, "Say hi!"; model = "gpt4t")
msg isa TracerMessage # true
msg.content # access content like if it was the message
PT.pprint(msg) # pretty-print the message

It works on a vector of messages and converts only the non-tracer ones, eg,

julia
wrap_schema = PT.TracerSchema(PT.OpenAISchema())
conv = aigenerate(wrap_schema, "Say hi!"; model = "gpt4t", return_all = true)
all(PT.istracermessage, conv) #true

source


# PromptingTools.aiimageMethod.
julia
aiimage(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE;
    image_size::AbstractString = "1024x1024",
    image_quality::AbstractString = "standard",
    image_n::Integer = 1,
    verbose::Bool = true,
    api_key::String = OPENAI_API_KEY,
    model::String = MODEL_IMAGE_GENERATION,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    http_kwargs::NamedTuple = (retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = NamedTuple(),
    kwargs...)

Generates an image from the provided prompt. If multiple "messages" are provided in prompt, it extracts the text ONLY from the last message!

The image (or a reference to it) will be returned in DataMessage.content; the format depends on the api_kwargs.response_format you set.

Can be used for generating images of varying quality and style with dall-e-* models. This function DOES NOT SUPPORT multi-turn conversations (ie, do not provide previous conversation via conversation argument).

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • image_size: String-based resolution of the image, eg, "1024x1024". Only some resolutions are supported - see the API docs.

  • image_quality: It can be either "standard" or "hd". Defaults to "standard".

  • image_n: The number of images to generate. Currently, only single image generation is allowed (image_n = 1).

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the OpenAI API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES. Defaults to MODEL_IMAGE_GENERATION.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. Currently, NOT ALLOWED.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments. Several important arguments are highlighted below:

    • response_format: The format the image should be returned in. Can be one of "url" or "b64_json". Defaults to "url" (the link becomes inactive after 60 minutes).

    • style: The style of the generated images (DALL-E 3 only). Can be either "vivid" or "natural". Defaults to "vivid".

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

If return_all=false (default):

  • msg: A DataMessage object representing one or more generated images, including the rewritten prompt if relevant, status, and elapsed time.

Use msg.content to access the extracted string.

If return_all=true:

  • conversation: A vector of AbstractMessage objects representing the full conversation history, including the response from the AI model (DataMessage).

See also: ai_str, aai_str, aigenerate, aiembed, aiclassify, aiextract, aiscan, aitemplates

Notes

  • This function DOES NOT SUPPORT multi-turn conversations (ie, do not provide previous conversation via conversation argument).

  • There is no token tracking provided by the API, so the messages will NOT report any cost despite costing you money!

  • You MUST download any URL-based images within 60 minutes. The links will become inactive.

Example

Generate an image:

julia
# You can experiment with `image_size`, `image_quality` kwargs!
msg = aiimage("A white cat on a car")

# Download the image into a file
using Downloads
Downloads.download(msg.content[:url], "cat_on_car.png")

# You can also see the revised prompt that DALL-E 3 used
msg.content[:revised_prompt]
# Output: "Visualize a pristine white cat gracefully perched atop a shiny car. 
# The cat's fur is stark white and its eyes bright with curiosity. 
# As for the car, it could be a contemporary sedan, glossy and in a vibrant color. 
# The scene could be set under the blue sky, enhancing the contrast between the white cat, the colorful car, and the bright blue sky."

Note that you MUST download any URL-based images within 60 minutes. The links will become inactive.

If you want to download the image directly into the DataMessage, provide response_format="b64_json" in api_kwargs:

julia
msg = aiimage("A white cat on a car"; image_quality="hd", api_kwargs=(; response_format="b64_json"))

# Then you need to use Base64 package to decode it and save it to a file:
using Base64
write("cat_on_car_hd.png", base64decode(msg.content[:b64_json]));

source


# PromptingTools.aiimageMethod.
julia
aiimage(tracer_schema::AbstractTracerSchema, prompt::ALLOWED_PROMPT_TYPE;
    tracer_kwargs = NamedTuple(), model = "", kwargs...)

Wraps the normal aiimage call in a tracing/callback system. Use tracer_kwargs to provide any information necessary to the tracer/callback system only (eg, parent_id, thread_id, run_id).

Logic:

  • calls initialize_tracer

  • calls aiimage (with the tracer_schema.schema)

  • calls finalize_tracer

source


# PromptingTools.aiscanMethod.
julia
aiscan([prompt_schema::AbstractOllamaSchema,] prompt::ALLOWED_PROMPT_TYPE; 
image_url::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing,
image_path::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing,
attach_to_latest::Bool = true,
verbose::Bool = true, api_key::String = OPENAI_API_KEY,
    model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    http_kwargs::NamedTuple = (;
        retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), 
    api_kwargs::NamedTuple = (; max_tokens = 2500),
    kwargs...)

Scans the provided image (image_url or image_path) with the goal provided in the prompt.

Can be used for many multi-modal tasks, such as: OCR (transcribe text in the image), image captioning, image classification, etc.

It's effectively a light wrapper around the aigenerate call, with additional keyword arguments image_url, image_path, and image_detail. At least one image source (url or path) must be provided.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Default to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • image_url: A string or vector of strings representing the URL(s) of the image(s) to scan.

  • image_path: A string or vector of strings representing the path(s) of the image(s) to scan.

  • image_detail: A string representing the level of detail to include for images. Can be "auto", "high", or "low". See OpenAI Vision Guide for more details.

  • attach_to_latest: A boolean that controls how a conversation with multiple UserMessages is handled. When true, the images are attached to the latest UserMessage.

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the OpenAI API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments.

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

If return_all=false (default):

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

If return_all=true:

  • conversation: A vector of AbstractMessage objects representing the full conversation history, including the response from the AI model (AIMessage).

See also: ai_str, aai_str, aigenerate, aiembed, aiclassify, aiextract, aitemplates

Notes

  • All examples below use model "bakllava", a locally hosted vision model served via Ollama

  • max_tokens in the api_kwargs is preset to 2500, otherwise OpenAI enforces a default of only a few hundred tokens (~300). If your output is truncated, increase this value

Example

Describe the provided image:

julia
msg = aiscan("Describe the image"; image_path="julia.png", model="bakllava")
# [ Info: Tokens: 1141 @ Cost: $0.0117 in 2.2 seconds
# AIMessage("The image shows a logo consisting of the word "julia" written in lowercase")

You can provide multiple images at once as a vector:

julia
msg = aiscan("Describe the image"; image_path=["julia.png","python.png"], model="bakllava")

You can use this function as a nice and quick OCR (transcribe text in the image) with a template :OCRTask. Let's transcribe some SQL code from a screenshot (no more re-typing!):

julia
using Downloads
# Screenshot of some SQL code -- we cannot use image_url directly, so we need to download it first
image_url = "https://www.sqlservercentral.com/wp-content/uploads/legacy/8755f69180b7ac7ee76a69ae68ec36872a116ad4/24622.png"
image_path = Downloads.download(image_url)
msg = aiscan(:OCRTask; image_path, model="bakllava", task="Transcribe the SQL code in the image.", api_kwargs=(; max_tokens=2500))

# AIMessage("```sql
# update Orders <continue>

# You can add syntax highlighting of the outputs via Markdown
using Markdown
msg.content |> Markdown.parse

Local models cannot handle image URLs directly (image_url), so you need to download the image first and provide it as image_path:

julia
using Downloads
image_path = Downloads.download(image_url)

Notice that we set max_tokens = 2500. If your outputs seem truncated, it might be because the default maximum tokens on the server is set too low!

source


# PromptingTools.aiscanMethod.
julia
aiscan([prompt_schema::AbstractOpenAISchema,] prompt::ALLOWED_PROMPT_TYPE; 
image_url::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing,
image_path::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing,
image_detail::AbstractString = "auto",
attach_to_latest::Bool = true,
verbose::Bool = true, api_key::String = OPENAI_API_KEY,
    model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    http_kwargs::NamedTuple = (;
        retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), 
    api_kwargs::NamedTuple = (; max_tokens = 2500),
    kwargs...)

Scans the provided image (image_url or image_path) with the goal provided in the prompt.

Can be used for many multi-modal tasks, such as: OCR (transcribe text in the image), image captioning, image classification, etc.

It's effectively a light wrapper around the aigenerate call, with additional keyword arguments image_url, image_path, image_detail. At least one image source (url or path) must be provided.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Default to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • image_url: A string or vector of strings representing the URL(s) of the image(s) to scan.

  • image_path: A string or vector of strings representing the path(s) of the image(s) to scan.

  • image_detail: A string representing the level of detail to include for images. Can be "auto", "high", or "low". See OpenAI Vision Guide for more details.

  • attach_to_latest: A boolean that controls how a conversation with multiple UserMessages is handled. When true, the images are attached to the latest UserMessage.

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the OpenAI API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments.

  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

If return_all=false (default):

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

If return_all=true:

  • conversation: A vector of AbstractMessage objects representing the full conversation history, including the response from the AI model (AIMessage).

See also: ai_str, aai_str, aigenerate, aiembed, aiclassify, aiextract, aitemplates

Notes

  • All examples below use model "gpt4v", which is an alias for model ID "gpt-4-vision-preview"

  • max_tokens in the api_kwargs is preset to 2500, otherwise OpenAI enforces a default of only a few hundred tokens (~300). If your output is truncated, increase this value

Example

Describe the provided image:

julia
msg = aiscan("Describe the image"; image_path="julia.png", model="gpt4v")
# [ Info: Tokens: 1141 @ Cost: $0.0117 in 2.2 seconds
# AIMessage("The image shows a logo consisting of the word "julia" written in lowercase")

You can provide multiple images at once as a vector and ask for "low" level of detail (cheaper):

julia
msg = aiscan("Describe the image"; image_path=["julia.png","python.png"], image_detail="low", model="gpt4v")

You can use this function as a nice and quick OCR (transcribe text in the image) with a template :OCRTask. Let's transcribe some SQL code from a screenshot (no more re-typing!):

julia
# Screenshot of some SQL code
image_url = "https://www.sqlservercentral.com/wp-content/uploads/legacy/8755f69180b7ac7ee76a69ae68ec36872a116ad4/24622.png"
msg = aiscan(:OCRTask; image_url, model="gpt4v", task="Transcribe the SQL code in the image.", api_kwargs=(; max_tokens=2500))

# [ Info: Tokens: 362 @ Cost: $0.0045 in 2.5 seconds
# AIMessage("```sql
# update Orders <continue>

# You can add syntax highlighting of the outputs via Markdown
using Markdown
msg.content |> Markdown.parse

Notice that we enforce max_tokens = 2500. That's because OpenAI seems to default to ~300 tokens, which provides incomplete outputs. Hence, we set this value to 2500 as a default. If you still get truncated outputs, increase this value.

source


# PromptingTools.aiscanMethod.
julia
aiscan(tracer_schema::AbstractTracerSchema, prompt::ALLOWED_PROMPT_TYPE;
    tracer_kwargs = NamedTuple(), model = "", kwargs...)

Wraps the normal aiscan call in a tracing/callback system. Use tracer_kwargs to provide any information necessary to the tracer/callback system only (eg, parent_id, thread_id, run_id).

Logic:

  • calls initialize_tracer

  • calls aiscan (with the tracer_schema.schema)

  • calls finalize_tracer

source


# PromptingTools.aitemplatesFunction.
julia
aitemplates

Find easily the most suitable templates for your use case.

You can search by:

  • query::Symbol which looks only for partial matches in the template name

  • query::AbstractString which looks for partial matches in the template name or description

  • query::Regex which looks for matches in the template name, description or any of the message previews

Keyword Arguments

  • limit::Int limits the number of returned templates (Defaults to 10)

Examples

Find available templates with aitemplates:

julia
tmps = aitemplates("JuliaExpertAsk")
# Will surface one specific template
# 1-element Vector{AITemplateMetadata}:
# PromptingTools.AITemplateMetadata
#   name: Symbol JuliaExpertAsk
#   description: String "For asking questions about Julia language. Placeholders: `ask`"
#   version: String "1"
#   wordcount: Int64 237
#   variables: Array{Symbol}((1,))
#   system_preview: String "You are a world-class Julia language programmer with the knowledge of the latest syntax. Your commun"
#   user_preview: String "# Question\n\n{{ask}}"
#   source: String ""

The above gives you a good idea of what the template is about, what placeholders are available, and how much it would cost to use it (=wordcount).

Search for all Julia-related templates:

julia
tmps = aitemplates("Julia")
# 2-element Vector{AITemplateMetadata}... -> more to come later!

If you are on VSCode, you can leverage nice tabular display with vscodedisplay:

julia
using DataFrames
tmps = aitemplates("Julia") |> DataFrame |> vscodedisplay

I have my selected template, how do I use it? Just use the "name" in aigenerate or aiclassify like you see in the first example!

source


# PromptingTools.aitemplatesMethod.

Find the top-limit templates whose name or description fields partially match the query_key::String in TEMPLATE_METADATA.

source


# PromptingTools.aitemplatesMethod.

Find the top-limit templates where provided query_key::Regex matches either of name, description or previews or User or System messages in TEMPLATE_METADATA.

source


# PromptingTools.aitemplatesMethod.

Find the top-limit templates whose name::Symbol exactly matches the query_name::Symbol in TEMPLATE_METADATA.

source


# PromptingTools.aitoolsMethod.
julia
aitools(prompt_schema::AbstractAnthropicSchema, prompt::ALLOWED_PROMPT_TYPE;
    tools::Union{Type, Function, Method, AbstractTool, Vector} = Tool[],
    verbose::Bool = true,
    api_key::String = ANTHROPIC_API_KEY,
    model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    image_path::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing,
    cache::Union{Nothing, Symbol} = nothing,
    betas::Union{Nothing, Vector{Symbol}} = nothing,
    http_kwargs::NamedTuple = (retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = (;
        tool_choice = nothing),
    kwargs...)

Calls chat completion API with an optional tool call signature. It can receive both tools and standard string-based content. Ideal for agentic workflows with more complex cognitive architectures.

Difference to aigenerate: Response can be a tool call (structured)

Differences to aiextract: Can provide any number of tools (including Functions!) and then respond with the tool call's output.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Default to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • tools: A vector of tools to be used in the conversation. Can be a vector of types, instances of AbstractTool, or a mix of both.

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the Anthropic API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history.

  • no_system_message::Bool = false: Whether to exclude the system message from the conversation history.

  • image_path::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing: A path to a local image file, or a vector of paths to local image files. Always attaches images to the latest user message.

  • cache: A symbol representing the caching strategy to be used. Currently only nothing (no caching), :system, :tools, :last, :all_but_last, and :all are supported. Note: the COST estimate will be wrong (it ignores the caching).

    • :system: Mark only the system message as cacheable. Best default if you have large system message and you will be sending short conversations (no replies / multi-turn conversations).

    • :all: Mark SYSTEM, one before last and LAST user message as cacheable. Best for multi-turn conversations (you write cache point as "last" and it will be read in the next turn as "preceding" cache mark).

    • :last: Mark only the last message as cacheable. Use ONLY if you want to send the SAME REQUEST multiple times (and want to save up to the last USER message). This will not work for multi-turn conversations, as the "last" message keeps moving.

    • :all_but_last: Mark SYSTEM and one before LAST USER message. Use if you have a longer conversation that you want to re-use, but you will NOT CONTINUE it (no subsequent messages/follow-ups).

    • In short, use :all for multi-turn conversations, :system for repeated single-turn conversations with same system message, and :all_but_last for longer conversations that you want to re-use, but not continue.

  • betas::Union{Nothing, Vector{Symbol}} = nothing: A vector of symbols representing the beta features to be used. See ?anthropic_extra_headers for details.

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments. Several important arguments are highlighted below:

    • tool_choice: The choice of tool mode. Can be "auto", "exact", or the name of one of the provided tools. Defaults to nothing, which translates to "auto".

Example

julia
## Let's define a tool
get_weather(location, date) = "The weather in $location on $date is 70 degrees."

msg = aitools("What's the weather in Tokyo on May 3rd, 2023?";
    tools = get_weather, model = "claudeh")
PT.execute_tool(get_weather, msg.tool_calls[1].args)
# "The weather in Tokyo on 2023-05-03 is 70 degrees."

# Ignores the tool
msg = aitools("What's your name?";
    tools = get_weather, model = "claudeh")
# I don't have a personal name, but you can call me your AI assistant!

How to have a multi-turn conversation with tools:

julia
conv = aitools("What's the weather in Tokyo on May 3rd, 2023?";
    tools = get_weather, return_all = true, model = "claudeh")

tool_msg = conv[end].tool_calls[1] # there can be multiple tool calls requested!!

# Execute the tool and store the output in the tool message's content
tool_msg.content = PT.execute_tool(get_weather, tool_msg.args)

# Add the tool message to the conversation
push!(conv, tool_msg)

# Call LLM again with the updated conversation
conv = aitools(
    "And in New York?"; tools = get_weather, return_all = true, conversation = conv, model = "claudeh")
# 6-element Vector{AbstractMessage}:
# SystemMessage("Act as a helpful AI assistant")
# UserMessage("What's the weather in Tokyo on May 3rd, 2023?")
# AIToolRequest("-"; Tool Requests: 1)
# ToolMessage("The weather in Tokyo on 2023-05-03 is 70 degrees.")
# UserMessage("And in New York?")
# AIToolRequest("-"; Tool Requests: 1)

Using the new Computer Use beta feature:

julia
# Define tools (and associated functions to call)
tool_map = Dict("bash" => PT.ToolRef(; ref=:bash, callable=bash_tool),
    "computer" => PT.ToolRef(; ref=:computer, callable=computer_tool,
        extras=Dict("display_width_px" => 1920, "display_height_px" => 1080)),
    "str_replace_editor" => PT.ToolRef(; ref=:str_replace_editor, callable=edit_tool))

msg = aitools(prompt; tools=collect(values(tool_map)), model="claude", betas=[:computer_use])

PT.pprint(msg)
# --------------------
# AI Tool Request
# --------------------
# Tool Request: computer, args: Dict{Symbol, Any}(:action => "screenshot")

source


# PromptingTools.aitoolsMethod.
julia
aitools(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE;
    tools::Union{Type, Function, Method, AbstractTool, Vector} = Tool[],
    verbose::Bool = true,
    api_key::String = OPENAI_API_KEY,
    model::String = MODEL_CHAT,
    return_all::Bool = false, dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    image_path::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing,
    http_kwargs::NamedTuple = (retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = (;
        tool_choice = nothing),
    strict::Union{Nothing, Bool} = nothing,
    json_mode::Union{Nothing, Bool} = nothing,
    name_user::Union{Nothing, String} = nothing,
    name_assistant::Union{Nothing, String} = nothing,
    kwargs...)

Calls chat completion API with an optional tool call signature. It can receive both tools and standard string-based content. Ideal for agentic workflows with more complex cognitive architectures.

Difference to aigenerate: Response can be a tool call (structured)

Differences to aiextract: Can provide any number of tools (including Functions!) and then respond with the tool call's output.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Default to PROMPT_SCHEMA = OpenAISchema)

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate

  • tools: A vector of tools to be used in the conversation. Can be a vector of types, instances of AbstractTool, or a mix of both.

  • verbose: A boolean indicating whether to print additional information.

  • api_key: A string representing the API key for accessing the OpenAI API.

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • return_all: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run: If true, skips sending the messages to the model (for debugging, often used with return_all=true).

  • conversation: An optional vector of AbstractMessage objects representing the conversation history.

  • no_system_message::Bool = false: Whether to exclude the system message from the conversation history.

  • image_path: A path to a local image file, or a vector of paths to local image files. Always attaches images to the latest user message.

  • name_user: The name of the user in the conversation history. Defaults to "User".

  • name_assistant: The name of the assistant in the conversation history. Defaults to "Assistant".

  • http_kwargs: A named tuple of HTTP keyword arguments.

  • api_kwargs: A named tuple of API keyword arguments. Several important arguments are highlighted below:

    • tool_choice: The choice of tool mode. Can be "auto", "exact", or the name of one of the provided tools. Defaults to nothing, which translates to "auto".

    • response_format: The format of the response. Can be "json_schema" for JSON mode, or "text" for standard text output. Defaults to "text".

  • strict: Whether to enforce strict mode for the schema. Defaults to nothing.

  • json_mode: Whether to enforce JSON mode for the schema. Defaults to nothing.

Example

julia
## Let's define a tool
get_weather(location, date) = "The weather in $location on $date is 70 degrees."

## JSON mode request
msg = aitools("What's the weather in Tokyo on May 3rd, 2023?";
    tools = get_weather,
    json_mode = true)
PT.execute_tool(get_weather, msg.tool_calls[1].args)
# "The weather in Tokyo on 2023-05-03 is 70 degrees."

# Function calling request
msg = aitools("What's the weather in Tokyo on May 3rd, 2023?";
    tools = get_weather)
PT.execute_tool(get_weather, msg.tool_calls[1].args)
# "The weather in Tokyo on 2023-05-03 is 70 degrees."

# Ignores the tool
msg = aitools("What's your name?";
    tools = get_weather)
# I don't have a personal name, but you can call me your AI assistant!

How to have a multi-turn conversation with tools:

julia
conv = aitools("What's the weather in Tokyo on May 3rd, 2023?";
    tools = get_weather, return_all = true)

tool_msg = conv[end].tool_calls[1] # there can be multiple tool calls requested!!

# Execute the tool and store the output in the tool message's content
tool_msg.content = PT.execute_tool(get_weather, tool_msg.args)

# Add the tool message to the conversation
push!(conv, tool_msg)

# Call LLM again with the updated conversation
conv = aitools(
    "And in New York?"; tools = get_weather, return_all = true, conversation = conv)
# 6-element Vector{AbstractMessage}:
# SystemMessage("Act as a helpful AI assistant")
# UserMessage("What's the weather in Tokyo on May 3rd, 2023?")
# AIToolRequest("-"; Tool Requests: 1)
# ToolMessage("The weather in Tokyo on 2023-05-03 is 70 degrees.")
# UserMessage("And in New York?")
# AIToolRequest("-"; Tool Requests: 1)

source


# PromptingTools.aitoolsMethod.
julia
aitools(tracer_schema::AbstractTracerSchema, prompt::ALLOWED_PROMPT_TYPE;
    tracer_kwargs = NamedTuple(), model = "", kwargs...)

Wraps the normal aitools call in a tracing/callback system. Use tracer_kwargs to provide any information necessary to the tracer/callback system only (eg, parent_id, thread_id, run_id).

Logic:

  • calls initialize_tracer

  • calls aitools (with the tracer_schema.schema)

  • calls finalize_tracer

source


# PromptingTools.align_tracer!Method.

Aligns multiple tracers in the vector to have the same Parent and Thread IDs as the first item.

source


# PromptingTools.align_tracer!Method.

Aligns the tracer message, updating the parent_id, thread_id. Often used to align multiple tracers in the vector to have the same IDs.
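
For example, a minimal sketch (assuming a TracerSchema-wrapped call as in the other tracer docs):

julia
using PromptingTools
const PT = PromptingTools

wrap_schema = PT.TracerSchema(PT.OpenAISchema())
conv = aigenerate(wrap_schema, "Say hi!"; return_all = true)

# Align all traced messages to the Parent/Thread IDs of the first message
PT.align_tracer!(conv)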

source


# PromptingTools.annotate!Method.
julia
annotate!(messages::AbstractVector{<:AbstractMessage}, content; kwargs...)
annotate!(message::AbstractMessage, content; kwargs...)

Add an annotation message to a vector of messages or wrap a single message in a vector with an annotation. The annotation is always inserted after any existing annotation messages.

Arguments

  • messages: Vector of messages or single message to annotate

  • content: Content of the annotation

  • kwargs...: Additional fields for the AnnotationMessage (extras, tags, comment)

Returns

Vector{AbstractMessage} with the annotation message inserted

Example

julia
messages = [SystemMessage("Assistant"), UserMessage("Hello")]
annotate!(messages, "This is important"; tags=[:important], comment="For review")

source


# PromptingTools.anthropic_apiFunction.
julia
anthropic_api(
    prompt_schema::AbstractAnthropicSchema,
    messages::Vector{<:AbstractDict{String, <:Any}} = Vector{Dict{String, Any}}();
    api_key::AbstractString = ANTHROPIC_API_KEY,
    system::Union{Nothing, AbstractString, AbstractVector{<:AbstractDict}} = nothing,
    endpoint::String = "messages",
    max_tokens::Int = 2048,
    model::String = "claude-3-haiku-20240307", http_kwargs::NamedTuple = NamedTuple(),
    stream::Bool = false,
    url::String = "https://api.anthropic.com/v1",
    cache::Union{Nothing, Symbol} = nothing,
    betas::Union{Nothing, Vector{Symbol}} = nothing,
    kwargs...)

Simple wrapper for a call to Anthropic API.

Keyword Arguments

  • prompt_schema: Defines which prompt template should be applied.

  • messages: A vector of rendered message dictionaries (Dict{String, Any}) to send to the model

  • system: An optional string representing the system message for the AI conversation. If not provided, a default message will be used.

  • endpoint: The API endpoint to call; only "messages" is currently supported. Defaults to "messages".

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • max_tokens: The maximum number of tokens to generate. Defaults to 2048.

  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to empty NamedTuple.

  • stream: A boolean indicating whether to stream the response. Defaults to false.

  • url: The URL of the Anthropic API. Defaults to "https://api.anthropic.com/v1".

  • cache: A symbol representing the caching strategy to be used. Currently only nothing (no caching), :system, :tools, :last, :all_but_last, and :all are supported.

  • betas: A vector of symbols representing the beta features to be used. See BETA_HEADERS_ANTHROPIC for the allowed features (eg, :tools, :cache, :long_output, :computer_use).

  • kwargs: Prompt variables to be used to fill the prompt/template
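
A low-level sketch for illustration only (normally you would call aigenerate with an Anthropic model instead; the fields on the returned object are indicative):

julia
using PromptingTools
const PT = PromptingTools

# Messages must already be rendered into the Anthropic dictionary format
messages = [Dict("role" => "user", "content" => "Say hi!")]
resp = PT.anthropic_api(PT.AnthropicSchema(), messages;
    api_key = ENV["ANTHROPIC_API_KEY"], max_tokens = 256)
# resp.response holds the parsed JSON body, resp.status the HTTP status code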

source


# PromptingTools.anthropic_extra_headersMethod.
julia
anthropic_extra_headers(;
    has_tools = false, has_cache = false, has_long_output = false,
    betas::Union{Nothing, Vector{Symbol}} = nothing)

Adds API version and beta headers to the request.

Kwargs / Beta headers

  • has_tools: Enables tools in the conversation.

  • has_cache: Enables prompt caching.

  • has_long_output: Enables long outputs (up to 8K tokens) with Anthropic's Sonnet 3.5.

  • betas: A vector of symbols representing the beta features to be used. Currently only :computer_use, :long_output, :tools and :cache are supported.

Refer to BETA_HEADERS_ANTHROPIC for the allowed beta features.
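
For illustration (the exact beta header strings shown are assumptions based on Anthropic's published beta names):

julia
using PromptingTools
const PT = PromptingTools

PT.anthropic_extra_headers(; has_tools = true, has_cache = true)
# Indicative output:
#  "anthropic-version" => "2023-06-01"
#  "anthropic-beta" => "tools-2024-04-04"
#  "anthropic-beta" => "prompt-caching-2024-07-31"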

source


# PromptingTools.auth_headerMethod.
julia
auth_header(api_key::Union{Nothing, AbstractString};
    bearer::Bool = true,
    x_api_key::Bool = false,
    extra_headers::AbstractVector = Vector{
        Pair{String, String},
    }[],
    kwargs...)

Creates the authentication headers for any API request. Assumes that the communication is done in JSON format.

Arguments

  • api_key::Union{Nothing, AbstractString}: The API key to be used for authentication. If nothing, no authentication is used.

  • bearer::Bool: Provide the API key in the Authorization: Bearer ABC format. Defaults to true.

  • x_api_key::Bool: Provide the API key in the x-api-key: ABC header format. Defaults to false.
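
A quick sketch of typical output (the exact header list is indicative):

julia
using PromptingTools
const PT = PromptingTools

PT.auth_header("<my_api_key>")
# Indicative output:
#  "Authorization" => "Bearer <my_api_key>"
#  "Content-Type" => "application/json"
#  "Accept" => "application/json"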

source


# PromptingTools.batch_start_indexMethod.
julia
batch_start_index(array_length::Integer, n::Integer, batch_size::Integer) -> Integer

Compute the starting index for retrieving the most recent data, adjusting in blocks of batch_size. The function accumulates messages until hitting a batch boundary, then jumps to the next batch.

For example, with n=20 and batch_size=10:

  • At length 90-99: returns 80 (allowing accumulation of 11-20 messages)

  • At length 100-109: returns 90 (allowing accumulation of 11-20 messages)

  • At length 110: returns 100 (resetting to 11 messages)
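
A minimal illustration using the boundary values quoted above:

julia
using PromptingTools: batch_start_index

batch_start_index(90, 20, 10)   # 80 -> keeps messages 80:90 (11 messages)
batch_start_index(100, 20, 10)  # 90 -> keeps messages 90:100 (11 messages)
batch_start_index(110, 20, 10)  # 100 -> resets to messages 100:110 (11 messages)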

source


# PromptingTools.build_template_metadataFunction.
julia
build_template_metadata(
    template::AbstractVector{<:AbstractMessage}, template_name::Symbol,
    metadata_msgs::AbstractVector{<:MetadataMessage} = MetadataMessage[]; max_length::Int = 100)

Builds AITemplateMetadata for a given template based on the messages in template and other information.

AITemplateMetadata is a helper struct for easy searching and reviewing of templates via aitemplates().

Note: Assumes that there is only ever one UserMessage and SystemMessage (concatenates them together)
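
A small sketch (the template and name below are hypothetical):

julia
using PromptingTools
const PT = PromptingTools

tpl = PT.create_template("You are a poet.", "Write a poem about {{topic}}")
meta = PT.build_template_metadata(tpl, :PoetTemplate)
meta.variables  # [:topic]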

source


# PromptingTools.call_costMethod.
julia
call_cost(prompt_tokens::Int, completion_tokens::Int, model::String;
    cost_of_token_prompt::Number = get(MODEL_REGISTRY,
        model,
        (; cost_of_token_prompt = 0.0)).cost_of_token_prompt,
    cost_of_token_generation::Number = get(MODEL_REGISTRY, model,
        (; cost_of_token_generation = 0.0)).cost_of_token_generation)

call_cost(msg, model::String)

Calculate the cost of a call based on the number of tokens in the message and the cost per token. If the cost is already calculated (in msg.cost), it will not be re-calculated.

Arguments

  • prompt_tokens::Int: The number of tokens used in the prompt.

  • completion_tokens::Int: The number of tokens used in the completion.

  • model::String: The name of the model to use for determining token costs. If the model is not found in MODEL_REGISTRY, default costs are used.

  • cost_of_token_prompt::Number: The cost per prompt token. Defaults to the cost in MODEL_REGISTRY for the given model, or 0.0 if the model is not found.

  • cost_of_token_generation::Number: The cost per generation token. Defaults to the cost in MODEL_REGISTRY for the given model, or 0.0 if the model is not found.

Returns

  • Number: The total cost of the call.

Examples

julia
# Assuming MODEL_REGISTRY is set up with appropriate costs
MODEL_REGISTRY = Dict(
    "model1" => (cost_of_token_prompt = 0.05, cost_of_token_generation = 0.10),
    "model2" => (cost_of_token_prompt = 0.07, cost_of_token_generation = 0.02)
)

cost1 = call_cost(10, 20, "model1")

# from message
msg1 = AIMessage(;tokens=[10, 20])  # 10 prompt tokens, 20 generation tokens
cost1 = call_cost(msg1, "model1")
# cost1 = 10 * 0.05 + 20 * 0.10 = 2.5

# Using custom token costs
cost2 = call_cost(10, 20, "model3"; cost_of_token_prompt = 0.08, cost_of_token_generation = 0.12)
# cost2 = 10 * 0.08 + 20 * 0.12 = 3.2

source


# PromptingTools.call_cost_alternativeMethod.

call_cost_alternative()

Alternative cost calculation. Used to calculate cost of image generation with DALL-E 3 and similar.

source


# PromptingTools.configure_callback!Method.
julia
configure_callback!(cb::StreamCallback, schema::AbstractPromptSchema;
    api_kwargs...)

Configures the callback cb for streaming with a given prompt schema.

If no cb.flavor is provided, adjusts the flavor and the provided api_kwargs as necessary. Eg, for most schemas, we add kwargs like stream = true to the api_kwargs.

If cb.flavor is provided, both callback and api_kwargs are left unchanged! You need to configure them yourself!
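
In practice, you rarely call this function directly; passing a StreamCallback to an ai* function triggers the configuration internally. A minimal sketch:

julia
using PromptingTools
const PT = PromptingTools

# The flavor and the necessary api_kwargs (eg, stream = true) are configured
# automatically for the chosen schema when the callback is passed along
cb = PT.StreamCallback()
msg = aigenerate("Write a haiku about Julia"; streamcallback = cb)
cb.chunks  # the received stream chunks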

source


# PromptingTools.create_templateMethod.
julia
create_template(; user::AbstractString, system::AbstractString="Act as a helpful AI assistant.", 
    load_as::Union{Nothing, Symbol, AbstractString} = nothing)

create_template(system::AbstractString, user::AbstractString, 
    load_as::Union{Nothing, Symbol, AbstractString} = nothing)

Creates a simple template with a user and system message. Convenience function to prevent writing [PT.UserMessage(...), ...]

Arguments

  • system::AbstractString: The system message. Usually defines the personality, style, instructions, output format, etc.

  • user::AbstractString: The user message. Usually defines the input, query, request, etc.

  • load_as::Union{Nothing, Symbol, AbstractString}: If provided, loads the template into the TEMPLATE_STORE under the provided name load_as. If nothing, does not load the template.

Use double handlebar placeholders (eg, {{name}}) to define variables that can be replaced by the kwargs during the AI call (see example).

Returns a vector of SystemMessage and UserMessage objects. If load_as is provided, it registers the template in the TEMPLATE_STORE and TEMPLATE_METADATA as well.

Examples

Let's generate a quick template for a simple conversation (only one placeholder: name)

julia
# first system message, then user message (or use kwargs)
tpl=PT.create_template("You must speak like a pirate", "Say hi to {{name}}")

## 2-element Vector{PromptingTools.AbstractChatMessage}:
## PromptingTools.SystemMessage("You must speak like a pirate")
##  PromptingTools.UserMessage("Say hi to {{name}}")

You can immediately use this template in ai* functions:

julia
aigenerate(tpl; name="Jack Sparrow")
# Output: AIMessage("Arr, me hearty! Best be sending me regards to Captain Jack Sparrow on the salty seas! May his compass always point true to the nearest treasure trove. Yarrr!")

If you're interested in saving the template in the template registry, jump to the end of these examples!

If you want to save it in your project folder:

julia
PT.save_template("templates/GreatingPirate.json", tpl; version="1.0") # optionally, add description

It will be saved and accessed under its basename, ie, GreatingPirate.

Now you can load it like all the other templates (provide the template directory):

julia
PT.load_templates!("templates") # it will remember the folder after the first run
# Note: If you save it again, overwrite it, etc., you need to explicitly reload all templates again!

You can verify that your template is loaded with a quick search for "pirate":

julia
aitemplates("pirate")

## 1-element Vector{AITemplateMetadata}:
## PromptingTools.AITemplateMetadata
##   name: Symbol GreatingPirate
##   description: String ""
##   version: String "1.0"
##   wordcount: Int64 46
##   variables: Array{Symbol}((1,))
##   system_preview: String "You must speak like a pirate"
##   user_preview: String "Say hi to {{name}}"
##   source: String ""

Now you can use it like any other template (notice it's a symbol, so :GreatingPirate):

julia
aigenerate(:GreatingPirate; name="Jack Sparrow")
# Output: AIMessage("Arr, me hearty! Best be sending me regards to Captain Jack Sparrow on the salty seas! May his compass always point true to the nearest treasure trove. Yarrr!")

If you do not need to save this template as a file, but you want to make it accessible in the template store for all ai* functions, you can use the load_as (= template name) keyword argument:

julia
# this will not only create the template, but also register it for immediate use
tpl=PT.create_template("You must speak like a pirate", "Say hi to {{name}}"; load_as="GreatingPirate")

# you can now use it like any other template
aigenerate(:GreatingPirate; name="Jack Sparrow")

source


# PromptingTools.decode_choicesMethod.
julia
decode_choices(schema::OpenAISchema,
    choices::AbstractVector{<:AbstractString},
    msg::AIMessage; model::AbstractString,
    token_ids_map::Union{Nothing, Dict{<:AbstractString, <:Integer}} = nothing,
    kwargs...)

Decodes the underlying AIMessage against the original choices to lookup what the category name was.

If decoding fails, it will return the message with msg.content == nothing.
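
A sketch of the round-trip (assuming the model answered with the enumerated choice "1"; the model name is illustrative):

julia
using PromptingTools
const PT = PromptingTools

choices = ["animal", "plant"]
msg = PT.AIMessage(; content = "1")
decoded = PT.decode_choices(PT.OpenAISchema(), choices, msg; model = "gpt-4-turbo")
decoded.content  # "animal" (or nothing if decoding failed)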

source


# PromptingTools.detect_base_main_overridesMethod.
julia
detect_base_main_overrides(code_block::AbstractString)

Detects if a given code block overrides any Base or Main methods.

Returns a tuple of a boolean and a vector of the overridden methods.
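
For example (the output shape is indicative):

julia
using PromptingTools: detect_base_main_overrides

code = """
function Base.show(io::IO, x::MyType)
    print(io, "MyType")
end
"""
detect_base_main_overrides(code)
# Indicative output: (true, ["Base.show"])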

source


# PromptingTools.distance_longest_common_subsequenceMethod.
julia
distance_longest_common_subsequence(
    input1::AbstractString, input2::AbstractString)

distance_longest_common_subsequence(
    input1::AbstractString, input2::AbstractVector{<:AbstractString})

Measures distance between two strings using the length of the longest common subsequence (ie, the lower the number, the better the match). Perfect match is distance = 0.0

Convenience wrapper around length_longest_common_subsequence to normalize the distances to the 0-1 range. There is also a dispatch for comparing a string vs an array of strings.

Notes

  • Use argmin and minimum to find the position of the closest match and the distance, respectively.

  • Matching with an empty string will always return 1.0 (worst match), even if the other string is empty as well (safety mechanism to avoid division by zero).

Arguments

  • input1::AbstractString: The first string to compare.

  • input2::AbstractString: The second string to compare.

Example

You can also use it to find the closest context for some AI generated summary/story:

julia
context = ["The enigmatic stranger vanished as swiftly as a wisp of smoke, leaving behind a trail of unanswered questions.",
    "Beneath the shimmering moonlight, the ocean whispered secrets only the stars could hear.",
    "The ancient tree stood as a silent guardian, its gnarled branches reaching for the heavens.",
    "The melody danced through the air, painting a vibrant tapestry of emotions.",
    "Time flowed like a relentless river, carrying away memories and leaving imprints in its wake."]

story = """
    Beneath the shimmering moonlight, the ocean whispered secrets only the stars could hear.

    Under the celestial tapestry, the vast ocean whispered its secrets to the indifferent stars. Each ripple, a murmured confidence, each wave, a whispered lament. The glittering celestial bodies listened in silent complicity, their enigmatic gaze reflecting the ocean's unspoken truths. The cosmic dance between the sea and the sky, a symphony of shared secrets, forever echoing in the ethereal expanse.
    """

dist = distance_longest_common_subsequence(story, context)
@info "The closest context to the query: \"$(first(story,20))...\" is: \"$(context[argmin(dist)])\" (distance: $(minimum(dist)))"

source


# PromptingTools.encode_choicesMethod.
julia
encode_choices(schema::OpenAISchema, choices::AbstractVector{<:AbstractString};
    model::AbstractString,
    token_ids_map::Union{Nothing, Dict{<:AbstractString, <:Integer}} = nothing,
    kwargs...)

encode_choices(schema::OpenAISchema, choices::AbstractVector{T};
    model::AbstractString,
    token_ids_map::Union{Nothing, Dict{<:AbstractString, <:Integer}} = nothing,
    kwargs...) where {T <: Tuple{<:AbstractString, <:AbstractString}}

Encode the choices into an enumerated list that can be interpolated into the prompt and creates the corresponding logit biases (to choose only from the selected tokens).

Optionally, can be a vector of tuples, where the first element is the choice and the second is the description.

There can be at most 40 choices provided.

Arguments

  • schema::OpenAISchema: The OpenAISchema object.

  • choices::AbstractVector{<:Union{AbstractString,Tuple{<:AbstractString, <:AbstractString}}}: The choices to be encoded, represented as a vector of the choices directly, or tuples where each tuple contains a choice and its description.

  • model::AbstractString: The model to use for encoding. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • token_ids_map::Union{Nothing, Dict{<:AbstractString, <:Integer}} = nothing: A dictionary mapping custom token IDs to their corresponding integer values. If nothing, it will use the default token IDs for the given model.

  • kwargs...: Additional keyword arguments.

Returns

  • choices_prompt::AbstractString: The encoded choices as a single string, separated by newlines.

  • logit_bias::Dict: The logit bias dictionary, where the keys are the token IDs and the values are the bias values.

  • decode_ids::AbstractVector{<:AbstractString}: The decoded IDs of the choices.

Examples

julia
choices_prompt, logit_bias, _ = PT.encode_choices(PT.OpenAISchema(), ["true", "false"])
choices_prompt # Output: "true for \"true\"\nfalse for \"false\""
logit_bias # Output: Dict(837 => 100, 905 => 100)

choices_prompt, logit_bias, _ = PT.encode_choices(PT.OpenAISchema(), ["animal", "plant"])
choices_prompt # Output: "1. \"animal\"\n2. \"plant\""
logit_bias # Output: Dict(16 => 100, 17 => 100)

Or choices with descriptions:

julia
choices_prompt, logit_bias, _ = PT.encode_choices(PT.OpenAISchema(), [("A", "any animal or creature"), ("P", "for any plant or tree"), ("O", "for everything else")])
choices_prompt # Output: "1. \"A\" for any animal or creature\n2. \"P\" for any plant or tree\n3. \"O\" for everything else"
logit_bias # Output: Dict(16 => 100, 17 => 100, 18 => 100)

source


# PromptingTools.eval!Method.
julia
eval!(cb::AbstractCodeBlock;
    safe_eval::Bool = true,
    capture_stdout::Bool = true,
    prefix::AbstractString = "",
    suffix::AbstractString = "")

Evaluates a code block cb in-place. It runs automatically when AICode is instantiated with a String.

Check the outcome of evaluation with Base.isvalid(cb). If it returns true, the provided code block has executed successfully.

Steps:

  • If cb::AICode has not been evaluated, cb.success = nothing. After the evaluation it will be either true or false depending on the outcome

  • Parse the text in cb.code

  • Evaluate the parsed expression

  • Capture outputs of the evaluated expression in cb.output

  • [OPTIONAL] Capture any stdout outputs (eg, test failures) in cb.stdout

  • If any error exception is raised, it is saved in cb.error

  • Finally, if all steps were successful, cb.success is set to true

Keyword Arguments

  • safe_eval::Bool: If true, we first check for any Pkg operations (eg, installing new packages) and missing imports, then the code will be evaluated inside a bespoke scratch module (not to change any user variables)

  • capture_stdout::Bool: If true, we capture any stdout outputs (eg, test failures) in cb.stdout

  • prefix::AbstractString: A string to be prepended to the code block before parsing and evaluation. Useful to add some additional code definition or necessary imports. Defaults to an empty string.

  • suffix::AbstractString: A string to be appended to the code block before parsing and evaluation. Useful to check that tests pass or that an example executes. Defaults to an empty string.
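
Since AICode runs eval! automatically on construction, a minimal sketch looks like:

julia
using PromptingTools: AICode

cb = AICode("mysum(a, b) = a + b; mysum(1, 2)")
isvalid(cb)  # true -> parsed and evaluated successfully
cb.output    # 3
cb.stdout    # any captured printed output (when capture_stdout=true)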

source


# PromptingTools.execute_toolFunction.
julia
execute_tool(f::Function, args::AbstractDict{Symbol, <:Any},
    context::AbstractDict{Symbol, <:Any} = Dict{Symbol, Any}();
    throw_on_error::Bool = true, unused_as_kwargs::Bool = false,
    kwargs...)

Executes a function with the provided arguments.

Picks the function arguments in the following order:

  • :context refers to the context dictionary passed to the function.

  • Then it looks for the arguments in the context dictionary.

  • Then it looks for the arguments in the args dictionary.

Dictionaries are un-ordered, so we need to sort the arguments to match the function signature before passing them to the function.

Arguments

  • f::Function: The function to execute.

  • args::AbstractDict{Symbol, <:Any}: The arguments to pass to the function.

  • context::AbstractDict{Symbol, <:Any}: Optional context to pass to the function; argument values found in the context take priority over those in args.

  • throw_on_error::Bool: Whether to throw an error if the tool execution fails. Defaults to true.

  • unused_as_kwargs::Bool: Whether to pass unused arguments as keyword arguments. Defaults to false. Function must support keyword arguments!

  • kwargs...: Additional keyword arguments to pass to the function.

Example

julia
my_function(x, y) = x + y
execute_tool(my_function, Dict(:x => 1, :y => 2))
julia
get_weather(date, location) = "The weather in $location on $date is 70 degrees."
tool_map = PT.tool_call_signature(get_weather)

msg = aitools("What's the weather in Tokyo on May 3rd, 2023?";
    tools = collect(values(tool_map)))

PT.execute_tool(tool_map, PT.tool_calls(msg)[1])
# "The weather in Tokyo on 2023-05-03 is 70 degrees."

source


# PromptingTools.extract_code_blocksMethod.
julia
extract_code_blocks(markdown_content::String) -> Vector{String}

Extract Julia code blocks from a markdown string.

This function searches through the provided markdown content, identifies blocks of code specifically marked as Julia code (using the julia ... code fence patterns), and extracts the code within these blocks. The extracted code blocks are returned as a vector of strings, with each string representing one block of Julia code.

Note: Only the content within the code fences is extracted, and the code fences themselves are not included in the output.

See also: extract_code_blocks_fallback

Arguments

  • markdown_content::String: A string containing the markdown content from which Julia code blocks are to be extracted.

Returns

  • Vector{String}: A vector containing strings of extracted Julia code blocks. If no Julia code blocks are found, an empty vector is returned.

Examples

Example with a single Julia code block

julia
markdown_single = """
```julia
println("Hello, World!")
```
"""
extract_code_blocks(markdown_single)
# Output: ["println(\"Hello, World!\")"]

julia
# Example with multiple Julia code blocks
markdown_multiple = """
```julia
x = 5
```
Some text in between
```julia
y = x + 2
```
"""
extract_code_blocks(markdown_multiple)
# Output: ["x = 5", "y = x + 2"]

source


# PromptingTools.extract_code_blocks_fallbackMethod.
julia
extract_code_blocks_fallback(markdown_content::String, delim::AbstractString="\n```\n")

Extract Julia code blocks from a markdown string using a fallback method (splitting by the provided delimiter delim). Much more simplistic than extract_code_blocks and does not support nested code blocks.

It is often used as a fallback for smaller LLMs that forget to tag their code fences as julia.

Example

julia
code = """
```
println("hello")
```

Some text

```
println("world")
```
"""

# We extract text between triple backticks and check each blob if it looks like valid Julia code
code_parsed = extract_code_blocks_fallback(code) |> x -> filter(is_julia_code, x) |> x -> join(x, "\n")

source


# PromptingTools.extract_docstringMethod.

Extract the docstring from a type or function.

source


# PromptingTools.extract_function_nameMethod.
julia
extract_function_name(code_block::String) -> Union{String, Nothing}

Extract the name of a function from a given Julia code block. The function searches for two patterns:

  • The explicit function declaration pattern: function name(...) ... end

  • The concise function declaration pattern: name(...) = ...

If a function name is found, it is returned as a string. If no function name is found, the function returns nothing.

To capture all function names in the block, use extract_function_names.

Arguments

  • code_block::String: A string containing Julia code.

Returns

  • Union{String, Nothing}: The extracted function name or nothing if no name is found.

Example

julia
code = """
function myFunction(arg1, arg2)
    # Function body
end
"""
extract_function_name(code)
# Output: "myFunction"

source


# PromptingTools.extract_function_namesMethod.
julia
extract_function_names(code_block::AbstractString)

Extract one or more names of functions defined in a given Julia code block. The function searches for two patterns: - The explicit function declaration pattern: function name(...) ... end - The concise function declaration pattern: name(...) = ...

It always returns a vector of strings, even if only one function name is found. If no function names are found, the vector is empty.

For only one function name match, use extract_function_name.
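
For example:

julia
using PromptingTools: extract_function_names

code = """
function my_add(a, b)
    a + b
end
square(x) = x^2
"""
extract_function_names(code)
# Output: ["my_add", "square"]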

source


# PromptingTools.extract_image_attributesMethod.
julia
extract_image_attributes(image_url::AbstractString) -> Tuple{String, String}

Extracts the data type and base64-encoded data from a data URL.

Arguments

  • image_url::AbstractString: The data URL to be parsed.

Returns

Tuple{String, String}: A tuple containing the data type (e.g., "image/png") and the base64-encoded data.

Example

julia
image_url = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABQAA"
data_type, data = extract_image_attributes(image_url)
# data_type == "image/png"
# data == "iVBORw0KGgoAAAANSUhEUgAABQAA"

source


# PromptingTools.extract_julia_importsMethod.
julia
extract_julia_imports(input::AbstractString; base_or_main::Bool = false)

Detects any using or import statements in a given string and returns the package names as a vector of symbols.

base_or_main is a boolean that determines whether to return only the Base and Main imports (true) or to exclude them from the returned vector (false).
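
For example (with the default base_or_main = false, Base/Main-related imports are excluded):

julia
using PromptingTools: extract_julia_imports

extract_julia_imports("using DataFrames, Plots\nimport JSON3")
# Indicative output: [:DataFrames, :Plots, :JSON3]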

source


# PromptingTools.finalize_outputsMethod.
julia
finalize_outputs(prompt::ALLOWED_PROMPT_TYPE, conv_rendered::Any,
    msg::Union{Nothing, AbstractMessage, AbstractVector{<:AbstractMessage}};
    return_all::Bool = false,
    dry_run::Bool = false,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    kwargs...)

Finalizes the outputs of the ai* functions by either returning the conversation history or the last message.

Keyword arguments

  • return_all::Bool=false: If true, returns the entire conversation history, otherwise returns only the last message (the AIMessage).

  • dry_run::Bool=false: If true, does not send the messages to the model, but only renders the prompt with the given schema and replacement variables. Useful for debugging when you want to check the specific schema rendering.

  • conversation::AbstractVector{<:AbstractMessage}=[]: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • kwargs...: Variables to replace in the prompt template.

  • no_system_message::Bool=false: If true, the default system message is not included in the conversation history. Any existing system message is converted to a UserMessage.

source


# PromptingTools.finalize_tracerMethod.
julia
finalize_tracer(
    tracer_schema::AbstractTracerSchema, tracer, msg_or_conv::Union{
        AbstractMessage, AbstractVector{<:AbstractMessage}};
    tracer_kwargs = NamedTuple(), model = "", kwargs...)

Finalizes the call tracer with whatever is needed after the ai* calls. Use tracer_kwargs to provide any information necessary (eg, parent_id, thread_id, run_id).

In the default implementation, we convert all non-tracer messages into TracerMessage.

See also: meta, unwrap, SaverSchema, initialize_tracer

source


# PromptingTools.finalize_tracerMethod.
julia
finalize_tracer(
    tracer_schema::SaverSchema, tracer, msg_or_conv::Union{
        AbstractMessage, AbstractVector{<:AbstractMessage}};
    tracer_kwargs = NamedTuple(), model = "", kwargs...)

Finalizes the calltracer by saving the provided conversation msg_or_conv to the disk.

Default path is LOG_DIR/conversation__<first_msg_hash>__<time_received_str>.json, where LOG_DIR is set by user preferences or ENV variable (defaults to log/ in current working directory).

If you want to change the logging directory or the exact file name to log with, you can provide the following arguments to tracer_kwargs:

  • log_dir - used as the directory to save the log into when provided. Defaults to LOG_DIR if not provided.

  • log_file_path - used as the file name to save the log into when provided. This value overrules the log_dir and LOG_DIR if provided.

It can be composed with TracerSchema to also attach necessary metadata (see below).

Example

julia
wrap_schema = PT.SaverSchema(PT.TracerSchema(PT.OpenAISchema()))
conv = aigenerate(wrap_schema, :BlankSystemUser; system="You're a French-speaking assistant!",
    user="Say hi!", model="gpt-4", api_kwargs=(;temperature=0.1), return_all=true)

# conv is a vector of messages that will be saved to a JSON together with metadata about the template and api_kwargs

See also: meta, unwrap, TracerSchema, initialize_tracer

source


# PromptingTools.find_subsequence_positionsMethod.
julia
find_subsequence_positions(subseq, seq) -> Vector{Int}

Find all positions of a subsequence subseq within a larger sequence seq. Used to look up positions of code blocks in markdown.

This function scans the sequence seq and identifies all starting positions where the subsequence subseq is found. Both subseq and seq should be vectors of integers, typically obtained using codeunits on strings.

Arguments

  • subseq: A vector of integers representing the subsequence to search for.

  • seq: A vector of integers representing the larger sequence in which to search.

Returns

  • Vector{Int}: A vector of starting positions (1-based indices) where the subsequence is found in the sequence.

Examples

julia
find_subsequence_positions(codeunits("ab"), codeunits("cababcab")) # Returns [2, 5]

source


# PromptingTools.generate_structMethod.
julia
generate_struct(fields::Vector)

Generate a struct with the given fields. Fields can be specified simply as symbols (with default type String) or pairs of symbol and type. Field descriptions can be provided by adding a pair with the field name suffixed with "__description" (eg, :myfield__description => "My field description").

Returns: A tuple of (struct type, descriptions)

Examples

julia
Weather, descriptions = generate_struct(
    [:location,
     :temperature=>Float64,
     :temperature__description=>"Temperature in degrees Fahrenheit",
     :condition=>String,
     :condition__description=>"Current weather condition (e.g., sunny, rainy, cloudy)"
    ])

source


# PromptingTools.get_arg_namesMethod.

Get the argument names from a function, ignores keyword arguments!!

source


# PromptingTools.get_arg_namesMethod.

Get the argument names from a method, ignores keyword arguments!!

source


# PromptingTools.get_arg_typesMethod.

Get the argument types from a function, ignores keyword arguments!!

source


# PromptingTools.get_arg_typesMethod.

Get the argument types from a method, ignores keyword arguments!!

source


# PromptingTools.get_lastFunction.
julia
get_last(mem::ConversationMemory, n::Integer=20;
         batch_size::Union{Nothing,Integer}=nothing,
         verbose::Bool=false,
         explain::Bool=false)

Get the last n messages (always including the system message) with intelligent batching to preserve caching.

Arguments:

  • n::Integer: Maximum number of messages to return (default: 20)

  • batch_size::Union{Nothing,Integer}: If provided, ensures messages are truncated in fixed batches

  • verbose::Bool: Print detailed information about truncation

  • explain::Bool: Add explanation about truncation in the response

Returns: Vector{AbstractMessage} with the selected messages, always including:

  1. The system message (if present)

  2. First user message

  3. Messages up to n, respecting batch_size boundaries

Once you get your full conversation back, you can use append!(mem, conversation) to merge the new messages into the memory.

Examples:

julia
# Basic usage - get last 3 messages
mem = ConversationMemory()
push!(mem, SystemMessage("You are helpful"))
push!(mem, UserMessage("Hello"))
push!(mem, AIMessage("Hi!"))
push!(mem, UserMessage("How are you?"))
push!(mem, AIMessage("I'm good!"))
messages = get_last(mem, 3)

# Using batch_size for caching efficiency
messages = get_last(mem, 10; batch_size=5)  # Aligns to 5-message batches for caching

# Add explanation about truncation
messages = get_last(mem, 3; explain=true)  # Adds truncation note to first AI message so the model knows it's truncated

# Get verbose output about truncation
messages = get_last(mem, 3; verbose=true)  # Prints info about truncation

source


# PromptingTools.get_preferencesMethod.
julia
get_preferences(key::String)

Get preferences for PromptingTools. See ?PREFERENCES for more information.

See also: set_preferences!

Example

julia
PromptingTools.get_preferences("MODEL_CHAT")

source


# PromptingTools.ggi_generate_contentFunction.

Stub - to be extended in extension: GoogleGenAIPromptingToolsExt. ggi stands for GoogleGenAI

source


# PromptingTools.has_julia_promptMethod.

Checks if a given string has a Julia prompt (julia>) at the beginning of a line.

source


# PromptingTools.initialize_tracerMethod.
julia
initialize_tracer(
    tracer_schema::AbstractTracerSchema; model = "", tracer_kwargs = NamedTuple(),
    prompt::ALLOWED_PROMPT_TYPE = "", kwargs...)

Initializes the tracer/callback (if necessary). You can provide any keyword arguments in tracer_kwargs (eg, parent_id, thread_id, run_id). It is executed prior to the ai* calls.

By default it captures:

  • time_sent: the time the request was sent

  • model: the model to use

  • meta: a dictionary of additional metadata that is not part of the tracer itself

    • template_name: the template to use if any

    • template_version: the template version to use if any

    • expanded api_kwargs, ie, the keyword arguments to pass to the API call

In the default implementation, we just collect the necessary data to build the tracer object in finalize_tracer.

See also: meta, unwrap, TracerSchema, SaverSchema, finalize_tracer
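
A minimal sketch of the default flow (the exact contents of the returned tracer object are an implementation detail and may differ):

julia
schema = PromptingTools.TracerSchema(PromptingTools.OpenAISchema())
tracer = PromptingTools.initialize_tracer(schema;
    model = "gpt-4o", tracer_kwargs = (; thread_id = 42))
# tracer now carries, eg, time_sent and model; it is later consumed by finalize_tracer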

source


# PromptingTools.is_concrete_typeMethod.

Check if a type is concrete.

source


# PromptingTools.isextractedMethod.

Check if the object is an instance of AbstractExtractedData

source


# PromptingTools.last_messageMethod.

Helpful accessor that returns the last message in the conversation.

source


# PromptingTools.last_messageMethod.
julia
last_message(mem::ConversationMemory)

Get the last message in the conversation.

source


# PromptingTools.last_outputMethod.

Helpful accessor that returns the last generated output (msg.content) in the conversation (eg, the string/data in the last message).
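
For illustration, on a plain conversation vector:

julia
conv = [PromptingTools.UserMessage("Hi!"), PromptingTools.AIMessage("Hello there!")]
PromptingTools.last_message(conv)  # the AIMessage
PromptingTools.last_output(conv)   # "Hello there!"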

source


# PromptingTools.last_outputMethod.
julia
last_output(mem::ConversationMemory)

Get the last AI message in the conversation.

source


# PromptingTools.length_longest_common_subsequenceMethod.
julia
length_longest_common_subsequence(itr1::AbstractString, itr2::AbstractString)

Compute the length of the longest common subsequence between two string sequences (ie, the higher the number, the better the match).

Source: https://cn.julialang.org/LeetCode.jl/dev/democards/problems/problems/1143.longest-common-subsequence/

Arguments

  • itr1: The first sequence, eg, a String.

  • itr2: The second sequence, eg, a String.

Returns

The length of the longest common subsequence.

Examples

julia
text1 = "abc-abc----"
text2 = "___ab_c__abc"
length_longest_common_subsequence(text1, text2)
# Output: 6 (-> "abcabc")

It can be used to fuzzy match strings and find the similarity between them (Tip: normalize the match)

julia
commands = ["product recommendation", "emotions", "specific product advice", "checkout advice"]
query = "Which product can you recommend for me?"
let pos = argmax(length_longest_common_subsequence.(Ref(query), commands))
    dist = length_longest_common_subsequence(query, commands[pos])
    norm = dist / min(length(query), length(commands[pos]))
    @info "The closest command to the query: "$(query)" is: "$(commands[pos])" (distance: $(dist), normalized: $(norm))"
end

But it might be easier to use the convenience wrapper distance_longest_common_subsequence directly!

source


# PromptingTools.list_aliasesMethod.

Shows the Dictionary of model aliases in the registry. Add more with MODEL_ALIASES[alias] = model_name.

source


# PromptingTools.list_registryMethod.

Shows the list of models in the registry. Add more with register_model!.

source


# PromptingTools.load_api_keys!Method.

Loads API keys from environment variables and preferences.

source


# PromptingTools.load_conversationMethod.
julia
load_conversation(io_or_file::Union{IO, AbstractString})

Loads a conversation (messages) from io_or_file

source


# PromptingTools.load_templateMethod.
julia
load_template(io_or_file::Union{IO, AbstractString})

Loads messaging template from io_or_file and returns tuple of template messages and metadata.

source


# PromptingTools.load_templates!Function.
julia
load_templates!(dir_templates::Union{String, Nothing} = nothing;
    remember_path::Bool = true,
    remove_templates::Bool = isnothing(dir_templates),
    store::Dict{Symbol, <:Any} = TEMPLATE_STORE,
    metadata_store::Vector{<:AITemplateMetadata} = TEMPLATE_METADATA)

Loads templates from folder templates/ in the package root and stores them in TEMPLATE_STORE and TEMPLATE_METADATA.

Note: Automatically removes any existing templates and metadata from TEMPLATE_STORE and TEMPLATE_METADATA if remove_templates=true.

Arguments

  • dir_templates::Union{String, Nothing}: The directory path to load templates from. If nothing, uses the default list of paths. It is usually used only once "to register" a new template storage.

  • remember_path::Bool=true: If true, remembers the path for future refresh (in TEMPLATE_PATH).

  • remove_templates::Bool=isnothing(dir_templates): If true, removes any existing templates and metadata from store and metadata_store.

  • store::Dict{Symbol, <:Any}=TEMPLATE_STORE: The store to load the templates into.

  • metadata_store::Vector{<:AITemplateMetadata}=TEMPLATE_METADATA: The metadata store to load the metadata into.

Example

Load the default templates:

julia
PT.load_templates!() # no path needed

Load templates from a new custom path:

julia
PT.load_templates!("path/to/templates") # we will remember this path for future refresh

If you want to now refresh the default templates and the new path, just call load_templates!() without any arguments.

source


# PromptingTools.metaMethod.

Extracts the metadata dictionary from the tracer message or tracer-like object.

source


# PromptingTools.ollama_apiFunction.
julia
ollama_api(prompt_schema::Union{AbstractOllamaManagedSchema, AbstractOllamaSchema},
    prompt::Union{AbstractString, Nothing} = nothing;
    system::Union{Nothing, AbstractString} = nothing,
    messages::Vector{<:AbstractMessage} = AbstractMessage[],
    endpoint::String = "generate",
    model::String = "llama2", http_kwargs::NamedTuple = NamedTuple(),
    stream::Bool = false,
    url::String = "localhost", port::Int = 11434,
    kwargs...)

Simple wrapper for a call to Ollama API.

Keyword Arguments

  • prompt_schema: Defines which prompt template should be applied.

  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, or a vector of AbstractMessage.

  • system: An optional string representing the system message for the AI conversation. If not provided, a default message will be used.

  • endpoint: The API endpoint to call, only "generate" and "embeddings" are currently supported. Defaults to "generate".

  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.

  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to empty NamedTuple.

  • stream: A boolean indicating whether to stream the response. Defaults to false.

  • streamcallback::Any: A callback function to handle streaming responses. Can be simply stdout or a StreamCallback object. See ?StreamCallback for details.

  • url: The URL of the Ollama API. Defaults to "localhost".

  • port: The port of the Ollama API. Defaults to 11434.

  • kwargs: Prompt variables to be used to fill the prompt/template
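
A minimal sketch, assuming an Ollama server is running locally on the default port (the exact shape of the returned object may vary):

julia
resp = PromptingTools.ollama_api(PromptingTools.OllamaSchema(), nothing;
    messages = [PromptingTools.UserMessage("Say hi!")],
    model = "llama2")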

source


# PromptingTools.parse_toolMethod.
julia
parse_tool(datatype::Type, blob::AbstractString; kwargs...)

Parse the JSON blob into the specified datatype in try-catch mode.

If parsing fails, it tries to return the untyped JSON blob in a dictionary.
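
For illustration, a sketch with a throwaway struct (expected behavior; the fallback is an untyped dictionary):

julia
struct Flight
    number::String
    delay_minutes::Int
end
PromptingTools.parse_tool(Flight, "{\"number\": \"LH123\", \"delay_minutes\": 15}")
# expected: Flight("LH123", 15)
PromptingTools.parse_tool(Flight, "{\"number\": \"LH123\", \"delay_minutes\": \"oops\"}")
# parsing into Flight fails -> returns the untyped JSON as a dictionary instead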

source


# PromptingTools.pprintFunction.

Utility for pretty printing PromptingTools types in REPL.

source


# PromptingTools.pprintMethod.
julia
pprint(io::IO, conversation::AbstractVector{<:AbstractMessage})

Pretty print a vector of AbstractMessage to the given IO stream.

source


# PromptingTools.pprintMethod.
julia
pprint(io::IO, msg::AbstractMessage; text_width::Int = displaysize(io)[2])

Pretty print a single AbstractMessage to the given IO stream.

text_width is the width of the text to be displayed. If not provided, it defaults to the width of the given IO stream and adds newline separators as needed.
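
For illustration (pprint without an IO argument should default to stdout):

julia
conv = [PromptingTools.SystemMessage("You are a poet."),
    PromptingTools.UserMessage("Write a haiku about Julia.")]
PromptingTools.pprint(conv)                               # pretty-print the whole conversation
PromptingTools.pprint(stdout, conv[end]; text_width = 40) # wrap the last message at 40 characters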

source


# PromptingTools.previewFunction.

Utility for rendering the conversation (vector of messages) as markdown. REQUIRES the Markdown package to load the extension! See also pprint

source


# PromptingTools.push_conversation!Method.
julia
push_conversation!(conv_history, conversation::AbstractVector, max_history::Union{Int, Nothing})

Add a new conversation to the conversation history and resize the history if necessary.

This function appends a conversation to the conv_history, which is a vector of conversations. Each conversation is represented as a vector of AbstractMessage objects. After adding the new conversation, the history is resized according to the max_history parameter to ensure that the size of the history does not exceed the specified limit.

Arguments

  • conv_history: A vector that stores the history of conversations. Typically, this is PT.CONV_HISTORY.

  • conversation: The new conversation to be added. It should be a vector of AbstractMessage objects.

  • max_history: The maximum number of conversations to retain in the history. If Nothing, the history is not resized.

Returns

The updated conversation history.

Example

julia
new_conversation = aigenerate("Hello World"; return_all = true)
push_conversation!(PT.CONV_HISTORY, new_conversation, 10)

This is done automatically by the ai"" macros.

source


# PromptingTools.recursive_splitterMethod.
julia
recursive_splitter(text::AbstractString, separators::Vector{String}; max_length::Int=35000) -> Vector{String}

Split a given string text into chunks recursively using a series of separators, with each chunk having a maximum length of max_length (if it's achievable given the separators provided). This function is useful for splitting large documents or texts into smaller segments that are more manageable for processing, particularly for models or systems with limited context windows.

It was previously known as split_by_length.

This is similar to Langchain's RecursiveCharacterTextSplitter. To achieve the same behavior, use separators=["\n\n", "\n", " ", ""].

Arguments

  • text::AbstractString: The text to be split.

  • separators::Vector{String}: An ordered list of separators used to split the text. The function iteratively applies these separators to split the text. We recommend using ["\n\n", ". ", "\n", " "].

  • max_length::Int: The maximum length of each chunk. Defaults to 35,000 characters. This length is considered after each iteration of splitting, ensuring chunks fit within specified constraints.

Returns

Vector{String}: A vector of strings, where each string is a chunk of the original text that is smaller than or equal to max_length.

Usage Tips

  • I tend to prefer splitting on sentences (". ") before splitting on newline characters ("\n") to preserve the structure of the text.

  • What's the difference between separators=["\n"," ",""] and separators=["\n"," "]? The former will split down to the character level (""), so it will always achieve max_length, but it will also split words (bad for context!). I prefer to set a slightly smaller max_length and avoid splitting words instead.

How It Works

  • The function processes the text iteratively with each separator in the provided order. It then measures the length of each chunk and splits it further if it exceeds the max_length. If a chunk is "short enough", the subsequent separators are not applied to it.

  • Each chunk is as close to max_length as possible (unless we cannot split it any further, eg, if the splitters are "too big" / there are not enough of them)

  • If the text is empty, the function returns an empty array.

  • Separators are re-added to the text chunks after splitting, preserving the original structure of the text as closely as possible. Apply strip if you do not need them.

  • The function takes separators as the second positional argument to distinguish this method from its single-separator counterpart.

Examples

Splitting text using multiple separators:

julia
text = "Paragraph 1\n\nParagraph 2. Sentence 1. Sentence 2.\nParagraph 3"
separators = ["\n\n", ". ", "\n"] # split by paragraphs, sentences, and newlines (not by words)
chunks = recursive_splitter(text, separators, max_length=20)

Splitting text using multiple separators - with splitting on words:

julia
text = "Paragraph 1\n\nParagraph 2. Sentence 1. Sentence 2.\nParagraph 3"
separators = ["\n\n", ". ", "\n", " "] # split by paragraphs, sentences, and newlines, words
chunks = recursive_splitter(text, separators, max_length=10)

Using a single separator:

julia
text = "Hello,World," ^ 2900  # length 34900 characters
chunks = recursive_splitter(text, [","], max_length=10000)

To achieve the same behavior as Langchain's RecursiveCharacterTextSplitter, use separators=["\n\n", "\n", " ", ""].

julia
text = "Paragraph 1\n\nParagraph 2. Sentence 1. Sentence 2.\nParagraph 3"
separators = ["\n\n", "\n", " ", ""]
chunks = recursive_splitter(text, separators, max_length=10)

source


# PromptingTools.recursive_splitterMethod.
julia
recursive_splitter(text::String; separator::String=" ", max_length::Int=35000) -> Vector{String}

Split a given string text into chunks of a specified maximum length max_length. This is particularly useful for splitting larger documents or texts into smaller segments, suitable for models or systems with smaller context windows.

There is a method for dispatching on multiple separators, recursive_splitter(text::String, separators::Vector{String}; max_length::Int=35000) -> Vector{String} that mimics the logic of Langchain's RecursiveCharacterTextSplitter.

Arguments

  • text::String: The text to be split.

  • separator::String=" ": The separator used to split the text into minichunks. Defaults to a space character.

  • max_length::Int=35000: The maximum length of each chunk. Defaults to 35,000 characters, which should fit within a 16K context window.

Returns

Vector{String}: A vector of strings, each representing a chunk of the original text that is smaller than or equal to max_length.

Notes

  • The function ensures that each chunk is as close to max_length as possible without exceeding it.

  • If the text is empty, the function returns an empty array.

  • The separator is re-added to the text chunks after splitting, preserving the original structure of the text as closely as possible.

Examples

Splitting text with the default separator (" "):

julia
text = "Hello world. How are you?"
chunks = recursive_splitter(text; max_length=13)
length(chunks) # Output: 2

Using a custom separator and custom max_length

julia
text = "Hello,World," ^ 2900 # length 34900 chars
chunks = recursive_splitter(text; separator=",", max_length=10000) # for a 4K context window
length(chunks) # Output: 4

source


# PromptingTools.register_model!Function.
julia
register_model!(registry = MODEL_REGISTRY;
    name::String,
    schema::Union{AbstractPromptSchema, Nothing} = nothing,
    cost_of_token_prompt::Float64 = 0.0,
    cost_of_token_generation::Float64 = 0.0,
    description::String = "")

Register a new AI model with name and its associated schema.

Registering a model helps with calculating the costs and automatically selecting the right prompt schema.

Arguments

  • name: The name of the model. This is the name that will be used to refer to the model in the ai* functions.

  • schema: The schema of the model. This is the schema that will be used to generate prompts for the model, eg, OpenAISchema().

  • cost_of_token_prompt: The cost of a token in the prompt for this model. This is used to calculate the cost of a prompt. Note: It is often provided online as cost per 1000 tokens, so make sure to convert it correctly!

  • cost_of_token_generation: The cost of a token generated by this model. This is used to calculate the cost of a generation. Note: It is often provided online as cost per 1000 tokens, so make sure to convert it correctly!

  • description: A description of the model. This is used to provide more information about the model when it is queried.
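
For illustration, registering a hypothetical locally-served model (the name and description are made up):

julia
using PromptingTools
register_model!(;
    name = "my-local-llama",  # hypothetical model name
    schema = PromptingTools.OllamaSchema(),
    description = "Llama model served locally via Ollama.")
# optionally, add a short alias for convenience
PromptingTools.MODEL_ALIASES["myllama"] = "my-local-llama"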

source


# PromptingTools.remove_field!Method.
julia
remove_field!(parameters::AbstractDict, field::AbstractString)

Utility to remove a specific top-level field from the parameters (and the required list if present) of the JSON schema.
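
For illustration:

julia
params = Dict{String, Any}(
    "properties" => Dict{String, Any}(
        "location" => Dict{String, Any}("type" => "string"),
        "ctx_user_id" => Dict{String, Any}("type" => "string")),
    "required" => ["location", "ctx_user_id"])
PromptingTools.remove_field!(params, "ctx_user_id")
# "ctx_user_id" is removed from both "properties" and the "required" list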

source


# PromptingTools.remove_julia_promptMethod.
julia
remove_julia_prompt(s::T) where {T<:AbstractString}

If it detects a julia prompt, it removes it and all lines that do not have it (except for those that belong to the code block).
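
For illustration (output shown approximately):

julia
s = """
julia> a = 1
1

julia> a + 1
2
"""
PromptingTools.remove_julia_prompt(s)  # roughly "a = 1\na + 1"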

source


# PromptingTools.remove_templates!Method.
julia
remove_templates!()

Removes all templates from TEMPLATE_STORE and TEMPLATE_METADATA.

source


# PromptingTools.remove_unsafe_linesMethod.

Iterates over the lines of a string and removes those that contain a package operation or a missing import.

source


# PromptingTools.renderMethod.

Renders provided messaging template (template) under the default schema (PROMPT_SCHEMA).

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractAnthropicSchema,
    tool::ToolRef;
    kwargs...)

Renders the tool reference into the Anthropic format.

Available tools:

  • :computer: A tool for using the computer.

  • :str_replace_editor: A tool for replacing text in a string.

  • :bash: A tool for running bash commands.

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractAnthropicSchema,
    messages::Vector{<:AbstractMessage};
    aiprefill::Union{Nothing, AbstractString} = nothing,
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    cache::Union{Nothing, Symbol} = nothing,
    kwargs...)

Builds a history of the conversation to provide the prompt to the API. All unspecified kwargs are passed as replacements such that =>value in the template.

Keyword Arguments

  • aiprefill: A string to be used as a prefill for the AI response. This steers the AI response in a certain direction (and can potentially save output tokens).

  • conversation: Past conversation to be included in the beginning of the prompt (for continued conversations).

  • no_system_message: If true, do not include the default system message in the conversation history OR convert any provided system message to a user message.

  • cache: A symbol representing the caching strategy to be used. Currently only nothing (no caching), :system, :tools, :last, :all_but_last, and :all are supported.

    • :system: Mark only the system message as cacheable. Best default if you have large system message and you will be sending short conversations (no replies / multi-turn conversations).

    • :all: Mark SYSTEM, one before last and LAST user message as cacheable. Best for multi-turn conversations (you write cache point as "last" and it will be read in the next turn as "preceding" cache mark).

    • :last: Mark only the last message as cacheable. Use ONLY if you want to send the SAME REQUEST multiple times (and want to save up to the last USER message). This will not work for multi-turn conversations, as the "last" message keeps moving.

    • :all_but_last: Mark SYSTEM and one before LAST USER message. Use if you have a longer conversation that you want to re-use, but you will NOT CONTINUE it (no subsequent messages/follow-ups).

    • In short, use :all for multi-turn conversations, :system for repeated single-turn conversations with same system message, and :all_but_last for longer conversations that you want to re-use, but not continue.

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractAnthropicSchema,
    tools::Vector{<:AbstractTool};
    kwargs...)

Renders the tool signatures into the Anthropic format.

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractGoogleSchema,
    messages::Vector{<:AbstractMessage};
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    kwargs...)

Builds a history of the conversation to provide the prompt to the API. All unspecified kwargs are passed as replacements such that =>value in the template.

Keyword Arguments

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • no_system_message::Bool=false: If true, do not include the default system message in the conversation history OR convert any provided system message to a user message.

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractOllamaManagedSchema,
    messages::Vector{<:AbstractMessage};
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    kwargs...)

Builds a history of the conversation to provide the prompt to the API. All unspecified kwargs are passed as replacements such that =>value in the template.

Note: Due to its "managed" nature, at most 2 messages can be provided (system and prompt inputs in the API).

Keyword Arguments

  • conversation: Not allowed for this schema. Provided only for compatibility.

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractOllamaSchema,
    messages::Vector{<:AbstractMessage};
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    kwargs...)

Builds a history of the conversation to provide the prompt to the API. All unspecified kwargs are passed as replacements such that =>value in the template.

Keyword Arguments

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • no_system_message: If true, do not include the default system message in the conversation history OR convert any provided system message to a user message.

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractOpenAISchema,
    messages::Vector{<:AbstractMessage};
    image_detail::AbstractString = "auto",
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    name_user::Union{Nothing, String} = nothing,
    kwargs...)

Builds a history of the conversation to provide the prompt to the API. All unspecified kwargs are passed as replacements such that =>value in the template.

Keyword Arguments

  • image_detail: Only for UserMessageWithImages. It represents the level of detail to include for images. Can be "auto", "high", or "low".

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • no_system_message: If true, do not include the default system message in the conversation history OR convert any provided system message to a user message.

  • name_user: No-op for consistency.

source


# PromptingTools.renderMethod.
julia
render(schema::AbstractOpenAISchema,
    tools::Vector{<:AbstractTool};
    json_mode::Union{Nothing, Bool} = nothing,
    kwargs...)

Renders the tool signatures into the OpenAI format.

source


# PromptingTools.renderMethod.
julia
render(tracer_schema::AbstractTracerSchema,
    conv::AbstractVector{<:AbstractMessage}; kwargs...)

Passthrough. No changes.

source


# PromptingTools.renderMethod.
julia
render(schema::NoSchema,
    messages::Vector{<:AbstractMessage};
    conversation::AbstractVector{<:AbstractMessage} = AbstractMessage[],
    no_system_message::Bool = false,
    replacement_kwargs...)

Renders a conversation history from a vector of messages with all replacement variables specified in replacement_kwargs.

It is the first pass of the prompt rendering system, and is used by all other schemas.

Keyword Arguments

  • image_detail: Only for UserMessageWithImages. It represents the level of detail to include for images. Can be "auto", "high", or "low".

  • conversation: An optional vector of AbstractMessage objects representing the conversation history. If not provided, it is initialized as an empty vector.

  • no_system_message: If true, do not include the default system message in the conversation history OR convert any provided system message to a user message.

Notes

  • All unspecified kwargs are passed as replacements such that =>value in the template.

  • If a SystemMessage is missing, we inject a default one at the beginning of the conversation.

  • Only one SystemMessage is allowed (ie, you cannot mix two conversations with different system prompts).
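
For illustration of the replacement mechanism (assuming the double-curly-brace placeholder convention):

julia
msgs = [PromptingTools.UserMessage("Say hi to {{name}}!")]
rendered = PromptingTools.render(PromptingTools.NoSchema(), msgs; name = "John")
rendered[end].content  # "Say hi to John!"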

source


# PromptingTools.replace_wordsMethod.
julia
replace_words(text::AbstractString, words::Vector{<:AbstractString}; replacement::AbstractString="ABC")

Replace all occurrences of words in words with replacement in text. Useful to quickly remove specific names or entities from a text.

Arguments

  • text::AbstractString: The text to be processed.

  • words::Vector{<:AbstractString}: A vector of words to be replaced.

  • replacement::AbstractString="ABC": The replacement string to be used. Defaults to "ABC".

Example

julia
text = "Disney is a great company"
replace_words(text, ["Disney", "Snow White", "Mickey Mouse"])
# Output: "ABC is a great company"

source


# PromptingTools.resize_conversation!Method.
julia
resize_conversation!(conv_history, max_history::Union{Int, Nothing})

Resize the conversation history to a specified maximum length.

This function trims the conv_history to ensure that its size does not exceed max_history. It removes the oldest conversations first if the length of conv_history is greater than max_history.

Arguments

  • conv_history: A vector that stores the history of conversations. Typically, this is PT.CONV_HISTORY.

  • max_history: The maximum number of conversations to retain in the history. If Nothing, the history is not resized.

Returns

The resized conversation history.

Example

julia
resize_conversation!(PT.CONV_HISTORY, PT.MAX_HISTORY_LENGTH)

After the function call, conv_history will contain at most PT.MAX_HISTORY_LENGTH conversations (the oldest are dropped first).

This is done automatically by the ai"" macros.

source


# PromptingTools.response_to_messageMethod.
julia
response_to_message(schema::AbstractOpenAISchema,
    MSG::Type{AIMessage},
    choice,
    resp;
    model_id::AbstractString = "",
    time::Float64 = 0.0,
    run_id::Int = Int(rand(Int32)),
    sample_id::Union{Nothing, Integer} = nothing,
    name_assistant::Union{Nothing, String} = nothing)

Utility to facilitate unwrapping of an HTTP response into the provided message type MSG, for OpenAI-like responses.

Note: Extracts finish_reason and log_prob if available in the response.

Arguments

  • schema::AbstractOpenAISchema: The schema for the prompt.

  • MSG::Type{AIMessage}: The message type to be returned.

  • choice: The choice from the response (eg, one of the completions).

  • resp: The response from the OpenAI API.

  • model_id::AbstractString: The model ID to use for generating the response. Defaults to an empty string.

  • time::Float64: The elapsed time for the response. Defaults to 0.0.

  • run_id::Integer: The run ID for the response. Defaults to a random integer.

  • sample_id::Union{Nothing, Integer}: The sample ID for the response (if there are multiple completions). Defaults to nothing.

  • name_assistant::Union{Nothing, String}: The name to use for the assistant in the conversation history. Defaults to nothing.

source


# PromptingTools.response_to_messageMethod.

Utility to facilitate unwrapping of an HTTP response into the provided message type MSG. Designed to handle multi-sample completions.

source


# PromptingTools.save_conversationMethod.
julia
save_conversation(io_or_file::Union{IO, AbstractString},
    messages::AbstractVector{<:AbstractMessage})

Saves provided conversation (messages) to io_or_file. If you need to add some metadata, see save_template.
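
For illustration, a simple round-trip with load_conversation:

julia
msgs = [PromptingTools.SystemMessage("You are helpful."),
    PromptingTools.UserMessage("Hello!"),
    PromptingTools.AIMessage("Hi! How can I help you?")]
fn = joinpath(tempdir(), "conversation.json")
PromptingTools.save_conversation(fn, msgs)
loaded = PromptingTools.load_conversation(fn)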

source


# PromptingTools.save_conversationsMethod.
julia
save_conversations(schema::AbstractPromptSchema, filename::AbstractString,
    conversations::Vector{<:AbstractVector{<:PT.AbstractMessage}})

Saves provided conversations (vector of vectors of messages) to filename rendered in the particular schema.

Commonly used for finetuning models with schema = ShareGPTSchema()

The format is JSON Lines, where each line is a JSON object representing one provided conversation.

See also: save_conversation

Examples

You must always provide a VECTOR of conversations

julia
messages = AbstractMessage[SystemMessage("System message 1"),
    UserMessage("User message"),
    AIMessage("AI message")]
conversation = [messages] # vector of vectors

dir = tempdir()
fn = joinpath(dir, "conversations.jsonl")
save_conversations(ShareGPTSchema(), fn, conversation)

# Content of the file (one line for each conversation)
# {"conversations":[{"value":"System message 1","from":"system"},{"value":"User message","from":"human"},{"value":"AI message","from":"gpt"}]}

source


# PromptingTools.save_templateMethod.
julia
save_template(io_or_file::Union{IO, AbstractString},
    messages::AbstractVector{<:AbstractChatMessage};
    content::AbstractString = "Template Metadata",
    description::AbstractString = "",
    version::AbstractString = "1",
    source::AbstractString = "")

Saves provided messaging template (messages) to io_or_file. Automatically adds metadata based on provided keyword arguments.
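
For illustration (the file name and metadata below are made up):

julia
tpl = [PromptingTools.SystemMessage("You answer in the style of {{style}}."),
    PromptingTools.UserMessage("{{question}}")]
PromptingTools.save_template(joinpath(tempdir(), "MyTemplate.json"), tpl;
    description = "Q&A template with a configurable answer style")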

source


# PromptingTools.set_preferences!Method.
julia
set_preferences!(pairs::Pair{String, <:Any}...)

Set preferences for PromptingTools. See ?PREFERENCES for more information.

See also: get_preferences

Example

Change your API key and default model:

julia
PromptingTools.set_preferences!("OPENAI_API_KEY" => "key1", "MODEL_CHAT" => "chat1")

source


# PromptingTools.set_properties_strict!Method.
julia
set_properties_strict!(properties::AbstractDict)

Sets strict mode for the properties of a JSON schema.

Changes:

  • Sets additionalProperties to false.

  • All keys must be included in required.

  • All optional keys will have null added to their type.

Reference: https://platform.openai.com/docs/guides/structured-outputs/supported-schemas
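
A hedged sketch (assuming the argument is the parameters dictionary holding "properties" and "required"):

julia
params = Dict{String, Any}(
    "type" => "object",
    "properties" => Dict{String, Any}(
        "a" => Dict{String, Any}("type" => "string"),
        "b" => Dict{String, Any}("type" => "integer")),  # "b" is optional (not in "required")
    "required" => ["a"])
PromptingTools.set_properties_strict!(params)
# expected: additionalProperties == false, both "a" and "b" listed in "required",
# and the optional "b" now also allows type "null"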

source


# PromptingTools.tool_call_signatureMethod.
julia
tool_call_signature(
    type_or_method::Union{Type, Method}; strict::Union{Nothing, Bool} = nothing,
    max_description_length::Int = 200, name::Union{Nothing, String} = nothing,
    docs::Union{Nothing, String} = nothing, hidden_fields::AbstractVector{<:Union{
        AbstractString, Regex}} = String[])

Extract the argument names, types and docstrings from a struct to create the function call signature in JSON schema.

You must provide a Struct type (not an instance of it) with some fields. The types must be CONCRETE; it helps with correct conversion to JSON schema and then conversion back to the struct.

Note: Fairly experimental, but works for combination of structs, arrays, strings and singletons.

Arguments

  • type_or_method::Union{Type, Method}: The struct type or method to extract the signature from.

  • strict::Union{Nothing, Bool}: Whether to enforce strict mode for the schema. Defaults to nothing.

  • max_description_length::Int: Maximum length for descriptions. Defaults to 200.

  • name::Union{Nothing, String}: The name of the tool. Defaults to the name of the struct.

  • docs::Union{Nothing, String}: The description of the tool. Defaults to the docstring of the struct/overall function.

  • hidden_fields::AbstractVector{<:Union{AbstractString, Regex}}: A list of fields to hide from the LLM (eg, ["ctx_user_id"] or r"ctx").

Returns

  • Dict{String, AbstractTool}: A dictionary representing the function call signature schema.

Tips

  • You can improve the quality of the extraction by writing a helpful docstring for your struct (or any nested struct). It will be provided as a description.

You can even include comments/descriptions about the individual fields.

  • All fields are assumed to be required, unless you allow null values (eg, ::Union{Nothing, Int}). Fields with Nothing will be treated as optional.

  • Missing values are ignored (eg, ::Union{Missing, Int} will be treated as Int). It's for broader compatibility and we cannot deserialize it as easily as Nothing.

Example

Do you want to extract some specific measurements from a text like age, weight and height? You need to define the information you need as a struct (return_type):

struct MyMeasurement
    age::Int
    height::Union{Int,Nothing}
    weight::Union{Nothing,Float64}
end
tool_map = tool_call_signature(MyMeasurement)
#
# Dict{String, PromptingTools.AbstractTool}("MyMeasurement" => PromptingTools.Tool
#   name: String "MyMeasurement"
#   parameters: Dict{String, Any}
#   description: Nothing nothing
#   strict: Nothing nothing
#   callable: MyMeasurement <: Any
"

You can see that only the field age does not allow null values, hence, it's "required", while height and weight are optional.

tool_map["MyMeasurement"].parameters["required"]
# ["age"]

If there are multiple items you want to extract, define a wrapper struct to get a Vector of MyMeasurement:

struct MyMeasurementWrapper
    measurements::Vector{MyMeasurement}
end

Or if you want your extraction to fail gracefully when data isn't found, use the MaybeExtract{T} wrapper (inspired by the Instructor package!):

using PromptingTools: MaybeExtract

type = MaybeExtract

Effectively the same as:

struct MaybeExtract{T}
    result::Union{T, Nothing}
    error::Bool // true if no result is found, false otherwise
    message::Union{Nothing, String} // Only present if no result is found, should be short and concise
end

If LLM extraction fails, it will return a Dict with error and message fields instead of the result!

msg = aiextract("Extract measurements from the text: I am giraffe"; return_type = type)


# Output:
# Dict{Symbol, Any} with 2 entries:
#   :message => "Sorry, this feature is only available for humans."
#   :error => true

That way, you can handle the error gracefully and get a reason why extraction failed.

You can also hide certain fields in your function call signature with Strings or Regex patterns (eg, r"ctx").

tool_map = tool_call_signature(MyMeasurement; hidden_fields = ["ctx_user_id"])

source


# PromptingTools.tool_call_signatureMethod.
julia
tool_call_signature(fields::Vector;
    strict::Union{Nothing, Bool} = nothing, max_description_length::Int = 200, name::Union{
        Nothing, String} = nothing,
    docs::Union{Nothing, String} = nothing)

Generate a function call signature schema for a dynamically generated struct based on the provided fields.

Arguments

  • fields::Vector{Union{Symbol, Pair{Symbol, Type}, Pair{Symbol, String}}}: A vector of field names or pairs of field name and type or string description, eg, [:field1, :field2, :field3] or [:field1 => String, :field2 => Int, :field3 => Float64] or [:field1 => String, :field1__description => "Field 1 has the name"].

  • strict::Union{Nothing, Bool}: Whether to enforce strict mode for the schema. Defaults to nothing.

  • max_description_length::Int: Maximum length for descriptions. Defaults to 200.

  • name::Union{Nothing, String}: The name of the tool. Defaults to the name of the struct.

  • docs::Union{Nothing, String}: The description of the tool. Defaults to the docstring of the struct/overall function.

Returns a tool_map with the tool name as the key and the tool object as the value.

See also generate_struct, aiextract, update_field_descriptions!.

Examples

julia
tool_map = tool_call_signature([:field1, :field2, :field3])

With the field types:

julia
tool_map = tool_call_signature([:field1 => String, :field2 => Int, :field3 => Float64])

And with the field descriptions:

julia
tool_map = tool_call_signature([:field1 => String, :field1__description => "Field 1 has the name"])

source


# PromptingTools.tool_callsMethod.

Get the vector of tool call requests from an AIToolRequest/message.

source


# PromptingTools.unique_permutationMethod.
julia
unique_permutation(inputs::AbstractVector)

Returns indices of unique items in a vector inputs. Access the unique values as inputs[unique_permutation(inputs)].
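
For illustration (assuming indices of first occurrences are returned):

julia
inputs = ["b", "a", "b", "c"]
idxs = PromptingTools.unique_permutation(inputs)  # [1, 2, 4]
inputs[idxs]  # ["b", "a", "c"]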

source


# PromptingTools.unwrapMethod.

Unwraps the tracer message or tracer-like object, returning the original object.

source


# PromptingTools.update_field_descriptions!Method.
julia
update_field_descriptions!(
    parameters::Dict{String, <:Any}, descriptions::Dict{Symbol, <:AbstractString};
    max_description_length::Int = 200)

Update the given JSON schema with descriptions from the descriptions dictionary. This function modifies the schema in-place, adding a "description" field to each property that has a corresponding entry in the descriptions dictionary.

Note: It modifies the schema in place. Only the top-level "properties" are updated!

Returns: The modified schema dictionary.

Arguments

  • parameters: A dictionary representing the JSON schema to be updated.

  • descriptions: A dictionary mapping field names (as symbols) to their descriptions.

  • max_description_length::Int: Maximum length for descriptions. Defaults to 200.

Examples

julia
    parameters = Dict{String, Any}(
        "properties" => Dict{String, Any}(
            "location" => Dict{String, Any}("type" => "string"),
            "condition" => Dict{String, Any}("type" => "string"),
            "temperature" => Dict{String, Any}("type" => "number")
        ),
        "required" => ["location", "temperature", "condition"],
        "type" => "object"
    )
    descriptions = Dict{Symbol, String}(
        :temperature => "Temperature in degrees Fahrenheit",
        :condition => "Current weather condition (e.g., sunny, rainy, cloudy)"
    )
    update_field_descriptions!(parameters, descriptions)

source


# PromptingTools.wrap_stringFunction.
julia
wrap_string(str::String,
    text_width::Int = 20;
    newline::Union{AbstractString, AbstractChar} = '\n')

Breaks a string into lines of a given text_width. Optionally, you can specify the newline character or string to use.

Example:

julia
wrap_string("Certainly, here's a function in Julia that will wrap a string according to the specifications:", 10) |> print

source


# PromptingTools.@aai_strMacro.
julia
aai"user_prompt"[model_alias] -> AIMessage

Asynchronous version of @ai_str macro, which will log the result once it's ready.

See also aai!"" if you want an asynchronous reply to the provided message / continue the conversation.

Example

Send an asynchronous request to GPT-4 so we don't have to wait for the response. Very practical with slow models, as you can keep working in the meantime.

julia
aai"Say hi!"gpt4

# ...with some delay...
# [ Info: Tokens: 29 @ Cost: $0.0011 in 2.7 seconds
# [ Info: AIMessage> Hello! How can I assist you today?

source


# PromptingTools.@ai!_strMacro.
julia
ai!"user_prompt"[model_alias] -> AIMessage

The ai!"" string macro is used to continue a previous conversation with the AI model.

It appends the new user prompt to the last conversation in the tracked history (in PromptingTools.CONV_HISTORY) and generates a response based on the entire conversation context. If you want to see the previous conversation, you can access it via PromptingTools.CONV_HISTORY, which keeps at most last PromptingTools.MAX_HISTORY_LENGTH conversations.

Arguments

  • user_prompt (String): The new input prompt to be added to the existing conversation.

  • model_alias (optional, any): Specify the model alias of the AI model to be used (see MODEL_ALIASES). If not provided, the default model is used.

Returns

AIMessage corresponding to the new user prompt, considering the entire conversation history.

Example

To continue a conversation:

julia
# start conversation as normal
ai"Say hi." 

# ... wait for reply and then react to it:

# continue the conversation (notice that you can change the model, eg, to more powerful one for better answer)
ai!"What do you think about that?"gpt4t
# AIMessage("Considering our previous discussion, I think that...")

Usage Notes

  • This macro should be used when you want to maintain the context of an ongoing conversation (ie, the last ai"" message).

  • It automatically accesses and updates the global conversation history.

  • If no conversation history is found, it raises an assertion error, suggesting to initiate a new conversation using ai"" instead.

Important

Ensure that the conversation history is not too long to maintain relevancy and coherence in the AI's responses. The history length is managed by MAX_HISTORY_LENGTH.

source


# PromptingTools.@ai_strMacro.
julia
ai"user_prompt"[model_alias] -> AIMessage

The ai"" string macro generates an AI response to a given prompt by using aigenerate under the hood.

See also ai!"" if you want to reply to the provided message / continue the conversation.

Arguments

  • user_prompt (String): The input prompt for the AI model.

  • model_alias (optional, any): Provide model alias of the AI model (see MODEL_ALIASES).

Returns

AIMessage corresponding to the input prompt.

Example

julia
result = ai"Hello, how are you?"
# AIMessage("Hello! I'm an AI assistant, so I don't have feelings, but I'm here to help you. How can I assist you today?")

If you want to interpolate some variables or additional context, simply use string interpolation:

julia
a=1
result = ai"What is `$a+$a`?"
# AIMessage("The sum of `1+1` is `2`.")

If you want to use a different model, eg, GPT-4, you can provide its alias as a flag:

julia
result = ai"What is `1.23 * 100 + 1`?"gpt4t
# AIMessage("The answer is 124.")

source


# PromptingTools.@timeoutMacro.
julia
@timeout(seconds, expr_to_run, expr_when_fails)

Simple macro to run an expression with a timeout of seconds. If the expr_to_run fails to finish in seconds seconds, expr_when_fails is returned.

Example

julia
x = @timeout 1 begin
    sleep(1.1)
    println("done")
    1
end "failed"

source