# API Reference
## Core Functions
### `load_prompt(path, *, meta=None)`
Load a single prompt file.
**Parameters:**

- `path` (`str | Path`): Path to the prompt file
- `meta` (`MetadataMode | str | None`): Metadata handling mode: `"strict"`, `"allow"`, `"ignore"`, or `None` (uses the global config, which defaults to `ALLOW`)
You can also set the environment variable `TEXTPROMPTS_METADATA_MODE` before importing the package to override the default `ALLOW` mode.

**Returns:** a `Prompt` object

**Raises:** `TextPromptsError` subclasses on any failure
**Example:**

```python
from textprompts import load_prompt

prompt = load_prompt("prompts/greeting.txt")
print(prompt.meta.title)
print(prompt.prompt)
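Per the environment-variable note above, the default metadata mode can be overridden before import. A minimal sketch (the `"ignore"` value here is just an illustration):

```python
import os

# Must be set before the package is imported;
# textprompts reads it at import time.
os.environ["TEXTPROMPTS_METADATA_MODE"] = "ignore"

# import textprompts  # now uses "ignore" as the default metadata mode
```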
### `save_prompt(path, content, *, format="toml")`
Save a prompt to a file.
**Parameters:**

- `path` (`str | Path`): File path to save the prompt to
- `content` (`str | Prompt`): Either a string (creates a template with required fields) or a `Prompt` object
- `format` (`str`): Front-matter format to use: `"toml"` (default) or `"yaml"`
**Example:**

```python
from pathlib import Path
from textprompts import Prompt, PromptMeta, save_prompt

# Save a simple prompt with a metadata template
save_prompt("my_prompt.txt", "You are a helpful assistant.")

# Save with YAML front-matter
save_prompt("my_prompt.txt", "You are a helpful assistant.", format="yaml")

# Save a Prompt object with full metadata
meta = PromptMeta(title="Assistant", version="1.0.0", description="A helpful AI")
prompt = Prompt(path=Path("my_prompt.txt"), meta=meta, prompt="You are a helpful assistant.")
save_prompt("my_prompt.txt", prompt)
```
## Data Classes
### `Prompt`
Represents a loaded prompt with metadata and content.
**Fields:**

- `path` (`Path`): Path to the source file
- `meta` (`PromptMeta`): Parsed metadata (always present; uses the filename as the title if there is no front-matter)
- `prompt` (`PromptString`): The prompt content as a `PromptString`
**Example:**

```python
prompt = load_prompt("example.txt")
# or
prompt = Prompt.from_path("example.txt")

print(prompt.path)        # PosixPath('example.txt')
print(prompt.meta.title)  # "Example Prompt"
print(prompt.prompt)      # PromptString("Hello {name}!")
```
### `PromptMeta`
Metadata extracted from the TOML or YAML front-matter.

**Fields:**

- `title` (`str | None`): Human-readable name (required in front-matter, or filename if no front-matter)
- `version` (`str | None`): Version string (required in front-matter if present, can be empty)
- `author` (`str | None`): Author name (optional)
- `created` (`date | None`): Creation date (optional)
- `description` (`str | None`): Description (required in front-matter if present, can be empty)
**Example:**

```python
meta = prompt.meta
print(meta.title)    # "Customer Support"
print(meta.version)  # "1.0.0"
print(meta.author)   # "Support Team"
```
### `PromptString`
A string subclass that validates format() calls to ensure all placeholders are provided.
**Attributes:**

- `placeholders` (`set[str]`): Set of placeholder names found in the string

**Methods:**

- `format(*args, skip_validation=False, **kwargs)`: Format the string
  - By default, raises `ValueError` if any placeholder is missing
  - With `skip_validation=True`, performs partial formatting
- All standard string methods are available
**Example:**

```python
from textprompts import PromptString

template = PromptString("Hello {name}, you are {age} years old")
print(template.placeholders)  # {'name', 'age'}

# ✅ Strict formatting (default): all placeholders required
result = template.format(name="Alice", age=30)

# ❌ This raises ValueError
result = template.format(name="Alice")  # Missing 'age'

# ✅ Partial formatting with skip_validation
partial = template.format(name="Alice", skip_validation=True)
print(partial)  # "Hello Alice, you are {age} years old"
```
## Section Parsing APIs
Parse prompt structure directly from text without loading a Prompt.
**Functions:**

- `parse_sections(text: str | bytes) -> ParseResult`
- `generate_slug(heading: str) -> str`
- `inject_anchors(text: str | bytes) -> tuple[str, ParseResult]`
- `render_toc(result: ParseResult, path: str) -> str`
**`ParseResult` fields:**

- `sections`: Ordered section tree with `kind`, `tag_name`, `heading`, `anchor_id`, `level`, `start_line`, `end_line`, `char_count`, `parent_idx`, `children`, and `links`
- `anchors`: Map from canonical anchor id to first section index
- `duplicate_anchors`: Explicit duplicate anchors preserved by the parser
- `frontmatter`: Detected YAML/TOML front-matter block, if present
- `total_chars`: UTF-8 byte count of the body after front-matter
**Anchor behavior:**

- Anchor ids normalize to lowercase underscore form, collapsing non-alphanumeric runs to `_`; `generate_slug("My Section")` returns `my_section`
- Generic XML sections use the normalized tag name as the anchor id when no explicit `id` is present
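The normalization rule above can be sketched in a few lines of plain Python. This is an approximation of the documented behavior, not the library's `generate_slug` implementation:

```python
import re

def slugify(heading: str) -> str:
    # Lowercase, collapse runs of non-alphanumeric characters to "_",
    # then trim leading/trailing underscores.
    return re.sub(r"[^a-z0-9]+", "_", heading.lower()).strip("_")

print(slugify("My Section"))    # my_section
print(slugify("FAQ: How-To?"))  # faq_how_to
```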
**Example:**

```python
from textprompts import parse_sections, render_toc

result = parse_sections("## Intro\n\nSee [Docs](#docs).")
print(result.sections[0].links[0].fragment)  # "docs"
print(render_toc(result, "prompt.txt"))
```
## Exception Classes
### `TextPromptsError`
Base exception class for all TextPrompts errors.
### `FileMissingError`
Raised when a requested file doesn’t exist.
**Example:**

```python
from textprompts import FileMissingError, load_prompt

try:
    prompt = load_prompt("nonexistent.txt")
except FileMissingError as e:
    print(f"File not found: {e}")
```
### `MissingMetadataError`
Raised when metadata is required but not found.
**Example:**

```python
from textprompts import MissingMetadataError, load_prompt

try:
    prompt = load_prompt("no_metadata.txt", meta="strict")  # requires metadata
except MissingMetadataError as e:
    print(f"Missing metadata: {e}")
```
### `InvalidMetadataError`
Raised when metadata is malformed or contains invalid values.
**Example:**

```python
from textprompts import InvalidMetadataError, load_prompt

try:
    prompt = load_prompt("bad_toml.txt")
except InvalidMetadataError as e:
    print(f"Invalid metadata: {e}")
```
### `MalformedHeaderError`
Raised when the frontmatter delimiters are malformed.
**Example:**

```python
from textprompts import MalformedHeaderError, load_prompt

try:
    prompt = load_prompt("malformed.txt")
except MalformedHeaderError as e:
    print(f"Malformed header: {e}")
```
## CLI Interface
### `textprompts` command
Load and display prompts from the command line.
**Usage:**

```
textprompts [OPTIONS] PATHS...
```
**Options:**

- `--json`: Output JSON metadata instead of the prompt body
**Examples:**

```bash
# Display a single prompt
textprompts prompts/greeting.txt

# Display prompt metadata as JSON
textprompts --json prompts/greeting.txt
```
## Type Hints
TextPrompts is fully typed and includes a py.typed marker file. All public APIs include comprehensive type hints for optimal IDE support.
```python
from pathlib import Path

from textprompts import Prompt, PromptMeta, load_prompt

# These are properly typed
prompt: Prompt = load_prompt("file.txt")
meta: PromptMeta | None = prompt.meta
path: Path = prompt.path
```
## Performance Considerations
### Caching
For applications that load the same prompts repeatedly, consider caching:
```python
from functools import lru_cache

from textprompts import Prompt, load_prompt

@lru_cache(maxsize=None)
def get_cached_prompt(path: str) -> Prompt:
    return load_prompt(path)
```
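Note that a plain `lru_cache` never invalidates if prompt files change while the application runs. One way around this is to fold the file's modification time into the cache key. This sketch uses `Path.read_text` as a stand-in loader; swap in `load_prompt` as needed:

```python
from functools import lru_cache
from pathlib import Path

@lru_cache(maxsize=None)
def _load_versioned(path: str, mtime_ns: int) -> str:
    # mtime_ns participates in the cache key, so editing the file
    # produces a fresh cache entry on the next lookup.
    return Path(path).read_text()  # stand-in for load_prompt(path)

def get_prompt(path: str) -> str:
    return _load_versioned(path, Path(path).stat().st_mtime_ns)
```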
### Memory Usage
Each `Prompt` object stores the full file content in memory. For very large prompt files, consider streaming approaches or lazy loading patterns.
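A lazy wrapper is one way to defer the file read until first use. This is an illustrative sketch; `loader` stands in for `textprompts.load_prompt`:

```python
class LazyPrompt:
    """Defer loading a prompt file until its content is first accessed."""

    def __init__(self, path, loader):
        self._path = path
        self._loader = loader  # e.g. textprompts.load_prompt
        self._cached = None

    @property
    def prompt(self):
        if self._cached is None:
            # Loaded on first access only, then reused
            self._cached = self._loader(self._path)
        return self._cached
```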