
Agent definitions live in agents/<name>.yaml. Each file defines an agent’s identity, model configuration, system prompt, and skill access.

Schema reference

# agents/<name>.yaml

# Required — unique identifier for this agent
name: string

# Required — "coordinator" or "specialist"
role: coordinator | specialist

# Required — human-readable description of the agent's purpose
description: string

# Required — LLM model configuration
model:
  provider: anthropic | openai | ollama   # LLM provider
  model: string                            # model identifier (e.g. "claude-sonnet-4-6")

# Required — the system prompt injected on every task
# Supports ${variable} interpolation for runtime values
system_prompt: |
  Multi-line system prompt text...

# Required — skills this agent can always invoke
# Skills not in this list are only available if allow_discovery is true
pinned_skills:
  - skill-name-1
  - skill-name-2

# Optional — whether the agent can discover and invoke skills not in pinned_skills
# Default: false
allow_discovery: boolean

# Optional — inject the list of available specialist agents into the system prompt
# Used by the coordinator to know which specialists it can delegate to
# Default: false
inject_specialists: boolean

# Optional — custom handler for agent-specific logic
# Path relative to the agent YAML file
handler: ./custom-handler.ts

# Optional — memory scope configuration
# Scopes isolate this agent's working memory from other agents
memory:
  scopes:
    - scope-name
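Putting the schema together, a minimal specialist definition might look like the sketch below. The agent name matches the research-analyst specialist mentioned later on this page; the skill names and prompt text are illustrative, not part of Curia.

```yaml
# agents/research-analyst.yaml — illustrative example
name: research-analyst
role: specialist
description: Performs background research and summarizes findings.

model:
  provider: anthropic
  model: claude-sonnet-4-6

system_prompt: |
  You are the research analyst. Investigate the topic you are given
  and return a concise, sourced summary.

# Skill names are hypothetical — they must exist in the skill registry.
pinned_skills:
  - web-search
  - document-summarizer

# allow_discovery and inject_specialists are omitted, so both default to false.
memory:
  scopes:
    - research-analyst
```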

Field details

name

Unique identifier used for routing, delegation, and logging. Must be unique across all agent definitions. Convention: lowercase kebab-case.

role

| Value | Meaning |
| --- | --- |
| coordinator | Receives all inbound messages. Only one coordinator should exist. |
| specialist | Receives tasks only via delegation from the coordinator or other agents. |

model

The LLM provider and model used for this agent’s reasoning. Currently supported providers:
| Provider | Example models |
| --- | --- |
| anthropic | claude-sonnet-4-6, claude-haiku-4-5-20251001 |
| openai | gpt-4o, gpt-4o-mini |
| ollama | Any locally-hosted model |
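Because the provider and model are the only fields under model, swapping an agent onto a locally-hosted model is a two-line change. The model tag below is an example of an Ollama identifier, not a requirement:

```yaml
model:
  provider: ollama
  model: llama3.1:8b   # any model served by the local Ollama instance
```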

system_prompt

The prompt injected at the start of every task. Supports runtime variable interpolation using ${variable} syntax:
| Variable | Value |
| --- | --- |
| ${office_identity_block} | Agent persona (name, title, signature) from office identity config |
| ${executive_voice_block} | CEO's writing voice profile |
| ${agent_contact_id} | The agent's own contact UUID |
| ${available_specialists} | List of specialist agents available for delegation |
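A coordinator-style prompt combining several of these variables might look like the following sketch (the prompt wording is illustrative; ${available_specialists} assumes inject_specialists is enabled):

```yaml
system_prompt: |
  ${office_identity_block}

  You coordinate all inbound work. Your contact ID is ${agent_contact_id}.
  Delegate tasks outside your remit to one of these specialists:
  ${available_specialists}
```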

pinned_skills

Array of skill names this agent can always invoke. Skills must be registered in the skill registry. If a skill in this list doesn’t exist at startup, a warning is logged but the agent still starts.

allow_discovery

When true, the agent can use the skill-registry skill to search for and invoke skills not in its pinned_skills list. The coordinator typically has this enabled; specialists typically do not (they operate with a fixed, auditable skill set).
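That typical split might be configured like this (the agent filenames are illustrative):

```yaml
# agents/coordinator.yaml
allow_discovery: true     # may search the skill registry at runtime

# agents/email-triage.yaml
allow_discovery: false    # fixed, auditable skill set (this is the default)
```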

memory.scopes

Memory scopes isolate an agent’s working memory. Each scope is an independent namespace — facts stored in one scope are not visible to agents using a different scope. The coordinator uses no explicit scope (it shares the default scope). Specialists like email-triage and research-analyst use their own scopes to prevent cross-contamination.
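Giving a specialist its own scope is a small addition to its definition; the scope name below follows the convention of matching the agent name, which is this page's example, not a requirement:

```yaml
# agents/email-triage.yaml
memory:
  scopes:
    - email-triage   # facts stored here are invisible to other scopes
```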

Built-in agents

The agents Curia ships with and how they work together.

Building custom agents

Step-by-step guide to creating a new agent.