Agent Module¶
The cogent.agent module defines the core agent abstraction - autonomous entities that can think, act, and communicate.
Overview¶
Agents are the primary actors in the system. Each agent has:
- A unique identity and role
- Configuration defining its capabilities
- Runtime state tracking its activity
- Access to tools and the event bus
from cogent import Agent
# Simple string model
agent = Agent(
name="Researcher",
model="gpt4", # Auto-resolves to gpt-5.4
tools=[search_tool],
instructions="You are a research assistant.",
)
# With provider prefix
agent = Agent(
name="Researcher",
model="anthropic:claude", # Explicit provider
tools=[search_tool],
)
# Medium-level: Factory function
from cogent.models import create_chat
agent = Agent(
name="Researcher",
model=create_chat("gpt4"),
tools=[search_tool],
)
# Low-level: Full control
from cogent.models import OpenAIChat
model = OpenAIChat(model="gpt-5.4", temperature=0.7)
agent = Agent(
name="Researcher",
model=model,
tools=[search_tool],
)
result = await agent.run("Find information about quantum computing")
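The string-model shorthand above implies a resolution step: bare aliases map to concrete model names, and a `provider:` prefix selects the backend explicitly. A minimal plain-Python sketch of that rule (the alias table and the default provider here are illustrative assumptions, not cogent's actual internals):

```python
# Hypothetical alias table -- illustrates the resolution rule only.
ALIASES = {
    "gpt4": ("openai", "gpt-5.4"),
    "claude": ("anthropic", "claude-sonnet-4"),
    "gemini-pro": ("google", "gemini-2.5-pro"),
}

def resolve_model(spec: str) -> tuple[str, str]:
    """Resolve 'alias' or 'provider:model' into (provider, model_name)."""
    if ":" in spec:
        provider, model = spec.split(":", 1)  # explicit provider wins
        return provider, ALIASES.get(model, (provider, model))[1]
    if spec in ALIASES:
        return ALIASES[spec]
    # Unknown bare name: assume a default provider (assumption for the sketch)
    return "openai", spec
```

Under this scheme, `"gpt4"` resolves via the alias table while `"anthropic:claude-opus-4"` passes through untouched because the provider is explicit.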
Core Classes¶
Agent¶
The main agent class with multiple construction patterns:
from cogent import Agent
# Simplified API (recommended)
agent = Agent(
name="Writer",
model="gpt4", # String model - auto-resolves to gpt-5.4
role="worker", # String: "worker", "supervisor", "autonomous", "reviewer"
tools=[write_tool],
instructions="You write compelling content.",
)
# With provider prefix for other providers
agent = Agent(
name="Writer",
model="anthropic:claude-sonnet-4",
role="worker",
)
# Advanced API with AgentConfig
from cogent.agent import AgentConfig, ResilienceConfig
from cogent.core.enums import AgentRole
from cogent.models import create_chat
config = AgentConfig(
name="Writer",
role=AgentRole.WORKER,
model=create_chat("gpt4"),
tools=["write_poem", "write_story"],
resilience_config=ResilienceConfig.aggressive(),
)
agent = Agent(config=config)
RoleConfig Objects (Recommended)¶
Use role configuration objects for type-safe, immutable role definitions:
from cogent import (
SupervisorRole,
WorkerRole,
ReviewerRole,
AutonomousRole,
CustomRole,
)
# Supervisor - coordinates workers
supervisor = Agent(
name="Manager",
model="gpt4", # String model
role=SupervisorRole(workers=["Analyst", "Writer"]),
)
# Worker - executes tasks with tools
worker = Agent(
name="Analyst",
model="claude", # Alias for claude-sonnet-4
role=WorkerRole(specialty="data analysis and visualization"),
tools=[search, analyze],
)
# Reviewer - evaluates and approves work
reviewer = Agent(
name="QA",
model="gemini-pro", # Alias for gemini-2.5-pro
role=ReviewerRole(criteria=["accuracy", "clarity", "completeness"]),
)
# Autonomous - independent agent with full capabilities
autonomous = Agent(
name="Assistant",
model="anthropic:claude-opus-4", # Provider prefix
role=AutonomousRole(),
tools=[search, write],
)
# Custom - hybrid role with explicit capability overrides
custom = Agent(
name="TechnicalReviewer",
model=model,
role=CustomRole(
base_role=AgentRole.REVIEWER,
can_use_tools=True, # Reviewer that can use tools!
),
tools=[code_analyzer, linter],
)
Benefits of RoleConfig objects:
- Type-safe configuration
- Immutable (frozen dataclasses)
- Built-in prompt enhancement
- Clear, explicit role definitions
- IDE autocomplete and type checking
Role-Specific Parameters (Backward Compatible)¶
You can also use string/enum roles with parameters:
# Supervisor - with team members
supervisor = Agent(
name="Manager",
model=model,
role=AgentRole.SUPERVISOR, # or "supervisor"
workers=["analyst", "writer"], # Adds team members to prompt
)
# Worker - with specialty description
worker = Agent(
name="Analyst",
model=model,
role="worker",
specialty="data analysis and visualization", # Adds specialty to prompt
tools=[search, analyze],
)
# Reviewer - with evaluation criteria
reviewer = Agent(
name="QA",
model=model,
role="reviewer",
criteria=["accuracy", "clarity", "completeness"], # Adds criteria to prompt
)
# Autonomous - works independently, can finish
autonomous = Agent(
name="Assistant",
model=model,
role="autonomous",
tools=[search, write],
)
Note: String and enum roles remain supported for backward compatibility, but RoleConfig objects are recommended for new code.
Custom Roles (Capability Overrides)¶
Recommended: Use CustomRole for hybrid capabilities:
from cogent import CustomRole
from cogent.core import AgentRole
# Reviewer that can use tools
hybrid_reviewer = Agent(
name="TechnicalReviewer",
model=model,
role=CustomRole(
base_role=AgentRole.REVIEWER,
can_use_tools=True, # Override!
),
tools=[code_analyzer, linter],
)
# Worker that can finish and delegate
orchestrator = Agent(
name="Orchestrator",
model=model,
role=CustomRole(
base_role=AgentRole.WORKER,
can_finish=True, # Override!
can_delegate=True, # Override!
),
tools=[deployment_tool],
)
Backward compatible: You can also use capability overrides with string/enum roles:
# Hybrid: Reviewer that can use tools
hybrid_reviewer = Agent(
name="TechnicalReviewer",
model=model,
role="reviewer",
can_use_tools=True, # Override! Reviewer normally can't use tools
tools=[code_analyzer, linter],
)
# Custom orchestrator: Worker that can finish and delegate
orchestrator = Agent(
name="Orchestrator",
model=model,
role="worker",
can_finish=True, # Override! Worker normally can't finish
can_delegate=True, # Override! Worker normally can't delegate
tools=[deployment_tool],
)
Role System¶
Roles define capabilities (what an agent CAN do) and inject system prompts that guide LLM behavior. They don't define personalities - that comes from your instructions.
Role Capabilities¶
┌─────────────┬────────────┬──────────────┬───────────────┐
│ Role │ can_finish │ can_delegate │ can_use_tools │
├─────────────┼────────────┼──────────────┼───────────────┤
│ WORKER │ ❌ │ ❌ │ ✅ │
│ SUPERVISOR │ ✅ │ ✅ │ ❌ │
│ AUTONOMOUS │ ✅ │ ❌ │ ✅ │
│ REVIEWER │ ✅ │ ❌ │ ❌ │
└─────────────┴────────────┴──────────────┴───────────────┘
When to Use:
- WORKER: Executes tasks with tools, reports back
- SUPERVISOR: Coordinates workers, makes final decisions
- AUTONOMOUS: Independent operation, full lifecycle
- REVIEWER: Evaluates work, approves/rejects
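The capability matrix above can be expressed directly as data. A plain-Python sketch that mirrors the table (the class and dict names are illustrative, not cogent's internals):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capabilities:
    can_finish: bool
    can_delegate: bool
    can_use_tools: bool

# Mirrors the role capability matrix row by row.
ROLE_CAPABILITIES = {
    "worker":     Capabilities(can_finish=False, can_delegate=False, can_use_tools=True),
    "supervisor": Capabilities(can_finish=True,  can_delegate=True,  can_use_tools=False),
    "autonomous": Capabilities(can_finish=True,  can_delegate=False, can_use_tools=True),
    "reviewer":   Capabilities(can_finish=True,  can_delegate=False, can_use_tools=False),
}
```

A frozen dataclass matches the immutability that RoleConfig objects advertise: a role's defaults can be read but not mutated in place.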
How Roles Work¶
Roles affect agent behavior in two ways:
1. Capability Controls - What the agent is allowed to do:
# WORKER can use tools but cannot finish
worker = Agent(name="Analyst", model=model, role="worker", tools=[analyze_tool])
assert worker.can_use_tools == True
assert worker.can_finish == False # Must report to supervisor
# AUTONOMOUS can use tools AND finish
autonomous = Agent(name="Assistant", model=model, role="autonomous", tools=[search_tool])
assert autonomous.can_use_tools == True
assert autonomous.can_finish == True # Can conclude independently
2. System Prompt Injection - How the LLM thinks:
Each role gets a specialized system prompt that guides its behavior:
- WORKER: "Execute tasks using tools... You cannot finish the workflow yourself"
- SUPERVISOR: "Delegate tasks to workers... Provide FINAL ANSWER when complete"
- AUTONOMOUS: "Work independently... Finish when the task is complete"
- REVIEWER: "Evaluate work quality... Approve or request revisions"
Example - See the difference:
# WORKER won't conclude
worker = Agent(name="Worker", model=model, role="worker")
result = await worker.run("What is Python?")
# Response: "Python is a programming language..." (no conclusion)
# AUTONOMOUS will conclude
autonomous = Agent(name="Assistant", model=model, role="autonomous")
result = await autonomous.run("What is Python?")
# Response: "FINAL ANSWER: Python is a high-level programming language..."
When to Use Each Role¶
WORKER - Task execution:
# ✅ Good: Has tools, reports results
data_analyst = Agent(
name="DataAnalyst",
model=model,
role="worker",
tools=[load_data, analyze, plot],
instructions="Analyze datasets and create visualizations",
)
# In multi-agent setup, supervisor coordinates workers
SUPERVISOR - Team coordination:
# ✅ Good: Delegates to workers, makes final decisions
manager = Agent(
name="Manager",
model=model,
role="supervisor",
instructions="Coordinate the research team to deliver comprehensive reports",
)
# LLM will try to delegate: "DELEGATE TO researcher: Find information about..."
AUTONOMOUS - Independent agents:
# ✅ Good: Standalone assistant, full capability
assistant = Agent(
name="Assistant",
model=model,
role="autonomous",
tools=[search, calculator, send_email],
instructions="Help users with their requests",
)
# Can use tools AND provide final answers independently
REVIEWER - Quality control:
# ✅ Good: Evaluates quality, no tool execution
qa = Agent(
name="QualityAssurance",
model=model,
role="reviewer",
instructions="Review code for quality, security, and best practices",
)
# LLM focuses on judgment: "FINAL ANSWER: Approved" or "REVISION NEEDED: ..."
Capability Overrides¶
Override role capabilities when needed:
# Hybrid: Reviewer that can use tools
tech_reviewer = Agent(
name="TechnicalReviewer",
model=model,
role="reviewer",
can_use_tools=True, # Override! Run automated checks
tools=[lint_code, run_tests],
)
See examples/basics/role_behavior.py for real LLM behavior examples.
TaskBoard¶
Enable task tracking for complex, multi-step work:
agent = Agent(
name="ProjectManager",
model="gpt-5.4-mini",
instructions="You are a helpful project manager.",
taskboard=True, # Enables task tracking tools
)
result = await agent.run("Plan a REST API for a todo app")
# Check taskboard after execution
print(agent.taskboard.summary())
TaskBoard Tools¶
When taskboard=True, the agent gets these tools:
| Tool | Description |
|---|---|
| `add_task` | Create a new task to track |
| `update_task` | Update task status (pending, in_progress, completed, failed, blocked) |
| `add_note` | Record observations and findings |
| `verify_task` | Verify a task was completed correctly |
| `get_taskboard_status` | See overall progress |
How It Works¶
- Instructions injected — Agent receives guidance on when/how to use taskboard
- LLM decides — For complex tasks, the agent breaks them into subtasks
- Progress tracked — Tasks have status, notes, and verification
- Summary available — agent.taskboard.summary() shows progress
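The flow above can be sketched with a minimal in-memory board. This is a toy illustration of the add/update/summary lifecycle, not cogent's TaskBoard implementation:

```python
class MiniTaskBoard:
    """Toy task tracker illustrating add/update/summary (illustrative only)."""
    STATUSES = {"pending", "in_progress", "completed", "failed", "blocked"}

    def __init__(self):
        self.tasks = {}

    def add_task(self, name: str) -> None:
        self.tasks[name] = "pending"

    def update_task(self, name: str, status: str) -> None:
        if status not in self.STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.tasks[name] = status

    def summary(self) -> str:
        done = sum(1 for s in self.tasks.values() if s == "completed")
        return f"{done}/{len(self.tasks)} tasks completed"

board = MiniTaskBoard()
board.add_task("design schema")
board.add_task("write endpoints")
board.update_task("design schema", "completed")
```

The agent's tool calls play the role of `add_task`/`update_task` here; the summary string is what a progress check would surface.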
TaskBoard Configuration¶
from cogent.agent.taskboard import TaskBoardConfig
agent = Agent(
name="Worker",
model="gpt4",
taskboard=TaskBoardConfig(
include_instructions=True, # Inject usage instructions (default: True)
max_tasks=50, # Maximum tasks to track
track_time=True, # Track task timing
),
)
See examples/advanced/taskboard.py for a complete example.
Memory (4-Layer Architecture)¶
Cogent provides a 4-layer memory architecture:
| Layer | Parameter | Purpose |
|---|---|---|
| 1 | `conversation=True` | Thread-based message history (default ON) |
| 2 | `acc=True` | Agentic Context Compression - prevents drift |
| 3 | `memory=True` | Long-term memory with remember/recall tools |
| 4 | `cache=True` | Semantic cache for tool outputs |
See Memory Module for detailed explanation of how each layer works.
# Layer 3: Long-term memory with tools
agent = Agent(name="Assistant", model="gpt4", memory=True)
# Agent gets remember(), recall(), forget() tools

# Layer 4: Semantic cache for expensive tools
agent = Agent(name="Assistant", model="gpt4", cache=True)

# All layers together
agent = Agent(
name="SuperAgent",
model="gpt4",
acc=True, # Prevents context drift
memory=True, # Long-term facts
cache=True, # Cache tool outputs
)
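Conceptually, layer 3's remember/recall/forget tools maintain a store of facts the agent can query later. A toy sketch using naive keyword matching in place of real semantic retrieval (illustrative only; cogent's implementation is richer):

```python
class MiniMemory:
    """Toy long-term memory: remember/recall/forget over plain strings."""

    def __init__(self):
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def recall(self, query: str) -> list[str]:
        # Naive substring match stands in for semantic retrieval.
        return [f for f in self.facts if query.lower() in f.lower()]

    def forget(self, query: str) -> None:
        self.facts = [f for f in self.facts if query.lower() not in f.lower()]

mem = MiniMemory()
mem.remember("User prefers dark mode")
mem.remember("User's timezone is UTC+2")
```

When `memory=True`, the LLM decides when to call these tools; the store itself is just the persistence layer behind them.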
ACC (Agentic Context Compression)¶
For long conversations (>10 turns), enable ACC to prevent memory drift:
from cogent.memory.acc import AgentCognitiveCompressor
# Simple: Enable with defaults
agent = Agent(name="Assistant", model="gpt4", acc=True)
# Advanced: Custom ACC with specific bounds
acc = AgentCognitiveCompressor(max_constraints=5, max_entities=20)
agent = Agent(name="Assistant", model="gpt4", acc=acc)
See docs/acc.md for detailed ACC documentation.
Semantic Cache¶
For expensive tools, enable semantic caching to avoid redundant calls:
from cogent.memory import SemanticCache
from cogent.models import create_embedding
# Simple: Enable with defaults
agent = Agent(name="Assistant", model="gpt4", cache=True)
# Advanced: Custom SemanticCache instance
embed = create_embedding("openai", "text-embedding-3-small")
cache = SemanticCache(
embedding=embed,
similarity_threshold=0.90, # Stricter matching
max_entries=5000,
default_ttl=3600, # 1 hour
)
agent = Agent(name="Assistant", model="gpt4", cache=cache)
See docs/memory.md#semantic-cache for detailed cache documentation.
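Conceptually, a semantic cache returns a stored result when a new query's embedding is close enough to a cached one, as measured against `similarity_threshold`. A sketch with toy 2-D "embeddings" and cosine similarity (the real cache uses a proper embedding model; this only illustrates the lookup rule):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

class MiniSemanticCache:
    """Toy semantic cache: reuse a result when similarity >= threshold."""

    def __init__(self, threshold=0.90):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached_value)

    def get(self, emb):
        for cached_emb, value in self.entries:
            if cosine(emb, cached_emb) >= self.threshold:
                return value  # near-duplicate query: skip the expensive call
        return None

    def put(self, emb, value):
        self.entries.append((emb, value))

cache = MiniSemanticCache(threshold=0.90)
cache.put((1.0, 0.0), "result-A")
```

A stricter threshold (e.g. 0.95) reduces false hits at the cost of more cache misses, which is the trade-off the `similarity_threshold` parameter controls.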
Resilience¶
Built-in fault tolerance with retries, circuit breakers, and fallbacks:
from cogent.agent import ResilienceConfig, RetryPolicy
agent = Agent(
name="Worker",
model=model,
resilience=ResilienceConfig(
retry_policy=RetryPolicy(
max_retries=3,
base_delay=1.0,
strategy="exponential",
),
),
)
Resilience Components¶
- RetryPolicy: Configure retry behavior with exponential/linear backoff
- CircuitBreaker: Prevent cascading failures
- FallbackRegistry: Define fallback behaviors for failures
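The retry policy above implies a concrete delay schedule. A sketch of how the delays are typically derived from `base_delay` and the strategy name (the standard `base * 2**attempt` formula; cogent's exact jitter and cap behavior may differ):

```python
def backoff_delays(max_retries: int, base_delay: float, strategy: str = "exponential"):
    """Delay (seconds) before each retry attempt; attempt 0 is the first retry."""
    if strategy == "exponential":
        return [base_delay * (2 ** attempt) for attempt in range(max_retries)]
    if strategy == "linear":
        return [base_delay * (attempt + 1) for attempt in range(max_retries)]
    raise ValueError(f"unknown strategy: {strategy}")
```

With `max_retries=3` and `base_delay=1.0`, exponential backoff waits 1s, 2s, then 4s; production implementations usually add random jitter to avoid thundering herds.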
Structured Output Self-Correction¶
When returns= is used, failed validation attempts are retried using conversation-based feedback. Rather than blindly re-sending the original prompt, the agent appends a correction turn that shows the model exactly what it produced and why it failed:
Human: <task + schema instruction> ← attempt 1
AI: {"type": "array", ...} ← bad output (model echoed schema)
Human: ⚠️ Your previous response failed validation.
Validation error: Expected list, got dict
Your response was:
```
{"type": "array", ...}
```
Please respond again with ONLY valid JSON that matches the required schema.
AI: ["python", "async", "fastapi"] ← self-corrected
This mirrors the Reflexion pattern — the model has full context of what it did wrong and can self-correct without needing a new conversation. Non-retryable errors (auth failures, exhausted rate-limit retries) propagate immediately and do not consume structured output retry budget.
The number of attempts is controlled by max_structured_output_retries on AgentConfig (default: 3).
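The retry transcript above corresponds to a simple loop: validate the output, and on failure append a correction turn containing the error and the bad output before re-asking. A sketch with a stubbed model (the message wording is an assumption modeled on the transcript; this is not cogent's validator):

```python
import json

def run_with_self_correction(model_fn, messages, validate, max_retries=3):
    """model_fn(messages) -> str; validate(str) raises ValueError on bad output."""
    for _ in range(max_retries + 1):
        output = model_fn(messages)
        try:
            validate(output)
            return output
        except ValueError as err:
            # Show the model what it produced and why it failed (Reflexion-style).
            messages = messages + [
                ("ai", output),
                ("human",
                 "Your previous response failed validation.\n"
                 f"Validation error: {err}\n"
                 f"Your response was:\n{output}\n"
                 "Please respond again with ONLY valid JSON matching the schema."),
            ]
    raise ValueError("structured output retries exhausted")

# Stub model: echoes the schema on the first turn, corrects itself once prompted.
def stub_model(messages):
    return '["python", "async"]' if len(messages) > 1 else '{"type": "array"}'

def must_be_list(text):
    if not isinstance(json.loads(text), list):
        raise ValueError("Expected list, got dict")
```

Note how the loop grows the conversation rather than resending the original prompt; that growing context is what lets the model self-correct.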
Human-in-the-Loop (HITL)¶
Enable human oversight for sensitive operations:
from cogent.agent import HumanDecision, InterruptedException

agent = Agent(
name="Executor",
model=model,
tools=[delete_file, send_email],
interrupt_on={
"tools": ["delete_file", "send_email"], # Require approval
},
)
try:
result = await agent.run("Delete temp files")
except InterruptedException as e:
# Human reviews pending action
decision = HumanDecision(approved=True)
result = await agent.resume(e.state, decision)
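The interrupt mechanism boils down to a gate in front of protected tool calls: if the tool is in the interrupt set and no approval exists, execution stops and surfaces to the human. A toy sketch (names are illustrative, not cogent's internals):

```python
class PendingApproval(Exception):
    """Raised when a guarded tool is called without human approval."""
    def __init__(self, tool_name):
        super().__init__(f"approval required: {tool_name}")
        self.tool_name = tool_name

def call_tool(tool_name, tools, interrupt_on, approved=frozenset()):
    """Run a tool, raising if it needs approval it doesn't have."""
    if tool_name in interrupt_on and tool_name not in approved:
        raise PendingApproval(tool_name)
    return tools[tool_name]()

tools = {"delete_file": lambda: "deleted", "list_files": lambda: ["a.txt"]}
guarded = {"delete_file"}
```

Resuming after approval corresponds to retrying the same call with the tool added to the approved set.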
Reasoning¶
Enable extended thinking for complex problems with AI-controlled reasoning rounds.
Basic Usage¶
from cogent import Agent
from cogent.agent.reasoning import ReasoningConfig, ReasoningStyle
# Simple: Enable with defaults
agent = Agent(
name="Analyst",
model=model,
reasoning=True, # Default config
)
result = await agent.run("Analyze this complex problem...")
Custom Configuration¶
# Full control with ReasoningConfig
agent = Agent(
name="DeepThinker",
model=model,
reasoning=ReasoningConfig(
max_thinking_rounds=15, # AI decides when ready (up to 15)
style=ReasoningStyle.CRITICAL, # Critical reasoning style
show_thinking=True, # Include thoughts in output
),
)
Per-Call Overrides¶
Enable or customize reasoning for specific calls:
# Agent without reasoning by default
agent = Agent(name="Helper", model=model, reasoning=False)
# Simple task - no reasoning
result = await agent.run("What time is it?")
# Complex task - enable reasoning
result = await agent.run(
"Analyze this codebase architecture",
reasoning=True, # Enable for this call
)
# Very complex - custom config
result = await agent.run(
"Debug this complex issue",
reasoning=ReasoningConfig(
max_thinking_rounds=10,
style=ReasoningStyle.ANALYTICAL,
),
)
Reasoning Styles¶
- ANALYTICAL: Step-by-step logical breakdown (default)
- EXPLORATORY: Consider multiple approaches
- CRITICAL: Question assumptions, find flaws
- CREATIVE: Generate novel solutions
AI-Controlled Rounds¶
The AI signals when reasoning is complete via <ready>true</ready> tags. The max_thinking_rounds is a safety limit, not a fixed count:
ReasoningConfig.standard() # max 10 rounds (safety net)
ReasoningConfig.deep() # max 15 rounds (complex problems)
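The `<ready>true</ready>` protocol amounts to a bounded loop that stops early when the tag appears. A sketch with a stubbed thinker (the tag check shown is an assumption based on the description above, not cogent's parser):

```python
def think_until_ready(think_fn, max_thinking_rounds: int) -> list[str]:
    """Collect thinking rounds until the model signals <ready>true</ready>."""
    thoughts = []
    for _ in range(max_thinking_rounds):  # safety limit, not a fixed count
        thought = think_fn(len(thoughts))
        thoughts.append(thought)
        if "<ready>true</ready>" in thought:
            break  # AI decided it is done before hitting the limit
    return thoughts

# Stub: declares readiness on the third round.
def stub_think(round_no: int) -> str:
    return "still thinking" if round_no < 2 else "done <ready>true</ready>"
```

A model that never emits the tag simply runs out the safety limit, which is why `max_thinking_rounds` is described as a net rather than a target.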
Structured Output¶
Enforce response schemas with validation:
from pydantic import BaseModel, Field
from typing import Literal, Union
from enum import Enum
# Structured models
class ContactInfo(BaseModel):
name: str = Field(description="Full name")
email: str = Field(description="Email address")
phone: str | None = Field(None, description="Phone number")
# Per-call schema (recommended — most flexible)
agent = Agent(name="Extractor", model=model)
result = await agent.run(
"Extract: John Doe, john@acme.com",
returns=ContactInfo, # Schema for this call only
)
print(result.content.data) # ContactInfo(name="John Doe", ...)
# Bare types - return primitive values directly
result = await agent.run("Review this code", returns=Literal["PROCEED", "REVISE"])
print(result.content.data) # "PROCEED" (bare string, not wrapped)
# Collections - use bare types directly
result = await agent.run("Extract tags", returns=list[str])
print(result.content.data) # ["python", "async", "fastapi", ...]
result = await agent.run("Unique categories", returns=set[str])
print(result.content.data) # {"ai", "python", "llm"}
result = await agent.run("Player: Sarah, 25, 95.5", returns=tuple[str, int, float])
print(result.content.data) # ("Sarah", 25, 95.5)
# Union types - polymorphic responses
class Success(BaseModel):
status: Literal["success"] = "success"
result: str
class Error(BaseModel):
status: Literal["error"] = "error"
message: str
result = await agent.run("Handle request", returns=Union[Success, Error])
# Agent chooses which schema based on content
# Enum types
class Priority(str, Enum):
LOW = "low"
HIGH = "high"
result = await agent.run("Critical issue!", returns=Priority)
print(result.content.data) # Priority.HIGH
# Dynamic structure - agent decides fields
result = await agent.run("Analyze user feedback", returns=dict)
print(result.content.data) # {"sentiment": "positive", "score": 8, ...}
# None type - confirmations
result = await agent.run("Delete temp files", returns=type(None))
print(result.content.data) # None
# Other bare types: str, int, bool, float
result = await agent.run("How many items?", returns=int)
print(result.content.data) # 42 (bare int)
Supported schema types:
- Pydantic models - Full validation with BaseModel
- Dataclasses - Standard Python dataclasses
- TypedDict - Typed dictionaries
- Bare primitives - str, int, bool, float
- Bare Literal - Literal["A", "B", ...] for constrained choices
- Collections - list[T], set[T], tuple[T, ...] — bare types work without any wrapper
- Union types - Union[A, B] for polymorphic responses
- Enum types - class MyEnum(str, Enum) for type-safe choices
- None type - type(None) for confirmation responses
- dict - Agent-decided dynamic structure (any fields)
- JSON Schema - Raw JSON Schema dicts
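The variety of `returns=` types comes down to coercing the model's JSON output by schema kind. A simplified coercion covering a few of the bare types from the list above (Pydantic, dataclass, and Union handling omitted; this is an illustration, not cogent's validator):

```python
import json
import typing

# Hypothetical constrained-choice schema used in the assertions below.
Verdict = typing.Literal["PROCEED", "REVISE"]

def coerce(raw_json: str, schema):
    """Coerce a JSON string to a bare schema type (subset: primitives, list[T], Literal)."""
    value = json.loads(raw_json)
    origin = typing.get_origin(schema)
    if origin is typing.Literal:
        if value not in typing.get_args(schema):
            raise ValueError(f"{value!r} not in {typing.get_args(schema)}")
        return value
    if origin is list:
        (item_type,) = typing.get_args(schema)
        return [item_type(v) for v in value]
    if schema in (str, int, float, bool):
        return schema(value)
    raise TypeError(f"unsupported schema in this sketch: {schema}")
```

Dispatching on `typing.get_origin`/`get_args` is the standard way to introspect generics like `list[str]` and `Literal[...]` at runtime.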
Streaming¶
Enable token-by-token streaming:
agent = Agent(
name="Writer",
model=model,
stream=True,
)
async for chunk in agent.run("Write a story", stream=True):
print(chunk.content, end="", flush=True)
Spawning¶
Dynamic agent creation at runtime:
from cogent.agent import SpawningConfig, AgentSpec
agent = Agent(
name="Coordinator",
model=model,
spawning=SpawningConfig(
allowed_specs=[
AgentSpec(name="researcher", tools=["search"]),
AgentSpec(name="writer", tools=["write"]),
],
),
)
# Agent can spawn sub-agents during execution
result = await agent.run("Research and write about AI")
Subagents (Native Delegation)¶
New in v0.x.x: Native subagent support with full metadata preservation.
Delegate tasks to specialist agents while preserving Response metadata (tokens, duration, delegation chain):
from cogent import Agent
# Create specialist agents
data_analyst = Agent(
name="data_analyst",
model="gpt-5.4-mini",
instructions="Analyze data and provide statistical insights.",
)
market_researcher = Agent(
name="market_researcher",
model="gpt-5.4-mini",
instructions="Research market trends and competitive landscape.",
)
# Create coordinator with subagents
coordinator = Agent(
name="coordinator",
model="gpt-5.4-mini",
instructions="""Coordinate research tasks:
- Use data_analyst for numerical analysis
- Use market_researcher for market trends
Synthesize their findings.""",
# Simply pass the agents - uses their names automatically
subagents=[data_analyst, market_researcher],
)
# Coordinator delegates automatically
response = await coordinator.run("Analyze Q4 2025 e-commerce growth")
# Full metadata preserved
print(f"Total tokens: {response.metadata.tokens.total_tokens}") # Includes all subagents
print(f"Subagent calls: {len(response.subagent_responses)}")
for sub_resp in response.subagent_responses:
print(f" {sub_resp.metadata.agent}: {sub_resp.metadata.tokens.total_tokens} tokens")
Structured Output from Subagents¶
Use returns= on a subagent to declare the schema it produces. The coordinator's LLM receives clean JSON instead of a plain string, enabling it to reason over the structured result:
from pydantic import BaseModel
from typing import Literal
class ReviewScore(BaseModel):
score: int
verdict: Literal["approved", "needs_revision"]
feedback: str
reviewer = Agent(
name="reviewer",
model="gpt-5.4-mini",
returns=ReviewScore, # Declares output schema when used as a subagent
instructions="Review content and score it 1-10.",
)
# 'writer' is another subagent alongside 'reviewer' (defined here for completeness)
writer = Agent(
name="writer",
model="gpt-5.4-mini",
instructions="Draft short marketing copy.",
)
editor = Agent(
name="editor",
model="gpt-5.4-mini",
subagents=[writer, reviewer],
)
# reviewer.run() is called with returns=ReviewScore automatically;
# editor's LLM sees {"score": 8, "verdict": "approved", "feedback": "..."} ✅
result = await editor.run("Write and review a product tweet")
Key Benefits:
- ✅ Accurate token counting (coordinator + all subagents, including reasoning tokens)
- ✅ Full delegation chain tracking
- ✅ Context propagates automatically
- ✅ Observable with [subagent-call], [subagent-result] events
- ✅ Zero LLM behavior changes (uses existing tool calling)
- ✅ Subagents can declare their output schema via returns=
Example:
# List syntax (recommended) - uses agent names as tool names
coordinator = Agent(
name="coordinator",
model="gpt-5.4",
subagents=[specialist],
)
# Dict syntax - override tool names
coordinator = Agent(
name="coordinator",
model="gpt-5.4",
subagents={"custom_name": specialist},
)
See docs/subagents.md for complete documentation.
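Metadata preservation means the coordinator's totals fold in every subagent's usage, recursively. A sketch of that aggregation (field names modeled on the snippet above; illustrative only, not cogent's Response type):

```python
from dataclasses import dataclass, field

@dataclass
class Usage:
    total_tokens: int

@dataclass
class MiniResponse:
    agent: str
    tokens: Usage
    subagent_responses: list = field(default_factory=list)

def total_tokens(resp: MiniResponse) -> int:
    """Coordinator tokens plus all (possibly nested) subagent tokens."""
    return resp.tokens.total_tokens + sum(
        total_tokens(sub) for sub in resp.subagent_responses
    )

run = MiniResponse("coordinator", Usage(1200), [
    MiniResponse("data_analyst", Usage(800)),
    MiniResponse("market_researcher", Usage(950)),
])
```

The recursion matters: a subagent that itself delegates contributes its own subagents' tokens to the grand total.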
Observability¶
Built-in observability for standalone usage:
from cogent import Agent
from cogent.observability import ObservabilityLevel
# Boolean shorthand
agent = Agent(name="Worker", model=model, verbosity=True) # Progress level
# String levels
agent = Agent(name="Worker", model=model, verbosity="debug")
# Enum (explicit)
agent = Agent(name="Worker", model=model, verbosity=ObservabilityLevel.DEBUG)
# Integer (0-5)
agent = Agent(name="Worker", model=model, verbosity=4) # DEBUG
# Advanced: Full control with observer
from cogent.observability import Observer
observer = Observer(level="debug")
agent = Agent(name="Worker", model=model, observer=observer)
Verbosity levels:
| Level | Int | String | Description |
|---|---|---|---|
| `OFF` | 0 | `"off"` | No output |
| `RESULT` | 1 | `"result"`, `"minimal"` | Only final results |
| `PROGRESS` | 2 | `"progress"`, `"normal"` | Key milestones (default for `True`) |
| `DETAILED` | 3 | `"detailed"`, `"verbose"` | Tool calls, timing |
| `DEBUG` | 4 | `"debug"` | Everything including internal events |
| `TRACE` | 5 | `"trace"` | Maximum detail + execution graph |
Priority: observer parameter takes precedence over verbosity.
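The bool/int/string/enum forms of `verbosity` all reduce to one normalization step. A sketch (alias map transcribed from the table above; the exact acceptance logic is an assumption based on the examples):

```python
# String aliases from the verbosity table, mapped to integer levels.
LEVELS = {
    "off": 0, "result": 1, "minimal": 1, "progress": 2, "normal": 2,
    "detailed": 3, "verbose": 3, "debug": 4, "trace": 5,
}

def normalize_verbosity(value) -> int:
    """Map bool/int/str verbosity to an integer level 0-5."""
    if isinstance(value, bool):      # check bool before int: True is an int subclass
        return 2 if value else 0     # True -> PROGRESS (default)
    if isinstance(value, int):
        if not 0 <= value <= 5:
            raise ValueError(f"verbosity out of range: {value}")
        return value
    return LEVELS[value.lower()]
```

The `bool` check must precede the `int` check, since `isinstance(True, int)` is true in Python.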
API Reference¶
Agent Methods¶
| Method | Description |
|---|---|
| `run(task, context)` | Execute a task with optional context |
| `chat(message, thread_id)` | Chat with memory support |
| `think(prompt)` | Single reasoning step |
| `stream_chat(message)` | Streaming chat response |
| `resume(state, decision)` | Resume after HITL interrupt |
See docs/context.md for context patterns.
Model-Specific Configuration¶
Pass model-specific parameters via model_kwargs:
from cogent import Agent
# Gemini with thinking enabled
agent = Agent(
name="Thinker",
model="gemini-2.5-flash",
model_kwargs={"thinking_budget": 16384}, # Enable native thinking
)
# OpenAI with specific settings
agent = Agent(
name="Assistant",
model="gpt-5.4",
model_kwargs={"seed": 42, "logprobs": True},
)
# Any model-specific parameter
agent = Agent(
name="Custom",
model="anthropic:claude-sonnet-4",
model_kwargs={"top_k": 10},
)
Note: model_kwargs only applies when using string model names. Ignored when passing ChatModel instances (configure the instance directly instead).
AgentConfig Fields¶
| Field | Type | Description |
|---|---|---|
| `name` | `str` | Agent name |
| `role` | `AgentRole` | Agent role |
| `model` | `str \| BaseChatModel` | Chat model (string or instance) |
| `model_kwargs` | `dict \| None` | Model-specific parameters (for string models) |
| `tools` | `list[str]` | Tool names |
| `system_prompt` | `str` | System instructions |
| `resilience_config` | `ResilienceConfig` | Fault tolerance |
| `interrupt_on` | `dict` | HITL triggers |
| `stream` | `bool` | Enable streaming |
| `returns` | `type \| dict \| None` | Structured output schema when used as a subagent |
Exports¶
from cogent.agent import (
# Core
Agent,
AgentConfig,
AgentState,
# Memory
AgentMemory,
MemorySnapshot,
InMemorySaver,
ThreadConfig,
# Roles
RoleBehavior,
get_role_prompt,
get_role_behavior,
# Resilience
RetryStrategy,
RetryPolicy,
CircuitBreaker,
ResilienceConfig,
ToolResilience,
# HITL
InterruptReason,
HumanDecision,
InterruptedException,
# TaskBoard
TaskBoard,
TaskBoardConfig,
Task,
TaskStatus,
# Reasoning
ReasoningConfig,
ReasoningStyle,
ThinkingStep,
# Output
ResponseSchema,
StructuredResult,
# Spawning
AgentSpec,
SpawningConfig,
SpawnManager,
)