ant_ai.core.result
TransitionAction
Bases: StrEnum
Signal returned by a step telling the executor how to proceed.
Source code in src/ant_ai/core/result.py (lines 11-15)
Transition
pydantic-model
Bases: BaseModel
Routing instruction attached to every `StepResult`.

`LLMStep` emits `CONTINUE, next_step="tool"` when the model requested
tool calls, or `END` when it produced a final text answer. `ToolStep`
emits `CONTINUE, next_step="llm"` after executing tools, or `END` when
a tool signalled that human clarification is needed.
JSON schema:
{
"$defs": {
"TransitionAction": {
"description": "Signal returned by a step telling the executor how to proceed.",
"enum": [
"continue",
"end"
],
"title": "TransitionAction",
"type": "string"
}
},
"description": "Routing instruction attached to every `StepResult`.\n\n`LLMStep` emits `CONTINUE, next_step=\"tool\"` when the model requested\ntool calls, or `END` when it produced a final text answer. `ToolStep`\nemits `CONTINUE, next_step=\"llm\"` after executing tools, or `END` when\na tool signalled that human clarification is needed.",
"properties": {
"action": {
"$ref": "#/$defs/TransitionAction",
"default": "continue",
"description": "Route execution to next_step on CONTINUE, or exit the loop on END."
},
"next_step": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Name of the registered step to run next. Only used when action is CONTINUE.",
"title": "Next Step"
}
},
"title": "Transition",
"type": "object"
}
Config:
- frozen: True

Fields:
- action (TransitionAction)
- next_step (str | None)
Source code in src/ant_ai/core/result.py (lines 18-36)
action
pydantic-field
action: TransitionAction = CONTINUE
Route execution to next_step on CONTINUE, or exit the loop on END.
next_step
pydantic-field
next_step: str | None = None
Name of the registered step to run next. Only used when action is CONTINUE.
LLMOutput
pydantic-model
Bases: BaseModel
Output produced by a single model call inside `LLMStep`.
JSON schema:
{
"$defs": {
"ToolCall": {
"description": "Single tool call object inside assistant.tool_calls (OpenAI schema).",
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"type": {
"default": "function",
"title": "Type",
"type": "string"
},
"function": {
"$ref": "#/$defs/ToolFunction"
}
},
"required": [
"id",
"function"
],
"title": "ToolCall",
"type": "object"
},
"ToolFunction": {
"description": "Inner function payload for a tool call (OpenAI schema).",
"properties": {
"name": {
"title": "Name",
"type": "string"
},
"arguments": {
"title": "Arguments",
"type": "string"
}
},
"required": [
"name",
"arguments"
],
"title": "ToolFunction",
"type": "object"
}
},
"description": "Output produced by a single model call inside `LLMStep`.",
"properties": {
"kind": {
"const": "llm",
"default": "llm",
"title": "Kind",
"type": "string"
},
"raw": {
"description": "Raw text or JSON string as returned by the model.",
"title": "Raw",
"type": "string"
},
"tool_calls": {
"default": [],
"description": "Tool calls requested by the model. Empty when the model produced a final text answer.",
"items": {
"$ref": "#/$defs/ToolCall"
},
"title": "Tool Calls",
"type": "array"
}
},
"required": [
"raw"
],
"title": "LLMOutput",
"type": "object"
}
Config:
- frozen: True

Fields:
- kind (Literal['llm'])
- raw (str)
- tool_calls (tuple[ToolCall, ...])
Source code in src/ant_ai/core/result.py (lines 39-56)
raw
pydantic-field
raw: str
Raw text or JSON string as returned by the model.
tool_calls
pydantic-field
tool_calls: tuple[ToolCall, ...] = ()
Tool calls requested by the model. Empty when the model produced a final text answer.
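To illustrate the two shapes an `LLMOutput` can take, here is a stdlib sketch with frozen dataclasses mirroring the pydantic models (structure taken from the JSON schema above; the example string values are invented):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolFunction:
    name: str
    arguments: str  # JSON-encoded arguments, per the OpenAI schema


@dataclass(frozen=True)
class ToolCall:
    id: str
    function: ToolFunction
    type: str = "function"


@dataclass(frozen=True)
class LLMOutput:
    raw: str
    tool_calls: tuple[ToolCall, ...] = ()
    kind: str = "llm"


# Shape 1: the model requested a tool call, so tool_calls is non-empty.
call = ToolCall(id="call_1", function=ToolFunction(name="search", arguments='{"q": "ants"}'))
with_tools = LLMOutput(raw="", tool_calls=(call,))

# Shape 2: the model produced a final text answer; tool_calls stays empty.
final = LLMOutput(raw="Ants are eusocial insects.")
```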
ToolOutput
pydantic-model
Bases: BaseModel
Output produced by `ToolStep` after executing one or more tool calls.
JSON schema:
{
"description": "Output produced by `ToolStep` after executing one or more tool calls.",
"properties": {
"kind": {
"const": "tool",
"default": "tool",
"title": "Kind",
"type": "string"
},
"results": {
"default": [],
"description": "Serialized tool results, each with tool_call_id, name, and content keys.",
"items": {
"additionalProperties": true,
"type": "object"
},
"title": "Results",
"type": "array"
}
},
"title": "ToolOutput",
"type": "object"
}
Config:
- frozen: True

Fields:
- kind (Literal['tool'])
- results (tuple[dict[str, Any], ...])
Source code in src/ant_ai/core/result.py (lines 59-69)
results
pydantic-field
results: tuple[dict[str, Any], ...] = ()
Serialized tool results, each with tool_call_id, name, and content keys.
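A sketch of the result shape, again with a frozen dataclass standing in for the pydantic model (the dict keys come from the field description above; the example values are invented):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolOutput:
    results: tuple[dict, ...] = ()
    kind: str = "tool"


# Each result dict carries the tool_call_id so the next LLM call can pair
# a tool's content with the request that triggered it.
out = ToolOutput(results=(
    {"tool_call_id": "call_1", "name": "search", "content": "3 hits"},
))
```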
ClarificationNeededOutput
pydantic-model
Bases: BaseModel
Signals that a tool needs human input before execution can continue.

Raised inside `ToolStep` when a tool returns a clarification request
(e.g. via `HumanInputNeededTool.ask()`). The react loop returns this to
its caller immediately, pausing the agent until the user answers.
JSON schema:
{
"description": "Signals that a tool needs human input before execution can continue.\n\nRaised inside `ToolStep` when a tool returns a clarification request\n(e.g. via `HumanInputNeededTool.ask()`). The react loop returns this to\nits caller immediately, pausing the agent until the user answers.",
"properties": {
"kind": {
"const": "human",
"default": "human",
"title": "Kind",
"type": "string"
},
"question": {
"description": "The question to present to the user.",
"title": "Question",
"type": "string"
},
"tool_call_id": {
"default": "",
"description": "ID of the tool call that triggered the clarification request.",
"title": "Tool Call Id",
"type": "string"
},
"tool_name": {
"default": "",
"description": "Name of the tool that triggered the clarification request.",
"title": "Tool Name",
"type": "string"
}
},
"required": [
"question"
],
"title": "ClarificationNeededOutput",
"type": "object"
}
Config:
- frozen: True

Fields:
- kind (Literal['human'])
- question (str)
- tool_call_id (str)
- tool_name (str)
Source code in src/ant_ai/core/result.py (lines 72-94)
question
pydantic-field
question: str
The question to present to the user.
tool_call_id
pydantic-field
tool_call_id: str = ''
ID of the tool call that triggered the clarification request.
tool_name
pydantic-field
tool_name: str = ''
Name of the tool that triggered the clarification request.
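A sketch of how such an output might look (frozen dataclass standing in for the pydantic model; the question, tool name, and call id here are invented example values):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ClarificationNeededOutput:
    question: str
    tool_call_id: str = ""
    tool_name: str = ""
    kind: str = "human"


# A tool asked for human input; the react loop surfaces the question
# to its caller and pauses until the user answers.
pause = ClarificationNeededOutput(
    question="Which account should I use?",
    tool_call_id="call_7",
    tool_name="ask_human",
)
```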
StepResult
pydantic-model
Bases: BaseModel
The immutable result of running a single `Step`.

No references to `State` or any mutable object. State for the next
iteration is passed into the executor, never stored here.
JSON schema:
{
"$defs": {
"ClarificationNeededOutput": {
"description": "Signals that a tool needs human input before execution can continue.\n\nRaised inside `ToolStep` when a tool returns a clarification request\n(e.g. via `HumanInputNeededTool.ask()`). The react loop returns this to\nits caller immediately, pausing the agent until the user answers.",
"properties": {
"kind": {
"const": "human",
"default": "human",
"title": "Kind",
"type": "string"
},
"question": {
"description": "The question to present to the user.",
"title": "Question",
"type": "string"
},
"tool_call_id": {
"default": "",
"description": "ID of the tool call that triggered the clarification request.",
"title": "Tool Call Id",
"type": "string"
},
"tool_name": {
"default": "",
"description": "Name of the tool that triggered the clarification request.",
"title": "Tool Name",
"type": "string"
}
},
"required": [
"question"
],
"title": "ClarificationNeededOutput",
"type": "object"
},
"LLMOutput": {
"description": "Output produced by a single model call inside `LLMStep`.",
"properties": {
"kind": {
"const": "llm",
"default": "llm",
"title": "Kind",
"type": "string"
},
"raw": {
"description": "Raw text or JSON string as returned by the model.",
"title": "Raw",
"type": "string"
},
"tool_calls": {
"default": [],
"description": "Tool calls requested by the model. Empty when the model produced a final text answer.",
"items": {
"$ref": "#/$defs/ToolCall"
},
"title": "Tool Calls",
"type": "array"
}
},
"required": [
"raw"
],
"title": "LLMOutput",
"type": "object"
},
"StepOutput": {
"discriminator": {
"mapping": {
"human": "#/$defs/ClarificationNeededOutput",
"llm": "#/$defs/LLMOutput",
"tool": "#/$defs/ToolOutput"
},
"propertyName": "kind"
},
"oneOf": [
{
"$ref": "#/$defs/LLMOutput"
},
{
"$ref": "#/$defs/ToolOutput"
},
{
"$ref": "#/$defs/ClarificationNeededOutput"
}
]
},
"ToolCall": {
"description": "Single tool call object inside assistant.tool_calls (OpenAI schema).",
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"type": {
"default": "function",
"title": "Type",
"type": "string"
},
"function": {
"$ref": "#/$defs/ToolFunction"
}
},
"required": [
"id",
"function"
],
"title": "ToolCall",
"type": "object"
},
"ToolFunction": {
"description": "Inner function payload for a tool call (OpenAI schema).",
"properties": {
"name": {
"title": "Name",
"type": "string"
},
"arguments": {
"title": "Arguments",
"type": "string"
}
},
"required": [
"name",
"arguments"
],
"title": "ToolFunction",
"type": "object"
},
"ToolOutput": {
"description": "Output produced by `ToolStep` after executing one or more tool calls.",
"properties": {
"kind": {
"const": "tool",
"default": "tool",
"title": "Kind",
"type": "string"
},
"results": {
"default": [],
"description": "Serialized tool results, each with tool_call_id, name, and content keys.",
"items": {
"additionalProperties": true,
"type": "object"
},
"title": "Results",
"type": "array"
}
},
"title": "ToolOutput",
"type": "object"
},
"Transition": {
"description": "Routing instruction attached to every `StepResult`.\n\n`LLMStep` emits `CONTINUE, next_step=\"tool\"` when the model requested\ntool calls, or `END` when it produced a final text answer. `ToolStep`\nemits `CONTINUE, next_step=\"llm\"` after executing tools, or `END` when\na tool signalled that human clarification is needed.",
"properties": {
"action": {
"$ref": "#/$defs/TransitionAction",
"default": "continue",
"description": "Route execution to next_step on CONTINUE, or exit the loop on END."
},
"next_step": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Name of the registered step to run next. Only used when action is CONTINUE.",
"title": "Next Step"
}
},
"title": "Transition",
"type": "object"
},
"TransitionAction": {
"description": "Signal returned by a step telling the executor how to proceed.",
"enum": [
"continue",
"end"
],
"title": "TransitionAction",
"type": "string"
}
},
"description": "The immutable result of running a single `Step`.\n\nNo references to `State` or any mutable object. State for the next\niteration is passed into the executor, never stored here.",
"properties": {
"output": {
"$ref": "#/$defs/StepOutput",
"description": "What the step produced. Use isinstance against LLMOutput, ToolOutput, or ClarificationNeededOutput before accessing subtype-specific fields."
},
"transition": {
"$ref": "#/$defs/Transition",
"description": "Where to go next. The loop exits on END, or runs transition.next_step on CONTINUE."
}
},
"required": [
"output"
],
"title": "StepResult",
"type": "object"
}
Config:
- frozen: True

Fields:
- output (StepOutput)
- transition (Transition)
Source code in src/ant_ai/core/result.py (lines 103-118)
output
pydantic-field
output: StepOutput
What the step produced. Use isinstance against LLMOutput, ToolOutput, or ClarificationNeededOutput before accessing subtype-specific fields.
transition
pydantic-field
transition: Transition
Where to go next. The loop exits on END, or runs transition.next_step on CONTINUE.