
Add GraphRun object to make use of next more ergonomic #833

Merged
merged 31 commits on Feb 20, 2025
Changes from 21 commits
Commits
31 commits
57905c0
Add GraphRun class
dmontagu Feb 11, 2025
1a09378
Minor improvements
dmontagu Feb 11, 2025
0c3b48d
Move definition of MarkFinalResult to result module
dmontagu Feb 11, 2025
9280adf
A bit more clean-up
dmontagu Feb 11, 2025
7d55afd
Update call_id logic
dmontagu Feb 11, 2025
63df8ba
Minor fixes
dmontagu Feb 11, 2025
6c65095
Update some things
dmontagu Feb 11, 2025
db56e31
Update some comments etc.
dmontagu Feb 12, 2025
9af98e8
Undo kind changes
dmontagu Feb 12, 2025
2100a1a
Merge branch 'main' into dmontagu/graph-run-object
dmontagu Feb 12, 2025
78e85d6
Introduce auxiliary types
dmontagu Feb 12, 2025
e0c716b
Merge main
dmontagu Feb 17, 2025
ef8895a
Address some feedback
dmontagu Feb 18, 2025
13e3b86
result -> node
dmontagu Feb 18, 2025
a08aafa
Rename MarkFinalResult to FinalResult
dmontagu Feb 18, 2025
ff6f699
Remove GraphRunner/AgentRunner and add .iter() API
dmontagu Feb 18, 2025
41bb069
Make result private
dmontagu Feb 18, 2025
b565088
Reduce diff to main and add some docstrings
dmontagu Feb 18, 2025
8d2c74e
Add more docstrings
dmontagu Feb 18, 2025
4bb67a5
Add more docs
dmontagu Feb 18, 2025
a6e6445
Fix various docs references
dmontagu Feb 18, 2025
007d8ca
Fix final docs references
dmontagu Feb 18, 2025
6d532c1
Address some feedback
dmontagu Feb 18, 2025
0745ba9
Update docs
dmontagu Feb 18, 2025
8d86b3a
Fix docs build
dmontagu Feb 18, 2025
bdb5f77
Make the graph_run_result private on AgentRunResult
dmontagu Feb 18, 2025
0d36dbf
Some minor cleanup of reprs
dmontagu Feb 18, 2025
aa8b36a
Merge branch 'main' into dmontagu/graph-run-object
dmontagu Feb 19, 2025
9a676d2
Tweak some APIs
dmontagu Feb 19, 2025
e799024
Rename final_result to result and drop DepsT in some places
dmontagu Feb 20, 2025
c7ab89f
More cleanup
dmontagu Feb 20, 2025
2 changes: 1 addition & 1 deletion Makefile
@@ -64,7 +64,7 @@ testcov: test ## Run tests and generate a coverage report

.PHONY: update-examples
update-examples: ## Update documentation examples
-	uv run -m pytest --update-examples
+	uv run -m pytest --update-examples tests/test_examples.py

# `--no-strict` so you can build the docs without insiders packages
.PHONY: docs
132 changes: 126 additions & 6 deletions docs/agents.md
@@ -62,13 +62,14 @@ print(result.data)

## Running Agents

-There are three ways to run an agent:
+There are four ways to run an agent:

-1. [`agent.run()`][pydantic_ai.Agent.run] — a coroutine which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response
-2. [`agent.run_sync()`][pydantic_ai.Agent.run_sync] — a plain, synchronous function which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response (internally, this just calls `loop.run_until_complete(self.run())`)
-3. [`agent.run_stream()`][pydantic_ai.Agent.run_stream] — a coroutine which returns a [`StreamedRunResult`][pydantic_ai.result.StreamedRunResult], which contains methods to stream a response as an async iterable
+1. [`agent.run()`][pydantic_ai.Agent.run] — a coroutine which returns a [`RunResult`][pydantic_ai.agent.AgentRunResult] containing a completed response.
+2. [`agent.run_sync()`][pydantic_ai.Agent.run_sync] — a plain, synchronous function which returns a [`RunResult`][pydantic_ai.agent.AgentRunResult] containing a completed response (internally, this just calls `loop.run_until_complete(self.run())`).
+3. [`agent.run_stream()`][pydantic_ai.Agent.run_stream] — a coroutine which returns a [`StreamedRunResult`][pydantic_ai.result.StreamedRunResult], which contains methods to stream a response as an async iterable.
+4. [`agent.iter()`][pydantic_ai.Agent.iter] — a context manager which returns an [`AgentRun`][pydantic_ai.agent.AgentRun], an async-iterable over the nodes of the agent's graph.

-Here's a simple example demonstrating all three:
+Here's a simple example demonstrating the first three:

```python {title="run_agent.py"}
from pydantic_ai import Agent
@@ -93,6 +94,125 @@ _(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_

You can also pass messages from previous runs to continue a conversation or provide context, as described in [Messages and Chat History](message-history.md).

---

### Iterating Over an Agent's Graph

In more advanced scenarios, you may want to inspect or manipulate the agent's workflow as it runs. For example, you may want to collect data at each step of the run or manually decide how to proceed based on the node returned. In these situations, you can use the [`Agent.iter`][pydantic_ai.Agent.iter] method, a context manager which returns an [`AgentRun`][pydantic_ai.agent.AgentRun].

#### `async for` iteration

Here's an example of using `async for` with `iter` to record each node the agent executes:

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


async def main():
    nodes = []
    # Begin an AgentRun, which is an async-iterable over the nodes of the agent's graph
    with agent.iter('What is the capital of France?') as agent_run:
        async for node in agent_run:
            # Each node represents a step in the agent's execution
            nodes.append(node)
    print(nodes)
    """
    [
        ModelRequestNode(
            request=ModelRequest(
                parts=[
                    UserPromptPart(
                        content='What is the capital of France?',
                        timestamp=datetime.datetime(...),
                        part_kind='user-prompt',
                    )
                ],
                kind='request',
            )
        ),
        HandleResponseNode(
            model_response=ModelResponse(
                parts=[TextPart(content='Paris', part_kind='text')],
                model_name='function:model_logic',
                timestamp=datetime.datetime(...),
                kind='response',
            )
        ),
        End(data=FinalResult(data='Paris', tool_name=None)),
    ]
    """
    print(agent_run.final_result.data)
    #> Paris
```

- The `AgentRun` is an async iterator that yields each node (`BaseNode` or `End`) in the flow.
- The run ends when an `End` node is returned.

#### Using `.next(...)` manually

You can also drive the iteration manually by passing the node you want to run next to the `AgentRun.next(...)` method. This allows you to inspect or modify a node before it executes, or to skip nodes based on your own logic:

```python
from pydantic_ai import Agent
from pydantic_graph import End

agent = Agent('openai:gpt-4o')


async def main():
    with agent.iter('What is the capital of France?') as agent_run:
        # You can get the first node by calling __anext__ once
        node = await agent_run.__anext__()

        # Keep track of nodes here
        all_nodes = [node]

        # Drive the iteration manually
        while not isinstance(node, End):
            # You could inspect or mutate the node here as needed
            node = await agent_run.next(node)
            all_nodes.append(node)

        print(all_nodes)
        """
        [
            ModelRequestNode(
                request=ModelRequest(
                    parts=[
                        UserPromptPart(
                            content='What is the capital of France?',
                            timestamp=datetime.datetime(...),
                            part_kind='user-prompt',
                        )
                    ],
                    kind='request',
                )
            ),
            HandleResponseNode(
                model_response=ModelResponse(
                    parts=[TextPart(content='Paris', part_kind='text')],
                    model_name='function:model_logic',
                    timestamp=datetime.datetime(...),
                    kind='response',
                )
            ),
            End(data=FinalResult(data='Paris', tool_name=None)),
        ]
        """
```

- When you call `await agent_run.next(node)`, it executes that node in the agent's graph, updates the run's history, and returns the *next* node to run.
- The agent run is finished once an `End` node has been produced; instances of `End` cannot be passed to `next`.
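The manual-driving loop described above can be sketched, independently of pydantic-ai, with a toy stand-in run object. All classes and names below (`ToyRun`, the string "nodes") are hypothetical illustrations of the pattern, not pydantic-ai APIs:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class End:
    """Toy terminal node, standing in for pydantic_graph's End."""
    data: str


class ToyRun:
    """Minimal stand-in for AgentRun: next(node) runs a node, records it, returns the next one."""

    def __init__(self, steps):
        self._steps = steps  # remaining node "names" to visit
        self.history = []

    async def __anext__(self):
        return await self.next(None)

    async def next(self, node):
        if isinstance(node, End):
            # Mirrors the rule above: End cannot be passed back into next()
            raise RuntimeError('cannot pass End to next()')
        self.history.append(node)
        if not self._steps:
            return End(data='done')
        return self._steps.pop(0)


async def main():
    run = ToyRun(['request', 'response'])
    node = await run.__anext__()
    all_nodes = [node]
    while not isinstance(node, End):
        node = await run.next(node)
        all_nodes.append(node)
    return all_nodes


nodes = asyncio.run(main())
print(nodes)  # → ['request', 'response', End(data='done')]
```

The shape of the loop is identical to the real example: call `__anext__` once to get the first node, then feed each node back into `next(...)` until an `End` appears.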

#### Accessing usage and the final result

You can retrieve usage statistics (tokens, requests, etc.) at any time from the [`AgentRun`][pydantic_ai.agent.AgentRun] object via `agent_run.usage()`. This method returns a [`Usage`][pydantic_ai.usage.Usage] object containing the usage data.

Once the run finishes, `agent_run.final_result` becomes an [`AgentRunResult`][pydantic_ai.agent.AgentRunResult] object containing the final output (and related metadata).

---

### Additional Configuration

@@ -177,7 +297,7 @@ except UsageLimitExceeded as e:
2. This run will error after 3 requests, preventing the infinite tool calling.

!!! note
-    This is especially relevant if you're registered a lot of tools, `request_limit` can be used to prevent the model from choosing to make too many of these calls.
+    This is especially relevant if you've registered many tools. The `request_limit` can be used to prevent the model from calling them in a loop too many times.

#### Model (Run) Settings

2 changes: 2 additions & 0 deletions docs/api/agent.md
@@ -4,6 +4,8 @@
    options:
      members:
        - Agent
+       - AgentRun
+       - AgentRunResult
        - EndStrategy
        - RunResultData
        - capture_run_messages
4 changes: 3 additions & 1 deletion docs/api/result.md
@@ -2,4 +2,6 @@

::: pydantic_ai.result
    options:
-     inherited_members: true
+     inherited_members: true
+     members:
+       - StreamedRunResult