Commit: documenting dependencies

samuelcolvin committed Nov 14, 2024
1 parent b958639 commit a8371b9
Showing 5 changed files with 224 additions and 31 deletions.
221 changes: 193 additions & 28 deletions docs/concepts/dependencies.md

# Dependencies

PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](system-prompt.md), [retrievers](retrievers.md) and [result validators](result-validation.md#TODO).

Dependencies provide a scalable, flexible, type safe and testable way to build agents.

Matching PydanticAI's design philosophy, our dependency system tries to use existing best practices in Python development rather than inventing esoteric "magic". This should make dependencies type-safe, understandable, easier to test and ultimately easier to deploy in production.

## Defining Dependencies

Dependencies can be any Python type. While in simple cases you might be able to pass a single object
as a dependency (e.g. an HTTP connection), [dataclasses][] are generally a convenient container when your dependencies include multiple objects.
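As a framework-free sketch of why a dataclass makes a good dependency container (the `Deps` and `build_greeting` names here are illustrative, not part of PydanticAI):

```python
from dataclasses import dataclass


@dataclass
class Deps:
    """One typed object bundling everything the function needs."""

    api_key: str
    base_url: str


def build_greeting(deps: Deps) -> str:
    # The function depends only on the container, not on globals,
    # which makes it easy to construct alternative deps in tests.
    return f'Calling {deps.base_url} with key {deps.api_key!r}'


deps = Deps(api_key='secret', base_url='https://example.com')
print(build_greeting(deps))
```

Because the container is a plain dataclass, swapping in a different `Deps` instance (e.g. pointing at a test server) requires no framework support at all.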


Here's an example of defining an agent that requires dependencies.

(**Note:** dependencies aren't actually used in this example, see [Accessing Dependencies](#accessing-dependencies) below)
agent = Agent(
)


async def main():
    async with httpx.AsyncClient() as client:
        deps = MyDeps('foobar', client)
        result = await agent.run(
            'Tell me a joke.',
            deps=deps,  # (3)!
        )
        print(result.data)
```

1. Define a dataclass to hold dependencies.
_(This example is complete, it can be run "as is" inside an async context)_
## Accessing Dependencies

Dependencies are accessed through the [`CallContext`][pydantic_ai.dependencies.CallContext] type; this should be the first parameter of system prompt functions etc.


```python title="system_prompt_dependencies.py" hl_lines="20-27"
from dataclasses import dataclass

import httpx
from pydantic_ai import Agent, CallContext


@dataclass
class MyDeps:
    api_key: str
    http_client: httpx.AsyncClient


agent = Agent(
    'openai:gpt-4o',
    deps_type=MyDeps,
)


@agent.system_prompt
async def get_system_prompt(ctx: CallContext[MyDeps]) -> str:  # (2)!
    response = await ctx.deps.http_client.get(
        'https://example.com',
        headers={'Authorization': f'Bearer {ctx.deps.api_key}'}
    )
    response.raise_for_status()
    return f'Prompt: {response.text}'


async def main():
    async with httpx.AsyncClient() as client:
        deps = MyDeps('foobar', client)
        result = await agent.run('Tell me a joke.', deps=deps)
        print(result.data)
```

1. [`CallContext`][pydantic_ai.dependencies.CallContext] may optionally be passed to a [`system_prompt`][pydantic_ai.Agent.system_prompt] function as the only argument.

_(This example is complete, it can be run "as is" inside an async context)_

### Asynchronous vs. Synchronous dependencies

System prompt functions, retriever functions and result validator functions are all run in the async context of an agent run.

If these functions are not coroutines (i.e. not defined with `async def`), they are called with
[`run_in_executor`][asyncio.loop.run_in_executor] in a thread pool. It's therefore marginally preferable
to use `async` methods where dependencies perform IO, although synchronous dependencies should work fine too.
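A minimal sketch of what this means in plain `asyncio`, assuming nothing about PydanticAI's internals beyond its use of [`run_in_executor`][asyncio.loop.run_in_executor] (the function names are illustrative):

```python
import asyncio
import time


def slow_sync_dependency() -> str:
    # Stands in for a blocking call, e.g. a synchronous HTTP request.
    time.sleep(0.01)
    return 'prompt'


async def main() -> str:
    loop = asyncio.get_running_loop()
    # Running the blocking function in the default thread pool keeps the
    # event loop free; this is the treatment non-coroutine system prompt,
    # retriever and result validator functions receive.
    return await loop.run_in_executor(None, slow_sync_dependency)


print(asyncio.run(main()))
```

The thread-pool hop is why synchronous dependencies still work, and also why coroutines avoid a small amount of overhead.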

!!! note "`run` vs. `run_sync` and Asynchronous vs. Synchronous dependencies"
    Whether you use synchronous or asynchronous dependencies is completely independent of whether you use `run` or `run_sync`: `run_sync` is just a wrapper around `run`, and agents are always run in an async context.

Here's the same example as above, but with a synchronous dependency:

```python title="sync_dependencies.py"
from dataclasses import dataclass

import httpx

from pydantic_ai import Agent, CallContext


@dataclass
class MyDeps:
    api_key: str
    http_client: httpx.Client  # (1)!


agent = Agent(
    'openai:gpt-4o',
    deps_type=MyDeps,
)


@agent.system_prompt
def get_system_prompt(ctx: CallContext[MyDeps]) -> str:  # (2)!
    response = ctx.deps.http_client.get(
        'https://example.com',
        headers={'Authorization': f'Bearer {ctx.deps.api_key}'}
    )
    response.raise_for_status()
    return f'Prompt: {response.text}'


async def main():
    deps = MyDeps('foobar', httpx.Client())
    result = await agent.run(
        'Tell me a joke.',
        deps=deps,
    )
    print(result.data)
```

1. Here we use a synchronous `httpx.Client` instead of an asynchronous `httpx.AsyncClient`.
2. To match the synchronous dependency, the system prompt function is now a plain function, not a coroutine.

_(This example is complete, it can be run "as is")_

## Full Example

As well as system prompts, dependencies can be used in [retrievers](retrievers.md) and [result validators](result-validation.md#TODO).

```python title="full_example.py" hl_lines="27-35 38-48"
from dataclasses import dataclass

import httpx
async def validate_result(ctx: CallContext[MyDeps], final_response: str) -> str:
    return final_response


async def main():
    async with httpx.AsyncClient() as client:
        deps = MyDeps('foobar', client)
        result = await agent.run('Tell me a joke.', deps=deps)
        print(result.data)
```

1. To pass `CallContext` to a retriever, use the [`retriever_context`][pydantic_ai.Agent.retriever_context] decorator.
2. `CallContext` may optionally be passed to a [`result_validator`][pydantic_ai.Agent.result_validator] function as the first argument.

## Overriding Dependencies

When testing agents, it's useful to be able to customise dependencies.

While this can sometimes be done by calling the agent directly within unit tests, we can also override dependencies
while calling application code that in turn calls the agent.

This is done via the [`override_deps`][pydantic_ai.Agent.override_deps] method on the agent.

```py title="joke_app.py"
from dataclasses import dataclass

import httpx

from pydantic_ai import Agent, CallContext


@dataclass
class MyDeps:
    api_key: str
    http_client: httpx.AsyncClient

    async def system_prompt_factory(self) -> str:  # (1)!
        response = await self.http_client.get('https://example.com')
        response.raise_for_status()
        return f'Prompt: {response.text}'


joke_agent = Agent('openai:gpt-4o', deps_type=MyDeps)


@joke_agent.system_prompt
async def get_system_prompt(ctx: CallContext[MyDeps]) -> str:
    return await ctx.deps.system_prompt_factory()  # (2)!


async def application_code(prompt: str) -> str:  # (3)!
    ...
    ...
    # now deep within application code we call our agent
    async with httpx.AsyncClient() as client:
        app_deps = MyDeps('foobar', client)
        result = await joke_agent.run(prompt, deps=app_deps)  # (4)!
    return result.data

```

1. Define a method on the dependency to make the system prompt easier to customise.
2. Call the system prompt factory from within the system prompt function.
3. Application code that calls the agent; in a real application this might be an API endpoint.
4. Call the agent from within the application code; in a real application this call might be deep within a call stack. Note `app_deps` here will NOT be used when deps are overridden.

```py title="test_joke_app.py" hl_lines="10-12"
from joke_app import application_code, joke_agent, MyDeps


class TestMyDeps(MyDeps):  # (1)!
    async def system_prompt_factory(self) -> str:
        return 'test prompt'


async def test_application_code():
    test_deps = TestMyDeps('test_key', None)  # (2)!
    with joke_agent.override_deps(test_deps):  # (3)!
        joke = await application_code('Tell me a joke.')  # (4)!
    assert joke == 'funny'
```

1. Define a subclass of `MyDeps` in tests to customise the system prompt factory.
2. Create an instance of the test dependency; we don't need to pass an `http_client` here as it's not used.
3. Override the dependencies of the agent for the duration of the `with` block; `test_deps` will be used when the agent is run.
4. Now we can safely call our application code; the agent will use the overridden dependencies.
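The mechanics of this kind of override can be sketched without PydanticAI at all. Here `FakeAgent` is a hypothetical stand-in, not the library's `Agent`; it only demonstrates the "context manager installs an override, `run` prefers it" pattern:

```python
from contextlib import contextmanager
from dataclasses import dataclass
from typing import Iterator, Optional


@dataclass
class Deps:
    api_key: str

    def system_prompt_factory(self) -> str:  # synchronous for brevity
        return f'real prompt for {self.api_key}'


class FakeAgent:
    """Hypothetical stand-in showing the override mechanics only."""

    def __init__(self) -> None:
        self._override: Optional[Deps] = None

    @contextmanager
    def override_deps(self, deps: Deps) -> Iterator[None]:
        # Install the override, restoring the previous value on exit.
        previous, self._override = self._override, deps
        try:
            yield
        finally:
            self._override = previous

    def run(self, prompt: str, deps: Deps) -> str:
        # An active override wins over whatever the caller passed in.
        used = self._override if self._override is not None else deps
        return used.system_prompt_factory()


class TestDeps(Deps):
    def system_prompt_factory(self) -> str:
        return 'test prompt'


agent = FakeAgent()
with agent.override_deps(TestDeps('test_key')):
    print(agent.run('Tell me a joke.', deps=Deps('real_key')))
```

Restoring the previous value in `finally` means overrides nest correctly and never leak between tests.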

## Agents as dependencies of other Agents

Since dependencies can be any Python type, and agents are just Python objects, agents can be dependencies of other agents.

```py title="agents_as_dependencies.py"
from dataclasses import dataclass

from pydantic_ai import Agent, CallContext


@dataclass
class MyDeps:
    factory_agent: Agent[None, list[str]]


joke_agent = Agent(
    'openai:gpt-4o',
    deps_type=MyDeps,
    system_prompt=(
        'Use the "joke_factory" to generate some jokes, then choose the best. '
        'You must return just a single joke.'
    ),
)

factory_agent = Agent('gemini-1.5-pro', result_type=list[str])


@joke_agent.retriever_context
async def joke_factory(ctx: CallContext[MyDeps], count: int) -> str:
    r = await ctx.deps.factory_agent.run(f'Please generate {count} jokes.')
    return '\n'.join(r.data)


result = joke_agent.run_sync('Tell me a joke.', deps=MyDeps(factory_agent))
print(result.data)
```

## Examples

The following examples demonstrate how to use dependencies in PydanticAI:

- [Weather Agent](../examples/weather-agent.md)
- [SQL Generation](../examples/sql-gen.md)
- [RAG](../examples/rag.md)
28 changes: 28 additions & 0 deletions docs/concepts/message-history.md

# Messages and chat history

PydanticAI provides access to messages exchanged during an agent run. These messages can be used both to continue a coherent conversation, and to understand how an agent performed.
print(result2.all_messages())
```
_(This example is complete, it can be run "as is")_

## Other ways of using messages

Since messages are defined by simple dataclasses, you can manually create and manipulate them, e.g. for testing.

The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.
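PydanticAI's actual message classes are richer than this, but the underlying idea can be sketched with plain dataclasses (the `Message` class here is illustrative, not the library's):

```python
from dataclasses import dataclass


@dataclass
class Message:
    role: str  # e.g. 'system', 'user' or 'assistant'
    content: str


history = [
    Message('user', 'Tell me a joke.'),
    Message('assistant', 'Why did the chicken cross the road?'),
]

# Because messages are plain data, fabricating or editing a turn for a
# test is trivial, and nothing about them is tied to a specific model.
history.append(Message('user', 'Explain?'))
print([m.role for m in history])
```

Plain data also makes it safe to feed one agent's messages to another agent, or to the same agent running a different model.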

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
print(result1.data)

result2 = agent.run_sync('Explain?', model='gemini-1.5-pro', message_history=result1.new_messages())
print(result2.data)

print(result2.all_messages())
```

## Last Run Messages

TODO: document [`last_run_messages`][pydantic_ai.Agent.last_run_messages].

## Examples

For a more complete example of using messages in conversations, see the [chat app](../examples/chat-app.md) example.
2 changes: 1 addition & 1 deletion docs/examples/rag.md
RAG search example. This demo allows you to ask questions of the [logfire](https:
Demonstrates:

* retrievers
* [agent dependencies](../concepts/dependencies.md)
* RAG search

This is done by creating a database containing each section of the markdown documentation, then registering
2 changes: 1 addition & 1 deletion docs/examples/sql-gen.md
Demonstrates:
* custom `result_type`
* dynamic system prompt
* result validation
* [agent dependencies](../concepts/dependencies.md)

## Running the Example

2 changes: 1 addition & 1 deletion docs/examples/weather-agent.md
Demonstrates:

* retrievers
* multiple retrievers
* [agent dependencies](../concepts/dependencies.md)

In this case the idea is a "weather" agent — the user can ask for the weather in multiple locations,
the agent will use the `get_lat_lng` tool to get the latitude and longitude of the locations, then use
