
Commit

New intro (#64)
samuelcolvin authored Nov 19, 2024
1 parent cd47b0f commit be83898
Showing 13 changed files with 303 additions and 268 deletions.
23 changes: 23 additions & 0 deletions docs/examples/bank-support.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,23 @@
Small but complete example of using PydanticAI to build a support agent for a bank.

Demonstrates:

* [dynamic system prompt](../agents.md#system-prompts)
* [structured `result_type`](../results.md#structured-result-validation)
* [retrievers](../agents.md#retrievers)

## Running the Example

With [dependencies installed and environment variables set](./index.md#usage), run:

```bash
python/uv-run -m pydantic_ai_examples.bank_support
```

(or `PYDANTIC_AI_MODEL=gemini-1.5-flash ...`)

## Example Code

```py title="bank_support.py"
#! pydantic_ai_examples/bank_support.py
```
4 changes: 2 additions & 2 deletions docs/examples/chat-app.md
@@ -3,8 +3,8 @@ Simple chat app example built with FastAPI.
Demonstrates:

* [reusing chat history](../message-history.md)
* [serializing messages](../message-history.md#accessing-messages-from-results)
* [streaming responses](../results.md#streamed-results)

This demonstrates storing chat history between requests and using it to give the model context for new responses.
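
The flow described here can be sketched with a stdlib-only stand-in (the `ChatStore` class below is hypothetical and not part of the example app, which uses PydanticAI's own message serialization):

```python
from dataclasses import dataclass, field


@dataclass
class ChatStore:
    """Toy in-memory store for chat history between requests."""

    messages: list[dict] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({'role': role, 'content': content})

    def context_for_next_turn(self) -> list[dict]:
        # Replay prior messages so the model sees the conversation so far.
        return list(self.messages)


store = ChatStore()
store.add('user', 'What is the capital of France?')
store.add('assistant', 'Paris.')
store.add('user', 'And how large is it?')  # only answerable with history
print(len(store.context_for_next_turn()))
```

The third user message relies on the first two for its meaning, which is why the stored history must be replayed to the model.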

12 changes: 11 additions & 1 deletion docs/examples/index.md
@@ -61,7 +61,17 @@ For example, to run the very simple [`pydantic_model`](./pydantic-model.md) example:
python/uv-run -m pydantic_ai_examples.pydantic_model
```

If you like one-liners and you're using uv, you can run a pydantic-ai example with zero setup:

```bash
OPENAI_API_KEY='your-api-key' \
uv run --with 'pydantic-ai[examples]' \
-m pydantic_ai_examples.pydantic_model
```

---

You'll probably want to edit examples in addition to just running them. You can copy the examples to a new directory with:

```bash
python/uv-run -m pydantic_ai_examples --copy-to examples/
2 changes: 1 addition & 1 deletion docs/examples/pydantic-model.md
@@ -2,7 +2,7 @@ Simple example of using Pydantic AI to construct a Pydantic model from a text input.

Demonstrates:

* [structured `result_type`](../results.md#structured-result-validation)

## Running the Example

2 changes: 1 addition & 1 deletion docs/examples/rag.md
@@ -4,7 +4,7 @@ RAG search example. This demo allows you to ask questions of the [logfire](https:

Demonstrates:

* [retrievers](../agents.md#retrievers)
* [agent dependencies](../dependencies.md)
* RAG search

6 changes: 3 additions & 3 deletions docs/examples/sql-gen.md
@@ -4,9 +4,9 @@ Example demonstrating how to use Pydantic AI to generate SQL queries based on user input.

Demonstrates:

* [dynamic system prompt](../agents.md#system-prompts)
* [structured `result_type`](../results.md#structured-result-validation)
* [result validation](../results.md#result-validators-functions)
* [agent dependencies](../dependencies.md)

## Running the Example
3 changes: 1 addition & 2 deletions docs/examples/weather-agent.md
@@ -2,8 +2,7 @@ Example of Pydantic AI with multiple tools which the LLM needs to call in turn to answer a question.

Demonstrates:

* [retrievers](../agents.md#retrievers)
* [agent dependencies](../dependencies.md)

In this case the idea is a "weather" agent — the user can ask for the weather in multiple locations,
165 changes: 111 additions & 54 deletions docs/index.md
@@ -1,78 +1,135 @@
# Introduction {.hide}

--8<-- "docs/.partials/index-header.html"

When I first found FastAPI, I got it immediately. I was excited to find something so genuinely innovative and yet ergonomic, built on Pydantic.

Virtually every Agent Framework and LLM library in Python uses Pydantic, but when we came to use Gen AI in [Pydantic Logfire](https://pydantic.dev/logfire), I couldn't find anything that gave me the same feeling.

PydanticAI is a Python Agent Framework designed to make it less painful to build production grade applications with Generative AI.

## Why use PydanticAI

* Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, Langchain, LlamaIndex, AutoGPT, Transformers, Instructor and many more)
* Multi-model: OpenAI and Gemini are currently supported, Anthropic is [coming soon](https://github.com/pydantic/pydantic-ai/issues/63), and there's a simple interface to implement other models or adapt existing ones
* Type-safe
* Built on tried and tested best practices in Python
* Structured response validation with Pydantic
* Streamed responses, including validation of streamed structured responses with Pydantic
* Novel, type-safe dependency injection system
* Logfire integration
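
The structured-validation bullets above lean on plain Pydantic; here is a standalone sketch of the idea, with a hypothetical `CityInfo` model standing in for an agent's `result_type`:

```python
from pydantic import BaseModel, Field, ValidationError


class CityInfo(BaseModel):
    city: str
    population: int = Field(gt=0)


# A well-formed response parses into a typed object...
ok = CityInfo.model_validate_json('{"city": "London", "population": 8866000}')
print(ok.population)

# ...while a malformed one raises an error that can be fed back to the model
# so it can retry with a corrected response.
try:
    CityInfo.model_validate_json('{"city": "London", "population": -1}')
except ValidationError as exc:
    print(exc.error_count())
```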

!!! example "In Beta"
    PydanticAI is in early beta, the API is subject to change and there's a lot more to do.
    [Feedback](https://github.com/pydantic/pydantic-ai/issues) is very welcome!

## Example — Hello World

Here's a very minimal example of PydanticAI.

```py title="hello_world.py"
from pydantic_ai import Agent

agent = Agent('gemini-1.5-flash', system_prompt='Be concise, reply with one sentence.')

result = agent.run_sync('Where does "hello world" come from?')
print(result.data)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```
_(This example is complete, it can be run "as is")_

Not very interesting yet, but we can easily add retrievers, dynamic system prompts and structured responses to build more powerful agents.

## Example — Retrievers and Dependency Injection

Small but complete example of using PydanticAI to build a support agent for a bank:

```py title="bank_support.py"
from dataclasses import dataclass

from pydantic import BaseModel, Field

from pydantic_ai import Agent, CallContext

from bank_database import DatabaseConn


@dataclass
class SupportDependencies:  # (3)!
    customer_id: int
    db: DatabaseConn


class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block their card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)


support_agent = Agent(  # (1)!
    'openai:gpt-4o',  # (2)!
    deps_type=SupportDependencies,
    result_type=SupportResult,  # (9)!
    system_prompt=(  # (4)!
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query. '
        "Reply using the customer's name."
    ),
)


@support_agent.system_prompt  # (5)!
async def add_customer_name(ctx: CallContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


@support_agent.retriever_context  # (6)!
async def customer_balance(
    ctx: CallContext[SupportDependencies], include_pending: bool
) -> str:
    """Returns the customer's current account balance."""  # (7)!
    balance = await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
    return f'${balance:.2f}'


...  # (11)!


deps = SupportDependencies(customer_id=123, db=DatabaseConn())
result = support_agent.run_sync('What is my balance?', deps=deps)  # (8)!
print(result.data)  # (10)!
"""
support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
"""

result = support_agent.run_sync('I just lost my card!', deps=deps)
print(result.data)
"""
support_advice="I'm sorry to hear that, John. We are temporarily blocking your card to prevent unauthorized transactions." block_card=True risk=8
"""
```

1. An [agent](agents.md) that acts as first-tier support in a bank. Agents are generic in the type of dependencies they take and the type of result they return, in this case `SupportDependencies` and `SupportResult`.
2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md); you can also customise the model when running the agent.
3. The `SupportDependencies` dataclass is used to pass data and connections into the model that will be needed when running [system prompts](agents.md#system-prompts) and [retrievers](agents.md#retrievers). PydanticAI's system of dependency injection provides a powerful, type safe way to customise the behaviour of your agents, including for unit tests and evals.
4. Static [system prompts](agents.md#system-prompts) can be registered as keyword arguments to the agent.
5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the `@agent.system_prompt` decorator and benefit from dependency injection.
6. [Retrievers](agents.md#retrievers) let you register "tools" which the LLM may call while responding to a user. You inject dependencies into the retriever with [`CallContext`][pydantic_ai.dependencies.CallContext]; any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
7. The docstring is also passed to the LLM as a description of the tool.
8. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM until a final response is reached.
9. The response from the agent is guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) means the agent is prompted to try again.
10. The result is validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic, it will also be typed as a `SupportResult` to aid static type checking.
11. In a real use case, you'd add many more retrievers to the agent to extend the context it's equipped with and the support it can provide.
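
As a rough, stdlib-only illustration of how a retriever's signature and docstring can become a tool schema (the `tool_schema` helper is hypothetical, not PydanticAI's internals, which use Pydantic itself to build and validate the schema):

```python
import inspect


def customer_balance(include_pending: bool) -> str:
    """Returns the customer's current account balance."""
    return '$123.45'


def tool_schema(fn) -> dict:
    # Parameter names and annotations become the schema;
    # the docstring becomes the tool description.
    sig = inspect.signature(fn)
    return {
        'name': fn.__name__,
        'description': inspect.getdoc(fn),
        'parameters': {
            name: getattr(p.annotation, '__name__', str(p.annotation))
            for name, p in sig.parameters.items()
        },
    }


schema = tool_schema(customer_balance)
print(schema['name'], schema['parameters'])
```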

!!! tip "Complete `bank_support.py` example"
    This example is incomplete for the sake of brevity (the definition of `DatabaseConn` is missing); you can find a complete `bank_support.py` example [here](examples/bank-support.md).
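
The dependency-injection pattern from the example can be exercised without any LLM or real database; here `FakeDB` is a hypothetical stand-in for the missing `DatabaseConn`:

```python
import asyncio
from dataclasses import dataclass


class FakeDB:
    """Stand-in database connection, useful for unit tests and evals."""

    async def customer_name(self, *, id: int) -> str:
        return 'John'


@dataclass
class SupportDependencies:
    customer_id: int
    db: FakeDB


async def add_customer_name(deps: SupportDependencies) -> str:
    # Same shape as the dynamic system prompt, minus the CallContext wrapper.
    name = await deps.db.customer_name(id=deps.customer_id)
    return f"The customer's name is {name!r}"


deps = SupportDependencies(customer_id=123, db=FakeDB())
print(asyncio.run(add_customer_name(deps)))  # The customer's name is 'John'
```

Because the dependencies are just a typed object, swapping the real `DatabaseConn` for a fake is a one-line change at the call site.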

## Next Steps

To try PydanticAI yourself, follow the instructions [in the examples](examples/index.md).
4 changes: 3 additions & 1 deletion docs/install.md
@@ -36,4 +36,6 @@ To use Logfire with PydanticAI, install PydanticAI with the `logfire` optional group:

From there, follow the [Logfire documentation](https://logfire.pydantic.dev/docs/) to configure Logfire.

## Next Steps

To run PydanticAI, follow the instructions [in the examples](examples/index.md).
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -23,6 +23,7 @@ nav:
- examples/index.md
- examples/pydantic-model.md
- examples/weather-agent.md
- examples/bank-support.md
- examples/sql-gen.md
- examples/rag.md
- examples/stream-markdown.md
