Commit

move api docs into main documentation
samuelcolvin committed Nov 16, 2024
1 parent 7ba9af8 commit a7f2791
Showing 22 changed files with 105 additions and 93 deletions.
40 changes: 28 additions & 12 deletions docs/concepts/agents.md → docs/agents.md
@@ -1,17 +1,17 @@
## Introduction

Agents are PydanticAI's primary interface for interacting with models.
Agents are PydanticAI's primary interface for interacting with LLMs.

In some use cases a single Agent will control an entire application or component,
but agents can also interact to embody more complex workflows.
but multiple agents can also interact to embody more complex workflows.

The [`Agent`][pydantic_ai.Agent] class is well documented, but in essence you can think of an agent as a container for:

* A [system prompt](#system-prompts) — a set of instructions for the LLM written by the developer
* One or more [retrievers](#retrievers) — functions that the LLM may call to get information while generating a response
* An optional structured [result type](results.md) — the structured datatype the LLM must return at the end of a run
* A [dependency](dependencies.md) type constraint — system prompt functions, retrievers and result validators may all use dependencies when they're run
* Agents may optionally also have a default [model](#TODO) associated with them, the model to use can also be defined when running the agent
* Agents may optionally also have a default [model](models/index.md) associated with them, the model to use can also be defined when running the agent

In typing terms, agents are generic in their dependency and result types, e.g. an agent which required `#!python Foobar` dependencies and returned results of type `#!python list[str]` would have type `#!python Agent[Foobar, list[str]]`.
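For example, a minimal sketch of how those generic parameters line up (`Foobar` is just an illustrative dependency type, and the model name is a placeholder):

```python
from dataclasses import dataclass

from pydantic_ai import Agent


@dataclass
class Foobar:
    """Illustrative dependency type."""

    magic_number: int


# typed as Agent[Foobar, list[str]]: Foobar dependencies, list[str] results
agent = Agent('openai:gpt-4o', deps_type=Foobar, result_type=list[str])
```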

@@ -51,7 +51,7 @@ print(result.data)
1. Create an agent which expects an integer dependency and returns a boolean result; this agent will have type `#!python Agent[int, bool]`.
2. Define a retriever that checks if the square is a winner. Here [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
3. In reality, you might want to use a random number here, e.g. `random.randint(0, 36)`.
4. `result.data` will be a boolean indicating if the square is a winner, Pydantic performs the result validation
4. `result.data` will be a boolean indicating if the square is a winner; Pydantic performs the result validation. It'll be typed as a `bool` since its type is derived from the `result_type` generic parameter of the agent.
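The annotated example those notes refer to is collapsed above; here's a minimal sketch of its shape, assuming a retriever registered with `retriever_context` (the model name and prompts are placeholders):

```python
from pydantic_ai import Agent, CallContext

# Agent[int, bool]: integer dependency (the winning square), boolean result
roulette_agent = Agent(
    'openai:gpt-4o',
    deps_type=int,
    result_type=bool,
    system_prompt=(
        'Use the `roulette_wheel` function to see if the '
        'customer has won based on the number they provide.'
    ),
)


@roulette_agent.retriever_context
async def roulette_wheel(ctx: CallContext[int], square: int) -> str:
    """Check if the square is a winner."""
    return 'winner' if square == ctx.deps else 'loser'


result = roulette_agent.run_sync('Put my money on square eighteen', deps=18)
print(result.data)
#> True
```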

!!! tip "Agents are Singletons, like FastAPI"
    Agents are singleton instances; you can think of them as similar to a small [`FastAPI`][fastapi.FastAPI] app or an [`APIRouter`][fastapi.APIRouter].
@@ -60,9 +60,9 @@ print(result.data)

There are three ways to run an agent:

1. [`#!python agent.run()`][pydantic_ai.Agent.run] — a coroutine which returns a result containing a completed response
2. [`#!python agent.run_sync()`][pydantic_ai.Agent.run_sync] — a plain function which returns a result containing a completed response (internally, this just calls `#!python asyncio.run(self.run())`)
3. [`#!python agent.run_stream()`][pydantic_ai.Agent.run_stream] — a coroutine which returns a result containing methods to stream a response as an async iterable
1. [`#!python agent.run()`][pydantic_ai.Agent.run] — a coroutine which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response
2. [`#!python agent.run_sync()`][pydantic_ai.Agent.run_sync] — a plain, synchronous function which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response (internally, this just calls `#!python asyncio.run(self.run())`)
3. [`#!python agent.run_stream()`][pydantic_ai.Agent.run_stream] — a coroutine which returns a [`StreamedRunResult`][pydantic_ai.result.StreamedRunResult] with methods to stream the response as an async iterable

Here's a simple example demonstrating all three:
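That example is collapsed in this diff; here's a minimal sketch of what it covers, assuming `run_stream` is used as an async context manager exposing a `get_data()` method (the model name is a placeholder):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

# 1. synchronous run
result_sync = agent.run_sync('What is the capital of Italy?')
print(result_sync.data)
#> Rome


async def main():
    # 2. asynchronous run
    result = await agent.run('What is the capital of France?')
    print(result.data)
    #> Paris

    # 3. streamed run
    async with agent.run_stream('What is the capital of the UK?') as response:
        print(await response.get_data())
        #> London
```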

@@ -127,7 +127,7 @@ You can add both to a single agent; they're concatenated in the order they're defined.

Here's an example using both types of system prompts:

```python title="system_prompt_example.py"
```python title="system_prompts.py"
from datetime import date

from pydantic_ai import Agent, CallContext
@@ -153,6 +153,7 @@ result = agent.run_sync('What is the date?', deps='Frank')
print(result.data)
#> Hello Frank, the date today is 2032-01-02.
```

1. The agent expects a string dependency.
2. Static system prompt defined at agent creation time.
3. Dynamic system prompt defined via a decorator.
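The middle of that example is collapsed above; a minimal sketch of the pattern the notes describe (the model name is a placeholder):

```python
from datetime import date

from pydantic_ai import Agent, CallContext

agent = Agent(
    'openai:gpt-4o',
    deps_type=str,  # (1) the agent expects a string dependency
    system_prompt="Use the customer's name while replying to them.",  # (2) static
)


@agent.system_prompt  # (3) dynamic: evaluated when the agent runs
def add_the_users_name(ctx: CallContext[str]) -> str:
    return f"The user's name is {ctx.deps!r}."


@agent.system_prompt
def add_the_date() -> str:
    return f'The date is {date.today()}.'
```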
@@ -181,7 +182,22 @@ print(result.data)
* show an example of a `UnexpectedModelBehaviour` being raised
* if a `UnexpectedModelBehaviour` is raised, you may want to access the [`.last_run_messages`][pydantic_ai.Agent.last_run_messages] attribute of an agent to see the messages exchanged that led to the error; show an example of accessing `.last_run_messages` in an except block to get more details (see the sketch below)

instructions:
* all code examples should be complete
* keep your tone fairly informal like the rest of the documentation
* be concise, avoid abstract verbose explanations
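A minimal sketch of that error handling, assuming `UnexpectedModelBehaviour` is importable from `pydantic_ai.exceptions` (the model name and prompt are placeholders):

```python
from pydantic_ai import Agent
from pydantic_ai.exceptions import UnexpectedModelBehaviour

agent = Agent('openai:gpt-4o', result_type=int)

try:
    result = agent.run_sync('Reply with prose only, never with a number.')
except UnexpectedModelBehaviour as e:
    print(e)
    # inspect the messages exchanged that led to the error
    print(agent.last_run_messages)
else:
    print(result.data)
```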
## API Reference

::: pydantic_ai.Agent
options:
members:
- __init__
- run
- run_sync
- run_stream
- model
- override_deps
- override_model
- last_run_messages
- system_prompt
- retriever_plain
- retriever_context
- result_validator

::: pydantic_ai.exceptions
17 changes: 0 additions & 17 deletions docs/api/agent.md

This file was deleted.

3 changes: 0 additions & 3 deletions docs/api/dependencies.md

This file was deleted.

3 changes: 0 additions & 3 deletions docs/api/exceptions.md

This file was deleted.

17 changes: 0 additions & 17 deletions docs/api/messages.md

This file was deleted.

Empty file removed docs/concepts/results.md
Empty file.
Empty file removed docs/concepts/testing-evals.md
Empty file.
14 changes: 9 additions & 5 deletions docs/concepts/dependencies.md → docs/dependencies.md
@@ -1,6 +1,6 @@
# Dependencies

PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [retrievers](agents.md#retrievers) and [result validators](results.md#TODO).
PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [retrievers](agents.md#retrievers) and [result validators](results.md#result-validators).

Matching PydanticAI's design philosophy, our dependency system tries to use existing best practice in Python development rather than inventing esoteric "magic"; this should make dependencies type-safe, understandable, easier to test, and ultimately easier to deploy in production.
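A minimal sketch of the pattern, with `MyDeps` as a hypothetical dependency type (the model name is a placeholder):

```python
from dataclasses import dataclass

from pydantic_ai import Agent, CallContext


@dataclass
class MyDeps:
    """Hypothetical dependencies for the agent."""

    customer_name: str


agent = Agent('openai:gpt-4o', deps_type=MyDeps)


@agent.system_prompt
async def personalize(ctx: CallContext[MyDeps]) -> str:
    # dependencies are available on the call context, fully typed
    return f'The customer is called {ctx.deps.customer_name!r}.'


result = agent.run_sync('Greet the customer', deps=MyDeps(customer_name='Frank'))
print(result.data)
```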

@@ -159,7 +159,7 @@ _(This example is complete, it can be run "as is")_

## Full Example

As well as system prompts, dependencies can be used in [retrievers](agents.md#retrievers) and [result validators](results.md#TODO).
As well as system prompts, dependencies can be used in [retrievers](agents.md#retrievers) and [result validators](results.md#result-validators).

```python title="full_example.py" hl_lines="27-35 38-48"
from dataclasses import dataclass
@@ -339,6 +339,10 @@ print(result.data)

The following examples demonstrate how to use dependencies in PydanticAI:

- [Weather Agent](../examples/weather-agent.md)
- [SQL Generation](../examples/sql-gen.md)
- [RAG](../examples/rag.md)
- [Weather Agent](examples/weather-agent.md)
- [SQL Generation](examples/sql-gen.md)
- [RAG](examples/rag.md)

## API Reference

::: pydantic_ai.dependencies
2 changes: 1 addition & 1 deletion docs/examples/chat-app.md
@@ -2,7 +2,7 @@ Simple chat app example built with FastAPI.

Demonstrates:

* [reusing chat history](../concepts/message-history.md)
* [reusing chat history](../message-history.md)
* serializing messages
* streaming responses

2 changes: 1 addition & 1 deletion docs/examples/rag.md
@@ -5,7 +5,7 @@ RAG search example. This demo allows you to ask questions of the [logfire](https:
Demonstrates:

* retrievers
* [agent dependencies](../concepts/dependencies.md)
* [agent dependencies](../dependencies.md)
* RAG search

This is done by creating a database containing each section of the markdown documentation, then registering
2 changes: 1 addition & 1 deletion docs/examples/sql-gen.md
@@ -7,7 +7,7 @@ Demonstrates:
* custom `result_type`
* dynamic system prompt
* result validation
* [agent dependencies](../concepts/dependencies.md)
* [agent dependencies](../dependencies.md)

## Running the Example

2 changes: 1 addition & 1 deletion docs/examples/weather-agent.md
@@ -4,7 +4,7 @@ Demonstrates:

* retrievers
* multiple retrievers
* [agent dependencies](../concepts/dependencies.md)
* [agent dependencies](../dependencies.md)

In this case the idea is a "weather" agent — the user can ask for the weather in multiple locations,
the agent will use the `get_lat_lng` tool to get the latitude and longitude of the locations, then use
2 changes: 1 addition & 1 deletion docs/index.md
@@ -90,7 +90,7 @@ async def main():
7. Multiple retrievers can be registered with the same agent, the LLM can choose which (if any) retrievers to call in order to respond to a user.
8. Run the agent asynchronously, conducting a conversation with the LLM until a final response is reached. You can also run agents synchronously with `run_sync`. Internally agents are all async, so `run_sync` is a helper using `asyncio.run` to call `run()`.
9. The response from the LLM, in this case a `str`. Agents are generic in both the type of `deps` and `result_type`, so calls are typed end-to-end.
10. [`result.all_messages()`](concepts/message-history.md) includes details of messages exchanged, this is useful both to understand the conversation that took place and useful if you want to continue the conversation later — messages can be passed back to later `run/run_sync` calls.
10. [`result.all_messages()`](message-history.md) includes details of messages exchanged; this is useful both to understand the conversation that took place and to continue the conversation later — messages can be passed back to later `run`/`run_sync` calls.

!!! tip "Complete `weather_agent.py` example"
This example is incomplete for the sake of brevity; you can find a complete `weather_agent.py` example [here](examples/weather-agent.md).
26 changes: 19 additions & 7 deletions docs/concepts/message-history.md → docs/message-history.md
@@ -1,13 +1,7 @@
from pydantic_ai_examples.pydantic_model import model

# Messages and chat history

PydanticAI provides access to messages exchanged during an agent run. These messages can be used both to continue a coherent conversation, and to understand how an agent performed.

## Messages types

[API documentation for `messages`][pydantic_ai.messages] contains details of the message types and their meaning.

### Accessing Messages from Results

After running an agent, you can access the messages exchanged during that run from the `result` object.
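A minimal sketch, assuming the run result exposes `all_messages()` and `new_messages()` (the model name is a placeholder):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

result = agent.run_sync('Tell me a joke.')
# every message from the run, including the system prompt and user prompt
print(result.all_messages())
# only the messages generated during this run
print(result.new_messages())
```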
@@ -271,4 +265,22 @@

## Examples

For a more complete example of using messages in conversations, see the [chat app](../examples/chat-app.md) example.
For a more complete example of using messages in conversations, see the [chat app](examples/chat-app.md) example.

## API Reference

::: pydantic_ai.messages
options:
members:
- Message
- SystemPrompt
- UserPrompt
- ToolReturn
- RetryPrompt
- ModelAnyResponse
- ModelTextResponse
- ModelStructuredResponse
- ToolCall
- ArgsJson
- ArgsObject
- MessagesTypeAdapter
2 changes: 1 addition & 1 deletion docs/api/models/function.md → docs/models/function.md
@@ -1,3 +1,3 @@
# `pydantic_ai.models.function`
# FunctionModel

::: pydantic_ai.models.function
2 changes: 1 addition & 1 deletion docs/api/models/gemini.md → docs/models/gemini.md
@@ -1,3 +1,3 @@
# `pydantic_ai.models.gemini`
# Gemini

::: pydantic_ai.models.gemini
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/api/models/openai.md → docs/models/openai.md
@@ -1,3 +1,3 @@
# `pydantic_ai.models.openai`
# OpenAI

::: pydantic_ai.models.openai
2 changes: 1 addition & 1 deletion docs/api/models/test.md → docs/models/test.md
@@ -1,3 +1,3 @@
# `pydantic_ai.models.test`
# TestModel

::: pydantic_ai.models.test
18 changes: 17 additions & 1 deletion docs/api/result.md → docs/results.md
@@ -1,4 +1,20 @@
# `pydantic_ai.result`
## Ending runs

TODO

## Result Validators

TODO
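A minimal sketch of a result validator, assuming the validator receives the run context plus the candidate result and that `ModelRetry` is importable from `pydantic_ai` (the model name and check are placeholders):

```python
from pydantic_ai import Agent, CallContext, ModelRetry

agent = Agent('openai:gpt-4o', result_type=str)


@agent.result_validator
def validate_result(ctx: CallContext[None], result: str) -> str:
    # hypothetical check: ask the model to retry if the response is empty
    if not result.strip():
        raise ModelRetry('Response must not be empty, please try again.')
    return result
```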

## Streamed Results

TODO

## Cost

TODO

## API Reference

::: pydantic_ai.result
options:
8 changes: 8 additions & 0 deletions docs/testing-evals.md
@@ -0,0 +1,8 @@
# Testing and Evals

TODO

principles:

* unit tests are no different from any other app: just use `TestModel` or `FunctionModel`. We know how to do unit tests; there's no magic, just good practice (see the sketch below)
* evals are more like benchmarks: they never "pass", although they do "fail". You care mostly about how they change over time. We (and, we think, most other people) don't really know what a "good" eval is; we provide some useful tools, and we'll improve this if/when a common best practice emerges, or we think we have something interesting to say
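A minimal sketch of that unit-testing principle, assuming the model can be swapped at run time as described in agents.md (the agent and assertion are illustrative):

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent('openai:gpt-4o', result_type=str)


def test_agent_runs_without_an_llm():
    # TestModel stands in for a real LLM: no API calls, no cost
    result = agent.run_sync('What is the capital of France?', model=TestModel())
    assert isinstance(result.data, str)
```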
34 changes: 15 additions & 19 deletions mkdocs.yml
@@ -13,12 +13,18 @@ nav:
- Introduction:
- Introduction: index.md
- install.md
- Concepts:
- concepts/agents.md
- concepts/dependencies.md
- concepts/results.md
- concepts/message-history.md
- concepts/testing-evals.md
- Documentation:
- agents.md
- dependencies.md
- results.md
- message-history.md
- testing-evals.md
- Models:
- models/index.md
- models/openai.md
- models/gemini.md
- models/test.md
- models/function.md
- Examples:
- examples/index.md
- examples/pydantic-model.md
@@ -28,17 +34,6 @@
- examples/stream-markdown.md
- examples/stream-whales.md
- examples/chat-app.md
- API Reference:
- api/agent.md
- api/result.md
- api/messages.md
- api/dependencies.md
- api/exceptions.md
- api/models/base.md
- api/models/openai.md
- api/models/gemini.md
- api/models/test.md
- api/models/function.md

extra:
# hide the "Made with Material for MkDocs" message
@@ -76,7 +71,7 @@
- content.code.copy
- content.code.select
- navigation.path
- navigation.expand
# - navigation.expand
- navigation.indexes
- navigation.sections
- navigation.tracking
@@ -145,7 +140,8 @@ plugins:
show_signature_annotations: true
signature_crossrefs: true
group_by_category: false
heading_level: 2
# 3 because docs are in pages with an H2 just above them
heading_level: 3
import:
- url: https://docs.python.org/3/objects.inv
- url: https://docs.pydantic.dev/latest/objects.inv
