Commit: mkdocs

ProKil committed Nov 5, 2024 · 1 parent ece445f · commit 106d459
Showing 13 changed files with 1,034 additions and 0 deletions.
Empty file added docs/api/cli.md
Empty file.
3 changes: 3 additions & 0 deletions docs/api/messages.md
@@ -0,0 +1,3 @@
::: aact.Message

::: aact.messages.DataModel
11 changes: 11 additions & 0 deletions docs/api/nodes.md
@@ -0,0 +1,11 @@
AAct nodes are simply classes that inherit from `Node` and implement different ways of handling and sending messages.

::: aact.Node
options:
show_root_heading: true
merge_init_into_class: false
group_by_category: false

::: aact.NodeFactory
options:
show_root_heading: true
2 changes: 2 additions & 0 deletions docs/assets/aact.svg
2 changes: 2 additions & 0 deletions docs/assets/favicon.svg
19 changes: 19 additions & 0 deletions docs/index.md
@@ -0,0 +1,19 @@
# What is AAct?

AAct is designed to connect sensors, neural networks, agents, users, and environments, and to let them communicate with one another.


<details>
<summary>Can you expand on that?</summary>

AAct is a Python library for building asynchronous, actor-based, concurrent systems.
Specifically, it is designed for building systems whose components communicate with
each other without blocking one another.
</details>

## How does AAct work?

AAct is built around the concept of nodes and dataflow, where nodes are self-contained units
which receive messages from input channels, process the messages, and send messages to output channels.
Nodes are connected to each other to form a dataflow graph, where messages flow from one node to another.
Each node runs in its own event loop, and the nodes communicate with each other using Redis Pub/Sub.
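
The node/channel model above can be illustrated with a minimal in-process sketch. This is not aact's actual API: the node names are hypothetical, and an `asyncio.Queue` stands in for a Redis Pub/Sub channel.

```python
import asyncio


async def tick_node(channel: asyncio.Queue) -> None:
    # Producer node: emits a message on its output channel, then signals completion.
    for i in range(3):
        await channel.put(f"tick {i}")
    await channel.put("STOP")


async def print_node(channel: asyncio.Queue, received: list) -> None:
    # Consumer node: handles each message arriving on its input channel.
    while (msg := await channel.get()) != "STOP":
        received.append(msg)


async def main() -> list:
    channel: asyncio.Queue = asyncio.Queue()
    received: list = []
    # Both nodes run concurrently; in aact each node runs in its own
    # event loop and the channel is a Redis Pub/Sub channel instead.
    await asyncio.gather(tick_node(channel), print_node(channel, received))
    return received


print(asyncio.run(main()))  # ['tick 0', 'tick 1', 'tick 2']
```

The key property carried over from aact is that neither coroutine blocks the other: the producer and consumer make progress independently and only synchronize on the channel.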
66 changes: 66 additions & 0 deletions docs/install.md
@@ -0,0 +1,66 @@
# Quickstart

## Installation

System requirements:

1. Python 3.10 or higher
2. Redis server

<details>

<summary>Redis installation</summary>

The easiest way to install Redis is to use Docker:
```bash
docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
```
Depending on your system, you can also install Redis from the official website: https://redis.io/download

Note: this library only requires a standard Redis server (RedisJSON / RediSearch are not needed).

</details>

Install aact from PyPI:

```bash
pip install aact
```

<details>
<summary>Install from source</summary>

```bash
git clone https://github.com/ProKil/aact.git
cd aact
pip install .
```

Power users may prefer `uv` for package management.
</details>


## Quick Start Example

Assuming your Redis server is running on `localhost:6379` (e.g., via the Docker command above), you can create a `dataflow.toml` file:

```toml
redis_url = "redis://localhost:6379/0" # required

[[nodes]]
node_name = "print"
node_class = "print"

[nodes.node_args.print_channel_types]
"tick/secs/1" = "tick"

[[nodes]]
node_name = "tick"
node_class = "tick"
```

To run the dataflow:
```bash
aact run-dataflow dataflow.toml
```

This will start the `tick` node and the `print` node. The `tick` node sends a message every second to the `print` node, which prints the message to the console.
106 changes: 106 additions & 0 deletions docs/plugins/griffe_doclinks.py
@@ -0,0 +1,106 @@
from __future__ import annotations

import ast
import re
from functools import partial
from pathlib import Path
from typing import Any

from griffe import Extension, Inspector, ObjectNode, Visitor, get_logger
from griffe import Object as GriffeObject
from pymdownx.slugs import slugify

DOCS_PATH = Path(__file__).parent.parent
slugifier = slugify(case="lower")
logger = get_logger("griffe_doclinks")


def find_heading(content: str, slug: str, file_path: Path) -> tuple[str, int]:
for m in re.finditer("^#+ (.+)", content, flags=re.M):
heading = m.group(1)
h_slug = slugifier(heading, "-")
if h_slug == slug:
return heading, m.end()
raise ValueError(f"heading with slug {slug!r} not found in {file_path}")


def insert_at_top(path: str, api_link: str) -> str:
rel_file = path.rstrip("/") + ".md"
file_path = DOCS_PATH / rel_file
content = file_path.read_text()
second_heading = re.search("^#+ ", content, flags=re.M)
assert second_heading, "unable to find second heading in file"
first_section = content[: second_heading.start()]

if f"[{api_link}]" not in first_section:
logger.debug(
'inserting API link "%s" at the top of %s',
api_link,
file_path.relative_to(DOCS_PATH),
)
file_path.write_text(
'??? api "API Documentation"\n'
f" [`{api_link}`][{api_link}]<br>\n\n"
f"{content}"
)

heading = file_path.stem.replace("_", " ").title()
return f'!!! abstract "Usage Documentation"\n [{heading}](../{rel_file})\n'


def replace_links(m: re.Match[str], *, api_link: str) -> str:
path_group = m.group(1)
if "#" not in path_group:
# no heading id, put the content at the top of the page
return insert_at_top(path_group, api_link)

usage_path, slug = path_group.split("#", 1)
rel_file = usage_path.rstrip("/") + ".md"
file_path = DOCS_PATH / rel_file
content = file_path.read_text()
heading, heading_end = find_heading(content, slug, file_path)

next_heading = re.search("^#+ ", content[heading_end:], flags=re.M)
if next_heading:
next_section = content[heading_end : heading_end + next_heading.start()]
else:
next_section = content[heading_end:]

if f"[{api_link}]" not in next_section:
logger.debug(
'inserting API link "%s" into %s',
api_link,
file_path.relative_to(DOCS_PATH),
)
file_path.write_text(
f"{content[:heading_end]}\n\n"
'??? api "API Documentation"\n'
f" [`{api_link}`][{api_link}]<br>"
f"{content[heading_end:]}"
)

return (
f'!!! abstract "Usage Documentation"\n [{heading}](../{rel_file}#{slug})\n'
)


def update_docstring(obj: GriffeObject) -> str:
return re.sub(
r"usage[\- ]docs: ?https://docs\.pydantic\.dev/.+?/(\S+)",
partial(replace_links, api_link=obj.path),
obj.docstring.value,
flags=re.I,
)


class UpdateDocstringsExtension(Extension):
def on_instance(
self,
*,
node: ast.AST | ObjectNode,
obj: GriffeObject,
agent: Visitor | Inspector,
**kwargs: Any,
) -> None:
if not obj.is_alias and obj.docstring is not None:
obj.docstring.value = update_docstring(obj)
95 changes: 95 additions & 0 deletions docs/usage.md
@@ -0,0 +1,95 @@
## Usage

### CLI

You can start from the CLI and progress to more advanced usage.

1. `aact --help` to see all commands
2. `aact run-dataflow <dataflow_name.toml>` to run a dataflow. Check [Dataflow.toml syntax](#dataflowtoml-syntax)
3. `aact run-node` to run one node in a dataflow.
4. `aact draw-dataflow <dataflow_name_1.toml> <dataflow_name_2.toml> --svg-path <output.svg>` to draw dataflow graphs.


### Customized Node

Here is the minimal knowledge you would need to implement a customized node.

```python
from typing import AsyncIterator

from aact import Node, NodeFactory, Message

@NodeFactory.register("node_name")
class YourNode(Node[your_input_type, your_output_type]):

    # event_handler is the only function you **have** to implement
    async def event_handler(
        self, input_channel: str, input_message: Message[your_input_type]
    ) -> AsyncIterator[tuple[str, Message[your_output_type]]]:
        match input_channel:
            case input_channel_1:
                <do_your_stuff>
                yield output_channel_1, Message[your_output_type](data=your_output_message)
            case input_channel_2:
                ...

    # other functions you can override: __init__, _wait_for_input, event_loop, __aenter__, __aexit__

# To run a node without the CLI:
async with NodeFactory.make("node_name", arg_1, arg_2) as node:
    await node.event_loop()
```

## Concepts

There are three important concepts for understanding aact.

```mermaid
graph TD
n1[Node 1] -->|channel_1| n2[Node 2]
```

### Nodes

Nodes (`aact.Node`) are designed to run in parallel asynchronously. This design is especially useful for deploying the nodes onto different machines.
A node should inherit `aact.Node` class, which extends `pydantic.BaseModel`.

### Channels

Channels are a concept inherited from Redis Pub/Sub. You can think of a channel as a radio channel:
Multiple publishers (nodes) can publish messages to the same channel, and multiple subscribers (nodes) can subscribe to the same channel.
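
As a rough analogy (plain Python, not aact or Redis), a channel fans each published message out to every subscriber; the channel name and handlers below are hypothetical.

```python
from collections import defaultdict
from typing import Callable

# Hypothetical in-memory pub/sub registry illustrating channel semantics.
subscribers: dict = defaultdict(list)


def subscribe(channel: str, handler: Callable) -> None:
    subscribers[channel].append(handler)


def publish(channel: str, message: str) -> None:
    # Every subscriber on the channel receives the message,
    # like listeners tuned to the same radio frequency.
    for handler in subscribers[channel]:
        handler(message)


inbox_a: list = []
inbox_b: list = []
subscribe("tick/secs/1", inbox_a.append)
subscribe("tick/secs/1", inbox_b.append)
publish("tick/secs/1", "tick 0")
print(inbox_a, inbox_b)  # ['tick 0'] ['tick 0']
```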

### Messages

Messages are the data sent through the channels. Each message type is of the form `Message[T]`, where `T` is a subclass or a union of subclasses of `DataModel`.
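
To illustrate the `Message[T]` pattern, here is a sketch using plain dataclasses. These are stand-ins, not aact's actual `Message` and `DataModel` classes (which build on pydantic).

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")


@dataclass
class DataModel:
    """Stand-in for aact's DataModel base class."""


@dataclass
class Tick(DataModel):
    tick: int


@dataclass
class Message(Generic[T]):
    # The payload; in aact, T is constrained to DataModel subclasses.
    data: T


msg = Message[Tick](data=Tick(tick=1))
print(msg.data.tick)  # 1
```

Parameterizing the generic (`Message[Tick]`) is what lets nodes declare, in their type signatures, exactly which payload types flow over each channel.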

#### Customized Message Type

If you want to create a new message type, you can create a new class that inherits from `DataModel`.
```python
# Import paths are assumptions; adjust to your aact version.
from pydantic import Field

from aact.messages import DataModel, DataModelFactory

@DataModelFactory.register("new_type")
class NewType(DataModel):
    new_type_data: ... = ...


# For example:
@DataModelFactory.register("integer")
class Integer(DataModel):
    integer_data: int = Field(default=0)
```

## Dataflow.toml syntax

```toml
redis_url = "redis://..." # required
extra_modules = ["package1.module1", "package2.module2"] # optional

[[nodes]]
node_name = "node_name_1" # A unique name in the dataflow
node_class = "node_class_1" # node_class should match the class name passed into NodeFactory.register

[nodes.node_args]
node_arg_1 = "value_1"

[[nodes]]
node_name = "node_name_2"
node_class = "node_class_2"

# ...
```
8 changes: 8 additions & 0 deletions docs/why.md
@@ -0,0 +1,8 @@
## Why should I use AAct?

1. Non-blocking: the nodes are relatively independent of each other, so if you are waiting for users' input,
you can still process sensor data in the background.
2. Scalable: you can run a large number of nodes on one machine or distribute them across multiple machines.
3. Hackable: you can easily design your own nodes and connect them to the existing nodes.
4. Zero-code configuration: the `dataflow.toml` allows you to design the dataflow graph without writing any
Python code.