Commit

update documentation for 0.3.0 release

krohling committed Dec 17, 2023
1 parent 3d5e8ca commit 21354b9
Showing 39 changed files with 752 additions and 391 deletions.
18 changes: 12 additions & 6 deletions README.md
@@ -9,7 +9,7 @@
<a href="https://pypi.org/project/bondai/"><img src="https://img.shields.io/pypi/v/bondai" alt="PyPI"></a>
<a href="https://hub.docker.com/r/krohling/bondai"><img src="https://img.shields.io/docker/v/krohling/bondai?logo=docker" alt="Docker"></a>
</p>
<p align="center"><em>Meet BondAI, an open source, AI-powered assistant with a lightweight, versatile API for seamless integration into your own applications.</em></p>
<p align="center"><em>Build highly capable Single and Multi-Agent Systems.</em></p>

# <a href="https://bondai.dev">BondAI Homepage</a>

@@ -45,11 +45,18 @@ export OPENAI_API_KEY=sk-XXXXXXXXXX
Once the environment variable has been set, you can run `bondai` to start the CLI.

```bash
% bondai
Loading BondAI...
Skipping Gmail tools because gmail-token.pickle file is not present.
******************ENTERING CHAT******************
You are entering a chat with BondAI...
You can exit any time by typing 'exit'.

Hello! How can I assist you today?
Hello! I'm BondAI, your friendly and helpful assistant. I'm here to assist you with any tasks or questions you might have. How can I assist you today?

I want you to write a story about unicorns and save it to a file named unicorns.md.
Using tool file_write: Writing a story about unicorns and saving it to a file named unicorns.md
Using tool final_answer...

A story about unicorns has been successfully written and saved to a file named unicorns.md. The story is set in an enchanted forest and describes the magical and majestic nature of unicorns, their daily routines, and their harmonious relationship with other creatures in the forest.
```


@@ -74,7 +81,7 @@ docker run -it --rm \
BondAI has a straightforward API for creating powerful AI Agents. Check out our [examples](https://bondai.dev/docs/category/examples) for ideas on how to get started. Remember to set your *OPENAI_API_KEY* environment variable before running your BondAI Agent.

```python
from bondai import Agent
from bondai.agents import Agent
from bondai.tools.search import DuckDuckGoSearchTool
from bondai.tools.website import WebsiteQueryTool
from bondai.tools.file import FileWriteTool
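# A hedged sketch of how this truncated README example presumably continues,
# following the Agent(tools=[...]).run(task) pattern in the commented-out block
# of tests/getting-started/example-1.py later in this commit; the task text is
# an illustrative placeholder, not part of the original README.
task = "Research a topic of your choice and write a summary to a file named summary.md."

Agent(tools=[
    DuckDuckGoSearchTool(),
    WebsiteQueryTool(),
    FileWriteTool(),
]).run(task)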
@@ -107,4 +114,3 @@ BondAI comes out of the box with a powerful set of integrations.
| <img src="assets/logos/postgres-logo.jpeg" alt="postgres logo" width="75"/> | **PostgreSQL** | BondAI can automatically extract the schema from a Postgres DB and process natural language queries. |
| <img src="assets/logos/blandai-logo.jpeg" alt="bland.ai logo" width="50"/> | **Bland AI** | Allows BondAI to make phone calls and process/retrieve call transcripts. [Requires a Bland.ai account.](https://www.bland.ai/) |
| <img src="assets/logos/gmail-logo.png" alt="gmail logo" width="50"/> | **Gmail** | Allows BondAI to search and read emails. |
| <img src="assets/logos/langchain-logo.jpeg" alt="langchain logo" width="50"/> | **LangChain** | Use BondAI's LangChainTool class to import any tool from LangChain into BondAI. |
Binary file modified assets/bondai-logo-square.png
Binary file modified assets/bondai-logo.png
1 change: 1 addition & 0 deletions bondai/agents/conversational_agent.py
@@ -114,6 +114,7 @@ def __init__(
enable_conversational_content_responses
)
self._enable_conversation_tools = enable_conversation_tools
print("***", self._enable_conversation_tools)
if self._enable_conversation_tools:
self.add_tool(SendMessageTool())
if self._enable_exit_conversation:
11 changes: 3 additions & 8 deletions bondai/api/routes.py
@@ -27,7 +27,6 @@ def send_message(agent_id):

data = request.get_json()
message = data.get("message", None)
require_response = data.get("require_response", None)
if not message:
return "message is required.", 400

@@ -51,12 +50,8 @@ def get_agent(agent_id):
abort(404)
return jsonify(agent.to_dict())

@server.app.route("/agents/<agent_id>/tool_options", methods=["GET"])
def get_agent_tool_options(agent_id):
agent = server.get_agent_by_id(agent_id)
if not agent:
abort(404)

@server.app.route("/tools", methods=["GET"])
def get_tool_options():
data = [t.get_tool_function() for t in tool_options]
return jsonify(data)

@@ -92,7 +87,7 @@ def add_agent_tool(agent_id):
return jsonify({"status": "success"})

@server.app.route("/agents/<agent_id>/tools/<tool_name>", methods=["DELETE"])
def remove_agent_tools(agent_id, tool_name):
def remove_agent_tool(agent_id, tool_name):
agent = server.get_agent_by_id(agent_id)
if not agent:
abort(404)
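The route changes above move tool listing from the per-agent `/agents/<agent_id>/tool_options` endpoint to a global `/tools` endpoint and rename the DELETE handler to `remove_agent_tool`. A minimal client sketch, assuming a locally running BondAI API server; the base URL, agent id, and tool name below are illustrative placeholders rather than values taken from this commit:

```python
import requests

BASE_URL = "http://localhost:5000"  # assumed address; adjust for your BondAI API server

# List the globally available tool options (previously exposed per agent at /tool_options).
tools = requests.get(f"{BASE_URL}/tools").json()
print(tools)

# Detach a tool from a specific agent by name (handler renamed to remove_agent_tool).
requests.delete(f"{BASE_URL}/agents/my-agent-id/tools/some_tool_name")
```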
9 changes: 7 additions & 2 deletions bondai/cli/cli.py
@@ -88,7 +88,9 @@ def build_agents(llm: LLM) -> GroupConversation:
max_tool_retries=5,
memory_manager=MemoryManager(
core_memory_datasource=InMemoryCoreMemoryDataSource(
sections={"task": "No information is stored about the current task."},
sections={
"task": "No information has been stored about the current task."
},
max_section_size=10000,
)
),
@@ -101,10 +103,13 @@
persona_summary=user_liaison_profile.PERSONA_SUMMARY,
instructions=user_liaison_profile.INSTRUCTIONS,
tools=[AgentTool(task_execution_agent)],
enable_conversation_tools=False,
enable_conversational_content_responses=True,
allow_exit=False,
memory_manager=MemoryManager(
core_memory_datasource=PersistentCoreMemoryDataSource(
file_path="./.memory/user_liason_core_memory.json",
sections={"user": "No information is stored about the user."},
sections={"user": "No information has been stored about the user."},
)
),
)
200 changes: 119 additions & 81 deletions bondai/models/openai/default_openai_connection_params.py
@@ -7,87 +7,6 @@
dalle_connection_params = None
embeddings_connection_params = None

if os.environ.get(OPENAI_CONNECTION_TYPE_ENV_VAR) == "azure":
try:
gpt_4_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=os.environ.get(AZURE_OPENAI_GPT4_API_KEY_ENV_VAR),
api_version=os.environ.get(
AZURE_OPENAI_GPT4_API_VERSION_ENV_VAR, "2023-07-01-preview"
),
azure_endpoint=os.environ.get(AZURE_OPENAI_GPT4_API_BASE_ENV_VAR),
azure_deployment=os.environ.get(AZURE_OPENAI_GPT4_DEPLOYMENT_ENV_VAR),
)
except ValueError:
pass

try:
gpt_35_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=os.environ.get(AZURE_OPENAI_GPT35_API_KEY_ENV_VAR),
api_version=os.environ.get(
AZURE_OPENAI_GPT35_API_VERSION_ENV_VAR, "2023-07-01-preview"
),
azure_endpoint=os.environ.get(AZURE_OPENAI_GPT35_API_BASE_ENV_VAR),
azure_deployment=os.environ.get(AZURE_OPENAI_GPT35_DEPLOYMENT_ENV_VAR),
)
except ValueError:
pass

try:
dalle_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=os.environ.get(AZURE_OPENAI_DALLE_API_KEY_ENV_VAR),
api_version=os.environ.get(AZURE_OPENAI_DALLE_API_VERSION_ENV_VAR),
azure_endpoint=os.environ.get(AZURE_OPENAI_DALLE_API_BASE_ENV_VAR),
azure_deployment=os.environ.get(AZURE_OPENAI_DALLE_DEPLOYMENT_ENV_VAR),
)
except ValueError:
pass

try:
embeddings_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=os.environ.get(AZURE_OPENAI_EMBEDDINGS_API_KEY_ENV_VAR),
api_version=os.environ.get(AZURE_OPENAI_EMBEDDINGS_API_VERSION_ENV_VAR),
azure_endpoint=os.environ.get(AZURE_OPENAI_EMBEDDINGS_API_BASE_ENV_VAR),
azure_deployment=os.environ.get(AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_ENV_VAR),
)
except ValueError:
pass
else:
try:
gpt_4_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.OPENAI,
api_key=os.environ.get(OPENAI_API_KEY_ENV_VAR),
)
except ValueError:
pass

try:
gpt_35_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.OPENAI,
api_key=os.environ.get(OPENAI_API_KEY_ENV_VAR),
)
except ValueError:
pass

try:
dalle_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.OPENAI,
api_key=os.environ.get(OPENAI_API_KEY_ENV_VAR),
)
except ValueError:
pass

try:
embeddings_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.OPENAI,
api_key=os.environ.get(OPENAI_API_KEY_ENV_VAR),
)
except ValueError:
pass


def configure_openai_connection(api_key: str):
global gpt_4_connection_params
@@ -126,3 +45,122 @@ def configure_openai_connection(api_key: str):
connection_type=OpenAIConnectionType.OPENAI,
api_key=api_key,
)


def configure_azure_connection(
gpt_4_api_key: str | None = None,
gpt_4_api_version: str | None = None,
gpt_4_azure_endpoint: str | None = None,
gpt_4_azure_deployment: str | None = None,
gpt_35_api_key: str | None = None,
gpt_35_api_version: str | None = None,
gpt_35_azure_endpoint: str | None = None,
gpt_35_azure_deployment: str | None = None,
dalle_api_key: str | None = None,
dalle_api_version: str | None = None,
dalle_azure_endpoint: str | None = None,
dalle_azure_deployment: str | None = None,
embeddings_api_key: str | None = None,
embeddings_api_version: str | None = None,
embeddings_azure_endpoint: str | None = None,
embeddings_azure_deployment: str | None = None,
):
global gpt_4_connection_params
global gpt_35_connection_params
global dalle_connection_params
global embeddings_connection_params

if (
gpt_4_api_key
and gpt_4_api_version
and gpt_4_azure_endpoint
and gpt_4_azure_deployment
):
gpt_4_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=gpt_4_api_key,
api_version=gpt_4_api_version,
azure_endpoint=gpt_4_azure_endpoint,
azure_deployment=gpt_4_azure_deployment,
)

if (
gpt_35_api_key
and gpt_35_api_version
and gpt_35_azure_endpoint
and gpt_35_azure_deployment
):
gpt_35_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=gpt_35_api_key,
api_version=gpt_35_api_version,
azure_endpoint=gpt_35_azure_endpoint,
azure_deployment=gpt_35_azure_deployment,
)

if (
dalle_api_key
and dalle_api_version
and dalle_azure_endpoint
and dalle_azure_deployment
):
dalle_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=dalle_api_key,
api_version=dalle_api_version,
azure_endpoint=dalle_azure_endpoint,
azure_deployment=dalle_azure_deployment,
)

if (
embeddings_api_key
and embeddings_api_version
and embeddings_azure_endpoint
and embeddings_azure_deployment
):
embeddings_connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key=embeddings_api_key,
api_version=embeddings_api_version,
azure_endpoint=embeddings_azure_endpoint,
azure_deployment=embeddings_azure_deployment,
)


if os.environ.get(OPENAI_CONNECTION_TYPE_ENV_VAR) == "azure":
try:
configure_azure_connection(
gpt_4_api_key=os.environ.get(AZURE_OPENAI_GPT4_API_KEY_ENV_VAR),
gpt_4_api_version=os.environ.get(AZURE_OPENAI_GPT4_API_VERSION_ENV_VAR),
gpt_4_azure_endpoint=os.environ.get(AZURE_OPENAI_GPT4_API_BASE_ENV_VAR),
gpt_4_azure_deployment=os.environ.get(AZURE_OPENAI_GPT4_DEPLOYMENT_ENV_VAR),
gpt_35_api_key=os.environ.get(AZURE_OPENAI_GPT35_API_KEY_ENV_VAR),
gpt_35_api_version=os.environ.get(AZURE_OPENAI_GPT35_API_VERSION_ENV_VAR),
gpt_35_azure_endpoint=os.environ.get(AZURE_OPENAI_GPT35_API_BASE_ENV_VAR),
gpt_35_azure_deployment=os.environ.get(
AZURE_OPENAI_GPT35_DEPLOYMENT_ENV_VAR
),
dalle_api_key=os.environ.get(AZURE_OPENAI_DALLE_API_KEY_ENV_VAR),
dalle_api_version=os.environ.get(AZURE_OPENAI_DALLE_API_VERSION_ENV_VAR),
dalle_azure_endpoint=os.environ.get(AZURE_OPENAI_DALLE_API_BASE_ENV_VAR),
dalle_azure_deployment=os.environ.get(
AZURE_OPENAI_DALLE_DEPLOYMENT_ENV_VAR
),
embeddings_api_key=os.environ.get(AZURE_OPENAI_EMBEDDINGS_API_KEY_ENV_VAR),
embeddings_api_version=os.environ.get(
AZURE_OPENAI_EMBEDDINGS_API_VERSION_ENV_VAR
),
embeddings_azure_endpoint=os.environ.get(
AZURE_OPENAI_EMBEDDINGS_API_BASE_ENV_VAR
),
embeddings_azure_deployment=os.environ.get(
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_ENV_VAR
),
)
except ValueError:
pass
else:
try:
configure_openai_connection(os.environ.get(OPENAI_API_KEY_ENV_VAR))
except ValueError:
pass
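The refactor above replaces the inline environment-variable parsing with two programmatic entry points, `configure_openai_connection` and the new `configure_azure_connection`, with the env-var handling now funneled through them at import time. A minimal usage sketch, assuming these helpers are exposed through `DefaultOpenAIConnectionParams` as in the commented-out block of tests/getting-started/example-1.py below; the key, endpoint, and deployment values are placeholders:

```python
from bondai.models.openai import DefaultOpenAIConnectionParams

# Standard OpenAI: a single API key configures all four connection params.
DefaultOpenAIConnectionParams.configure_openai_connection(api_key="<OPENAI-API-KEY>")

# Azure OpenAI: each model family takes its own key/version/endpoint/deployment,
# and a connection is only configured when all four values for that family are provided.
DefaultOpenAIConnectionParams.configure_azure_connection(
    gpt_4_api_key="<AZURE-API-KEY>",
    gpt_4_api_version="2023-07-01-preview",
    gpt_4_azure_endpoint="https://<resource-name>.openai.azure.com/",
    gpt_4_azure_deployment="<gpt-4-deployment-name>",
)
```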
2 changes: 1 addition & 1 deletion docker/Dockerfile
@@ -6,6 +6,6 @@ RUN apt-get update && apt-get install -y \
python3 \
python3-pip

RUN pip3 install --no-cache bondai==0.3.0b17
RUN pip3 install --no-cache bondai==0.3.0b18

CMD ["bondai"]
@@ -13,15 +13,15 @@
def handle_streaming_content_updated(agent_id, content_buffer):
if agent_id != agent["id"]:
return
print(content_buffer)
# print(content_buffer)


@client.on("streaming_function_updated")
def handle_streaming_function_updated(agent_id, function_name, arguments_buffer):
if agent_id != agent["id"]:
return
print(function_name)
print(arguments_buffer)
# print(function_name)
# print(arguments_buffer)


@client.on("agent_message")
42 changes: 42 additions & 0 deletions tests/getting-started/example-1.py
@@ -0,0 +1,42 @@
# from bondai.agents import Agent
# from bondai.models.openai import DefaultOpenAIConnectionParams
# from bondai.tools.search import DuckDuckGoSearchTool
# from bondai.tools.website import WebsiteQueryTool
# from bondai.tools.file import FileWriteTool

# DefaultOpenAIConnectionParams.configure_openai_connection(api_key="<OPENAI-API-KEY>")


# task = """I want you to research the usage of Metformin as a drug to treat aging and aging related illness.
# You should only use reputable information sources, ideally peer reviewed scientific studies.
# I want you to summarize your findings in a document named metformin.md and includes links to reference and resources you used to find the information.
# Additionally, the last section of your document you should provide a recommendation for a 43 year old male, in good health and who regularly exercises as to whether he would benefit from taking Metformin.
# You should explain your recommendation and justify it with sources.
# Finally, you should highlight potential risks and tradeoffs from taking the medication."""

# Agent(tools=[
# DuckDuckGoSearchTool(),
# WebsiteQueryTool(),
# FileWriteTool()
# ]).run(task)

from bondai.agents import Agent
from bondai.models.openai import (
OpenAILLM,
OpenAIConnectionParams,
OpenAIConnectionType,
OpenAIModelNames,
)


connection_params = OpenAIConnectionParams(
connection_type=OpenAIConnectionType.AZURE,
api_key="",
api_version="",
azure_endpoint="",
azure_deployment="",
)

llm = OpenAILLM(model=OpenAIModelNames.GPT4_32K, connection_params=connection_params)

agent = Agent(llm=llm)
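As committed, the example stops after constructing the agent. A hedged sketch of how it would presumably be exercised, following the `.run(task)` pattern from the commented-out block at the top of this file; the task text is an illustrative placeholder and is kept tool-free since this agent is created without tools:

```python
task = "Briefly introduce yourself and describe the kinds of tasks you can help with."
agent.run(task)
```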
