Releases: av/harbor

v0.1.23

15 Sep 14:42
@av av

v0.1.23 - harbor history

Harbor remembers the most recently executed CLI commands. You can search and re-run them via the harbor history command.

This complements the native history in your shell: it persists longer and is specific to the Harbor CLI.

asciinema recording of the history command

Use the history.size config option to adjust the number of commands stored in the history.

# Set current history size
harbor history size 50

History is stored in the .history file in the Harbor workspace; you can also access or edit it manually.

# Using a built-in helper
harbor history ls | grep ollama
# Manually, using the file
cat $(harbor home)/.history | grep ollama

You can clear the history with the harbor history clear command.

# Clear the history
harbor history clear
# Empty
harbor history

Full Changelog: v0.1.22...v0.1.23

v0.1.22

14 Sep 21:30
@av av

v0.1.22 - JupyterLab integration

# [Optional] pre-build the image
harbor build jupyter

# Start the service
harbor up jupyter

# Open JupyterLab in the browser
harbor open jupyter

Your notebooks are stored in the Harbor workspace, under the jupyter directory.

# Opens the workspace folder in the file manager
harbor jupyter workspace

# See workspace location,
# relative to $(harbor home)
harbor config get jupyter.workspace

Additionally, you can configure the service to install additional packages.

# See deps help
# It's a manager for the underlying array
harbor jupyter deps -h

# Add packages to install, supports the same
# specifier syntax as pip
harbor jupyter deps add numpy
harbor jupyter deps add SomeProject@git+https://git.repo/some_pkg.git@1.3.1
harbor jupyter deps add SomePackage[PDF,EPUB]==3.1.4

Full Changelog: v0.1.21...v0.1.22

v0.1.21

14 Sep 12:02
@av av

v0.1.21 - Harbor profiles

Profiles are a way to save and load a complete configuration for a specific task, for example to quickly switch between models that take a few commands to configure. Profiles include all options that can be set via harbor config (which most of the CLI helpers alias).

Usage
harbor
  profile|profiles|p [ls|rm|add] - Manage Harbor profiles
    profile ls|list             - List all profiles
    profile rm|remove <name>    - Remove a profile
    profile add|save <name>     - Add current config as a profile
    profile set|use|load <name> - Use a profile

There are a few considerations when using profiles:

  • When a profile is loaded, modifications are not saved by default and will be lost when switching to another profile (or reloading the current one). Use harbor profile save <name> to persist changes after making them
  • Profiles are stored in the Harbor workspace and can be shared between different Harbor instances
  • Profiles are not versioned and are not guaranteed to work between different Harbor versions
  • You can also edit profiles as .env files in the workspace; it's not necessary to use the CLI
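Since a profile is just a .env file, editing one by hand might look like the sketch below. The file path and variable names here are illustrative assumptions, not taken from an actual Harbor installation:

```shell
# Hypothetical profile file, e.g. $(harbor home)/profiles/cpp8b.env
# (path and variable names are assumed for illustration)
HARBOR_SERVICES="llamacpp;webui"
HARBOR_LLAMACPP_EXTRA_ARGS="-ngl 99 --ctx-size 8192"
```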
Example
# 1. Switch to the default for a "clean" state
harbor profile use default

# 2. Configure services as needed
harbor defaults remove ollama
harbor defaults add llamacpp
harbor llamacpp model https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
harbor llamacpp args -ngl 99 --ctx-size 8192 -np 4 -ctk q8_0 -ctv q8_0 -fa

# 3. Save profile for future use
harbor profile add cpp8b

# 4. Up - runs in the background
harbor up

# 5. Adjust args - no parallelism, no kv quantization, no flash attention
# These changes are not saved in "cpp8b"
harbor llamacpp args -ngl 99 --ctx-size 2048

# 6. Save another profile
harbor profile add cpp8b-smart

# 7. Restart with "smart" settings
harbor profile use cpp8b-smart
harbor restart llamacpp

# 8. Switch between created profiles
harbor profile use default
harbor profile use cpp8b-smart
harbor profile use cpp8b

Full Changelog: v0.1.20...v0.1.21

v0.1.20

13 Sep 14:11
@av av

v0.1.20 - SGLang integration


SGLang is a fast serving framework for large language models and vision language models.

Starting

# [Optional] Pre-pull the image
harbor pull sglang

# Download with HF CLI
harbor hf download google/gemma-2-2b-it

# Set the model to run using HF specifier
harbor sglang model google/gemma-2-2b-it

# See original CLI help for available options
harbor run sglang --help

# Set the extra arguments via "harbor args"
harbor sglang args --context-length 2048 --disable-cuda-graph

Full Changelog: v0.1.19...v0.1.20

v0.1.19

13 Sep 10:31
@av av

v0.1.19 - lm-evaluation-harness integration


This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

Starting

# [Optional] pre-build the image
harbor build lmeval

Refer to the configuration for Harbor services

# Run evals
harbor lmeval --tasks gsm8k,hellaswag

# Open results folder
harbor lmeval results

Full Changelog: v0.1.18...v0.1.19

v0.1.18

12 Sep 16:54
@av av

v0.1.18

This is another maintenance release, mainly focused on the bench functionality:

  • vllm is bumped to v0.6.0 by default; harbor now also uses a version with bitsandbytes pre-installed (run harbor build vllm to pre-build it)
  • bench - judge prompt, eval log, and exponential backoff for the LLM
  • CheeseBench is out, smells good though

Full Changelog: v0.1.17...v0.1.18

v0.1.17

09 Sep 22:55
@av av

v0.1.17

This is a maintenance and bugfixes release without new service integrations.

  • bench service fixes
    • correctly handling interrupts
    • fixing broken API key support for the LLM and the Judge
  • bench now renders a simple HTML report
  • bench now records task completion time
  • Breaking change: harbor bench is now harbor bench run
  • aphrodite - switching to 0.6.0 release images (different docker repo, changed internal port)
  • aphrodite - configurable version
  • #12 fixed - NVIDIA detection now checks for nvidia-container-toolkit presence instead of the docker runtimes check
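The detection change above could be sketched roughly as follows; this is a minimal illustration of the idea (checking for the toolkit's nvidia-ctk binary on PATH), not Harbor's exact implementation:

```shell
# Illustrative sketch: treat the presence of the
# nvidia-container-toolkit CLI (nvidia-ctk) as the signal
# for NVIDIA support, instead of inspecting docker runtimes.
if command -v nvidia-ctk >/dev/null 2>&1; then
  echo "nvidia"
else
  echo "cpu"
fi
```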

Full Changelog: v0.1.16...v0.1.17

v0.1.16

08 Sep 22:52
@av av

v0.1.16 - bench

screenshot of apache superset with the data from bench

Something new this time - not an integration, but rather a custom-built service for Harbor.

bench is a built-in benchmark service for measuring the quality of LLMs. It was designed with a few specific goals in mind:

  • Work with OpenAI-compatible APIs (not running LLMs on its own)
  • Benchmark tasks and success criteria are defined by you
  • Focused on chat/instruction tasks

# [Optional] pre-build the image
harbor build bench

# Run the benchmark
# --name is required to give this run a meaningful name
harbor bench --name bench

# Open the results (folder)
harbor bench results

harbor doctor

A very lightweight troubleshooting utility.

user@os:~/code/harbor$ h doctor
00:52:24 [INFO] Running Harbor Doctor...
00:52:24 [INFO] ✔ Docker is installed and running
00:52:24 [INFO] ✔ Docker Compose is installed
00:52:24 [INFO] ✔ .env file exists and is readable
00:52:24 [INFO] ✔ default.env file exists and is readable
00:52:24 [INFO] ✔ Harbor workspace directory exists
00:52:24 [INFO] ✔ CLI is linked
00:52:24 [INFO] Harbor Doctor checks completed successfully.

Full Changelog: v0.1.15...v0.1.16

v0.1.15

07 Sep 14:23
@av av

v0.1.15 - omnichain integration

Handle: omnichain
URL: http://localhost:34081

Efficient visual programming for AI language models.

omnichain UI screenshot

Starting

# [Optional] pre-build the image
harbor build omnichain

# Start the service
harbor up omnichain

# [Optional] Open the UI
harbor open omnichain

Harbor runs a custom version of omnichain that is compatible with webui. See example workflow (Chat about Harbor CLI) in the service docs.

Misc

  • webui config cleanup
  • Instructions for copilot in Harbor repo
  • Fixing workspace for bionicgpt service: missing gitignore, fixfs routine

Full Changelog: v0.1.14...v0.1.15

v0.1.14

05 Sep 22:00
@av av

v0.1.14 - Lobe Chat integration

Lobe Chat splash image

Lobe Chat - an open-source, modern-design AI chat framework.

Starting

# Will start lobechat alongside
# the default webui
harbor up lobechat

If you want to make LobeChat your default UI:

# Replace the default webui with lobechat
# afterwards, you can just run `harbor up`
harbor defaults rm webui
harbor defaults add lobechat

Note

LobeChat supports only a list of predefined models for Ollama; they can't be pre-configured and have to be selected in the UI at runtime

Misc

  • half-baked autogpt service, not documented as it's not integrated with any of the Harbor services due to its implementation
  • Updating the harbor how prompt to reflect recent releases
  • Harbor User Guide - high-level user documentation

Full Changelog: v0.1.13...v0.1.14