diff --git a/README.md b/README.md
index ab93ef7..80c1c42 100644
--- a/README.md
+++ b/README.md
@@ -5,41 +5,32 @@
Cursor level of AI assistance for Sublime Text. I mean it.

-Works with all OpenAI'ish API: [llama.cpp](https://github.com/ggerganov/llama.cpp) server, [ollama](https://ollama.com) or whatever third party LLM hosting.
+Works with any OpenAI'ish API: [llama.cpp](https://github.com/ggerganov/llama.cpp) server, [ollama](https://ollama.com), or whatever third-party LLM hosting. Claude API support coming soon.

-![](static/media/ai_chat_right_phantom.png)
+> [!NOTE]
+> The 5.0.0 release is around the corner! Check out the [release notes](https://github.com/yaroslavyaroslav/OpenAI-sublime-text/blob/develop/messages/5.0.0.md) for details.
+
+![](static/media/ai_chat_left_full.png)

## Features

-- Code manipulation (append, insert and edit) selected code with OpenAI models.
-- **Phantoms** Get non-disruptive inline right in view answers from the model.
- **Chat mode** powered by whatever model you'd like.
-- **gpt-o1 support**.
-- **[llama.cpp](https://github.com/ggerganov/llama.cpp)**'s server, **[Ollama](https://ollama.com)** and all the rest OpenAI'ish API compatible.
+- **o3-mini** and **o1** support.
+- **[llama.cpp](https://github.com/ggerganov/llama.cpp)**'s server, **[ollama](https://ollama.com)**, and all the rest of the OpenAI'ish API compatible hosts.
- **Dedicated chat histories** and assistant settings per project.
- **Ability to send whole files** or their parts as expanded context.
+- **Phantoms**: get non-disruptive inline answers from the model right in the view.
- Markdown syntax with code language syntax highlighting (Chat mode only).
-- Server Side Streaming (SSE) (i.e. you don't have to wait for ages till GPT-4 print out something).
+- Server-Sent Events (SSE) streaming support.
- Various status bar info: model name, mode, sent/received tokens.
- Proxy support.

-### ChatGPT completion demo
-
-https://github.com/yaroslavyaroslav/OpenAI-sublime-text/assets/16612247/37b98cc2-e9cd-46a6-ac5d-03845313096b
-
-> video sped up to 1.7x
-
----
-
-https://github.com/yaroslavyaroslav/OpenAI-sublime-text/assets/16612247/69f609f3-336d-48e8-a574-3cb7fda5822c
-
-> video sped up to 1.7x
-
## Requirements

- Sublime Text 4
- **llama.cpp**, **ollama** installed _OR_
- Remote LLM service provider API key, e.g. [OpenAI](https://platform.openai.com)
+- Anthropic API key [coming soon].

## Installation

@@ -76,7 +67,7 @@ You can separate a chat history and assistant settings for a given project by ap
{
    "settings": {
        "ai_assistant": {
-            "cache_prefix": "your_project_name"
+            "cache_prefix": "/absolute/path/to/project/"
        }
    }
}
@@ -90,12 +81,12 @@ You can add a few things to your request:

To perform the former, just select something within the active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk split by a new line).

-To send the whole file(s) in advance to request you should `super+button1` on them to make all tabs of them to become visible in a **single view group** and then run `[New Message|Chat Model] with Sheets` command as shown on the screen below. Pay attention, that in given example only `README.md` and `4.0.0.md` will be sent to a server, but not a content of the `AI chat`.
+To append whole file(s) to the request, `super+button1` on them so that all of their tabs become visible in a **single view group**, then run the `OpenAI: Add Sheets to Context` command. Sheets can be deselected with the same command.

-![](static/media/file_selection_example.png)
+You can check the number of added sheets in the status bar and in the preview section on the `"OpenAI: Chat Model Select"` command call.
+
+![](static/media/ai_selector_preview.png)

-> [!NOTE]
-> It's also doesn't matter whether the file persists on a disc or it's just a virtual buffer with a text in it, if they're selected, their content will be send either way.

### Image handling

@@ -112,39 +103,21 @@ It expects an absolute path to image to be selected in a buffer or stored in cli
Phantom is the overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.

-1. You can set `"prompt_mode": "phantom"` for AI assistant in its settings.
-2. [optional] Select some text to pass in context in to manipulate with.
-3. Hit `OpenAI: New Message` or `OpenAI: Chat Model Select` and ask whatever you'd like in popup input pane.
-4. Phantom will appear below the cursor position or the beginning of the selection while the streaming LLM answer occurs.
-5. You can apply actions to the llm prompt, they're quite self descriptive and follows behavior deprecated in buffer commands.
-6. You can hit `ctrl+c` to stop prompting same as with in `panel` mode.
-
-![](static/media/phantom_example.png)
-
+1. [optional] Select some text to pass as context to work on.
+2. Pick `Phantom` as the output mode in the `OpenAI: Chat Model Select` quick panel.
+3. You can apply actions to the LLM response; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
+4. You can hit `ctrl+c` to stop prompting, same as in `panel` mode.

-> [!IMPORTANT]
-> Yet this is a standalone mode, i.e. an existing chat history won't be sent to a server on a run.
-
-> [!NOTE]
-> A more detailed manual, including various assistant configuration examples, can be found within the plugin settings.
-
-> [!WARNING]
-> The following in buffer commands are deprecated and will be removed in 5.0 release.
-> 1. [DEPRECATED] You can pick one of the following modes: `append`, `replace`, `insert`. They're quite self-descriptive. They should be set up in assistant settings to take effect.
-> 2. [DEPRECATED] Select some text (they're useless otherwise) to manipulate with and hit `OpenAI: New Message`.
-> 4. [DEPRECATED] The plugin will response accordingly with **appending**, **replacing** or **inserting** some text.
+![](static/media/phantom_actions.png)

### Other features

### Open Source models support (llama.cpp, ollama)

-1. Replace `"url"` setting of a given model to point to whatever host you're server running on (e.g.`"http://localhost:8080"`).
-2. ~~[Optional] Provide a `"token"` if your provider required one.~~ **Temporarily mandatory, see warning below.**
+1. Replace the `"url"` setting of a given model to point to whatever host your server is running on (e.g. `http://localhost:8080/v1/chat/completions`).
+2. Provide a `"token"` if your provider requires one.
3. Tweak `"chat_model"` to a model of your choice and you're set.

-> [!WARNING]
-> Due to a known issue, a token value of 10 or more characters is currently required even for unsecured servers. [More details here.](#workaround-for-64)
-
> [!NOTE]
> You can set both `url` and `token` either globally or per assistant instance, so you can freely switch between closed and open source models within a single session.
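+
+As a minimal sketch, such a model entry might look like the following (the `"url"`, `"token"`, and `"chat_model"` keys are taken from the steps above; the `"name"` key and all values are illustrative assumptions, not defaults):
+
+```json
+{
+    // Hypothetical example values — adjust to your own server and model.
+    "name": "llama.cpp local",
+    "url": "http://localhost:8080/v1/chat/completions",
+    "token": "sk-dummy",
+    "chat_model": "llama-3.1-8b-instruct"
+}
+```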
@@ -193,7 +166,7 @@ You can setup it up by overriding the proxy property in the `OpenAI completion`
> All selected code will be sent to the OpenAI servers (if not using custom API provider) for processing, so make sure you have all necessary permissions to do so.

> [!NOTE]
-> This one was initially written at 80% by a GPT3.5 back then. I was there mostly for debugging purposes, rather than digging in into ST API. This is a pure magic, I swear!
+> Dedicated to GPT3.5, the one who initially wrote 80% of this back then. It felt like pure magic!

[stars]: https://github.com/yaroslavyaroslav/OpenAI-sublime-text/stargazers
[img-stars]: static/media/star-on-github.svg
diff --git a/messages/5.0.0.md b/messages/5.0.0.md
index 483569b..30ac8c7 100644
--- a/messages/5.0.0.md
+++ b/messages/5.0.0.md
@@ -2,19 +2,19 @@
## tldr;

-I got bored and rewrote the whole thing in rust completely. There's not that much brand new features so far, but a lot of them are coming, since the core of the pacakge is now much more reliable and less tangled. Here it is [btw](https://github.com/yaroslavyaroslav/llm_runner).
+I got bored and rewrote the whole thing in Rust completely. There are not that many brand new features so far, but a lot of them are coming, since the core of the package is now much more reliable and less tangled. [Here it is btw](https://github.com/yaroslavyaroslav/llm_runner).

## Features

1. The core of the plugin is implemented in Rust, thus it has become way faster and more reliable.
2. Context passing enhancement:
-    - files/sheets passes as references now, i.e. all the changes made within are preserved in the next llm request
+    - files/sheets are passed as references now, i.e. all the changes made within them are preserved in the next LLM request.
    - they're togglable now, i.e. you pick those you want to include, call a command, and they are passed along for the whole session until you toggle them back off.
-    - built in output panels contnet passing, e.g. build systems and lsp diagnostic outputs can be passed with a command.
-3. Model picker command now supports nested list flow (i.e. ListInputHandler), thus you can switch between view modes and the models on the fly. `"prompt_mode"` in model settings is ignored and can be deleted.
-4. AssistantSettings now provides `"api_type"`, where the options is `"plain_text"`, `"open_ai"` and `"antropic"` (not implemented). This is the ground work already done to provide claude and all the rest of the custom services support in thr nearest future. Please take a look at the asssitant settings part if you're curious about the details.
+    - built-in output panel content passing, e.g. build system and LSP diagnostic outputs can be passed with a command.
+3. The model picker command now supports a nested list flow (i.e. `ListInputHandler`), thus you can switch between view modes and models on the fly. `"prompt_mode"` in model settings is ignored and can be deleted.
+4. `AssistantSettings` now provides `"api_type"`, where the options are `"plain_text"` [default], `"open_ai"` and `"antropic"` [not implemented]. This is the groundwork already done to provide Claude and all the rest of the custom services support in the nearest future. Please take a look at the assistant settings part if you're curious about the details, and see the sketch right after this list.
5. Chat history and the picked model can now be stored in an arbitrary folder.
-6. Functions support[not implemented yet], there're few built in functions provided to allow model to manage the code.
+6. Functions support: there are a few built-in functions provided to allow the model to manage the code [`replace_text_with_another_text`, `replace_text_for_whole_file`, `read_region_content`, `get_working_directory_content`].
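+
+A minimal sketch of an assistant entry using `"api_type"`, assuming it sits alongside the usual keys like `"name"`, `"url"` and `"chat_model"` (those keys and all values here are illustrative, not confirmed defaults):
+
+```json
+{
+    // Illustrative entry; the "api_type" options follow item 4 above.
+    "name": "OpenAI'ish local server",
+    "api_type": "open_ai", // "plain_text" [default] | "open_ai" | "antropic" [not implemented]
+    "url": "http://localhost:8080/v1/chat/completions",
+    "chat_model": "some-model"
+}
+```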

## Installation

diff --git a/static/media/ai_chat_left_full.png b/static/media/ai_chat_left_full.png
new file mode 100644
index 0000000..181c27a
Binary files /dev/null and b/static/media/ai_chat_left_full.png differ
diff --git a/static/media/ai_selector_preview.png b/static/media/ai_selector_preview.png
new file mode 100644
index 0000000..8820f79
Binary files /dev/null and b/static/media/ai_selector_preview.png differ
diff --git a/static/media/editing_thumbnail.png b/static/media/editing_thumbnail.png
deleted file mode 100644
index 6369655..0000000
Binary files a/static/media/editing_thumbnail.png and /dev/null differ
diff --git a/static/media/file_selection_example.png b/static/media/file_selection_example.png
deleted file mode 100644
index 1c645ed..0000000
Binary files a/static/media/file_selection_example.png and /dev/null differ
diff --git a/static/media/panel_thumbnail.png b/static/media/panel_thumbnail.png
deleted file mode 100644
index c156b4b..0000000
Binary files a/static/media/panel_thumbnail.png and /dev/null differ
diff --git a/static/media/phantom_actions.png b/static/media/phantom_actions.png
new file mode 100644
index 0000000..c199a9a
Binary files /dev/null and b/static/media/phantom_actions.png differ