Commit
yaroslavyaroslav committed Feb 7, 2025
1 parent 2298a56 commit 1e112b1
Showing 2 changed files with 12 additions and 26 deletions.
12 changes: 6 additions & 6 deletions messages/5.0.0.md
@@ -11,9 +11,10 @@ I got bored and rewrote the whole thing in rust completely. There's not that muc
- files/sheets are passed as references now, i.e. all the changes made within them are preserved in the next llm request
- they're togglable now, i.e. you pick the ones you want to include, call a command, and they are passed along for the whole session until you toggle them back off.
- built-in output panel content passing, e.g. build system and lsp diagnostic outputs can be passed with a command.
- 3. AssistantSettings now provides `"api_type"`, where the options are `"plain_text"`, `"open_ai"` and `"antropic"` (not implemented). This is the groundwork for supporting Claude and the rest of the custom services in the near future. Please take a look at the assistant settings part if you're curious about the details.
- 4. Chat history and the picked model can now be stored in an arbitrary folder.
- 5. Functions support [not implemented yet]; there are a few built-in functions provided to allow the model to manage the code.
+ 3. Model picker command now supports a nested list flow (i.e. ListInputHandler), so you can switch between view modes and models on the fly. `"prompt_mode"` in model settings is ignored and can be deleted.
+ 4. AssistantSettings now provides `"api_type"`, where the options are `"plain_text"`, `"open_ai"` and `"antropic"` (not implemented). This is the groundwork for supporting Claude and the rest of the custom services in the near future; a sketch of such a setting follows this list. Please take a look at the assistant settings part if you're curious about the details.
+ 5. Chat history and the picked model can now be stored in an arbitrary folder.
+ 6. Functions support [not implemented yet]; there are a few built-in functions provided to allow the model to manage the code.
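
A minimal sketch of what such an assistant entry could look like (the field names come from openAI.sublime-settings below; the URL and model name are illustrative assumptions, not shipped defaults):

```json
{
    "name": "Claude Assistant",
    "api_type": "antropic", // not implemented yet; "open_ai" and "plain_text" work today
    "url": "https://api.anthropic.com/v1/messages", // assumed endpoint, check your provider's docs
    "token": "your_token_here", // at least 10 characters
    "chat_model": "claude-3-5-sonnet" // hypothetical model name
}
```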

## Installation

@@ -33,6 +34,5 @@ You have to switch to the beta branch in Package Control settings for this package.

1. Claude/deepseek/gemini support
2. View mode goodies implementation: better chat structure, code block quick actions, history management.
- 2. Input panel to output panel for request replacement.
- 4. Fancy picker pane support.
- 3. Anthropic [MCP implementation](https://docs.anthropic.com/en/docs/build-with-claude/mcp)
+ 3. Input panel to output panel for request replacement.
+ 4. Anthropic [MCP implementation](https://docs.anthropic.com/en/docs/build-with-claude/mcp)
26 changes: 6 additions & 20 deletions openAI.sublime-settings
@@ -1,9 +1,9 @@
{
// URL for OpenAI-compatible APIs.
// It must start with http:// or https://, which selects the protocol for the connection. Use http:// when using localhost.
// Selected parts of code and the prompt will be sent to that URL, so make sure you have all the necessary permissions to do so.
// Example: "http://localhost:11434" (assuming Ollama is running on localhost)
"url": "https://api.openai.com",
// Example: "http://localhost:11434/v1/chat/completions" (assuming Ollama is running on localhost)
// Full url has to be provided
"url": "https://api.openai.com/v1/chat/completions",

// Your openAI token
// Token can be anything so long as it is at least 10 characters long.
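
// A minimal sketch for a local server that ignores auth; the token below is a
// placeholder only (it merely has to be at least 10 characters long):
// "url": "http://localhost:11434/v1/chat/completions",
// "token": "dummy_token",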
@@ -52,7 +52,7 @@
// "name",
// "output_mode",
// "chat_model",
// "sheets"
// "sheets" // number of sheets selected as context
],

// Proxy setting
@@ -75,11 +75,6 @@
// A string that will be presented in the command palette.
"name": "Example", // **REQUIRED**

- // Mode of how the plugin should present its output; available options:
- // - `view`: the model response is written to either a separate view or an output panel, if a chat view doesn't exist in the given window.
- // - `phantom`: llm output is shown in a phantom in the last active view, in a way that is non-disruptive to the buffer content; each such phantom provides useful commands to handle the output further.
- "output_mode": "view", // **REQUIRED**

// The model that generates the chat completion.
// Generally this should be either "gpt-4o-latest" or "gpt-4o-mini", or one of their pinned versions.
// If using custom API, refer to their documentation for supported models.
@@ -101,7 +96,7 @@

// Your token for whatever service you use
// OpenAI or any other similar API token goes here.
"token": "",
"token": "dummy_token",

// Toggle for the llm's function calling capability
// Check whether your llm supports this feature before toggling it on
@@ -127,7 +122,7 @@
"max_completion_tokens": 4096,

// `"api_type": "open_ai"` only
- // The matter of efforts reasoning models to put into the answer
+ // How much effort reasoning models should put into the answer
// - "low"
// - "medium"
// - "high"
@@ -170,7 +165,6 @@
// Examples
{
"name": "General Assistant Localhost",
"output_mode": "panel",
"url": "http://127.0.0.1:8080", // See ma, no internet connection.
"token": "",
"chat_model": "Llama-3-8b-Q4-chat-hf",
@@ -180,14 +174,6 @@
},
{
"name": "General Assistant",
"output_mode": "panel",
"chat_model": "gpt-4o-mini",
"assistant_role": "1. You are to provide clear, concise, and direct responses.\n2. Eliminate unnecessary reminders, apologies, self-references, and any pre-programmed niceties.\n3. Maintain a casual tone in your communication.\n4. Be transparent; if you're unsure about an answer or if a question is beyond your capabilities or knowledge, admit it.\n5. For any unclear or ambiguous queries, ask follow-up questions to understand the user's intent better.\n6. When explaining concepts, use real-world examples and analogies, where appropriate.\n7. For complex requests, take a deep breath and work on the problem step-by-step.\n8. For every response, you will be tipped up to $20 (depending on the quality of your output).\n10. Always look closely to **ALL** the data provided by a user. It's very important to look so closely as you can there. Ppl can die otherways.\n11. If user strictly asks you about to write the code, write the code first, without explanation, and add them only by additional user request.",
"max_tokens": 4000,
},
{
"name": "General Assistant",
"output_mode": "phantom",
"chat_model": "gpt-4o-mini",
"assistant_role": "1. You are to provide clear, concise, and direct responses.\n2. Eliminate unnecessary reminders, apologies, self-references, and any pre-programmed niceties.\n3. Maintain a casual tone in your communication.\n4. Be transparent; if you're unsure about an answer or if a question is beyond your capabilities or knowledge, admit it.\n5. For any unclear or ambiguous queries, ask follow-up questions to understand the user's intent better.\n6. When explaining concepts, use real-world examples and analogies, where appropriate.\n7. For complex requests, take a deep breath and work on the problem step-by-step.\n8. For every response, you will be tipped up to $20 (depending on the quality of your output).\n10. Always look closely to **ALL** the data provided by a user. It's very important to look so closely as you can there. Ppl can die otherways.\n11. If user strictly asks you about to write the code, write the code first, without explanation, and add them only by additional user request.",
"max_tokens": 4000,
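
Putting the new pieces together, a minimal assistant entry under the 5.0.0 scheme might look like this (a sketch, not one of the shipped defaults; the model is only an example):

{
    "name": "OpenAI Assistant",
    "api_type": "open_ai",
    "url": "https://api.openai.com/v1/chat/completions", // full URL, per the new requirement
    "token": "sk-your-real-token", // at least 10 characters
    "chat_model": "gpt-4o-mini",
    "max_completion_tokens": 4096,
},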
