diff --git a/Default.sublime-commands b/Default.sublime-commands
index 7754e1d..d6d95fc 100644
--- a/Default.sublime-commands
+++ b/Default.sublime-commands
@@ -1,20 +1,41 @@
[
{
- "caption": "OpenAI Complete",
+ "caption": "OpenAI: Complete",
"command": "openai",
"args": {
"mode": "completion"
}
},
{
- "caption": "OpenAI Insert",
+ "caption": "OpenAI: New Message",
+ "command": "openai",
+ "args": {
+ "mode": "chat_completion"
+ }
+ },
+ {
+ "caption": "OpenAI: Reset Chat History",
+ "command": "openai",
+ "args": {
+ "mode": "reset_chat_history"
+ }
+ },
+ {
+ "caption": "OpenAI: Refresh Chat",
+ "command": "openai",
+ "args": {
+ "mode": "refresh_output_panel"
+ }
+ },
+ {
+ "caption": "OpenAI: Insert",
"command": "openai",
"args": {
"mode": "insertion"
}
},
{
- "caption": "OpenAI Edit",
+ "caption": "OpenAI: Edit",
"command": "openai",
"args": {
"mode": "edition"
diff --git a/README.md b/README.md
index bd4069e..72c1e11 100644
--- a/README.md
+++ b/README.md
@@ -7,19 +7,40 @@ OpenAI Completion is a Sublime Text 4 plugin that uses the OpenAI natural langua
- Append suggested text to selected code
- Insert suggested text instead of placeholder in selected code
- Edit selected code according to a given command
+- **ChatGPT mode support**.
+- [Multi]Markdown syntax with syntax highlighting support (ChatGPT mode only).
+- Proxy support.
+- **GPT-4 support**.
+
+### ChatGPT completion demo
-### Demo
Click to see screens
+
+(screenshots: static/chatgpt_completion/image1.png … image3.png, separated by horizontal rules)
+
+### Simple completion demo
+
+Click to see screens
+
+(screenshots: static/simple_completion/image1.png … image4.png, separated by horizontal rules)
+
## Requirements
@@ -29,24 +50,73 @@ OpenAI Completion is a Sublime Text 4 plugin that uses the OpenAI natural langua
- Internet connection
## Usage
+
+### ChatGPT usage
+
+ChatGPT mode works the following way:
+1. Run the `OpenAI: New Message` command.
+2. Wait until OpenAI returns a response (be VERY patient: the GPT-4 model is way slower than you might imagine).
+3. On response, the plugin opens the `OpenAI completion` output panel with the whole log of your chat in the [any] active window.
+4. If you want to fetch the chat history into another window manually, run the `OpenAI: Refresh Chat` command.
+5. When you're done, or want to start over, run the `OpenAI: Reset Chat History` command, which deletes the chat cache.
+
+> **Note**
+> You can bind both of the most used commands, `OpenAI: New Message` and `OpenAI: Show output panel`; to do that, follow `Settings` -> `Package Control` -> `OpenAI completion` -> `Key Bindings`.
+
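+A minimal sketch of such a binding (the key combination here is just an illustration, not a shipped default; the command and args follow `Default.sublime-commands` above):
+
+```json
+[
+    {
+        "keys": ["super+k", "super+m"],
+        "command": "openai",
+        "args": { "mode": "chat_completion" }
+    }
+]
+```
+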
+> **Note**
+> As of now there's just a single chat history instance. This limitation will likely go away at some point.
+
+### Single shot completion usage
+
1. Open the Sublime Text 4 editor and select some code.
-2. Open the command palette and run the `OpenAI Append`, `OpenAI Insert`, or `OpenAI Edit` command.
- - To use the `OpenAI Insert` command, the selected code should include a placeholder `[insert]`. This can be modified in the settings.
+2. Open the command palette and run the `OpenAI: Complete`, `OpenAI: Insert`, or `OpenAI: Edit` command.
+ - To use the `OpenAI: Insert` command, the selected code should include a placeholder `[insert]`. This can be modified in the settings.
3. **The plugin will send the selected code to the OpenAI servers**, using your API key, to generate a suggestion for editing the code.
4. The suggestion will modify the selected code in the editor, according to the command you ran (append, insert, or edit).
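+
+For example, to use `OpenAI: Insert` you could select text like the following (a hypothetical illustration; `[insert]` is the default placeholder the model fills in):
+
+```
+def factorial(n):
+    [insert]
+    return result
+```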
+## Other features
+
+### [Multi]Markdown syntax with syntax highlighting support
+
+The ChatGPT output panel supports markdown syntax highlighting. It should just work (if it doesn't, please report an issue).
+
+It's also highly recommended to install the [`MultimarkdownEditing`](https://sublimetext-markdown.github.io/MarkdownEditing/) package to get syntax highlighting for the code snippets ChatGPT provides. `OpenAI completion` picks it up implicitly for the output panel content.
+
+### Proxy support
+
+You can now route this plugin's requests through a proxy. Set it up by overriding the `proxy` property in the `OpenAI completion` settings as follows:
+
+```json
+"proxy": {
+ "address": "127.0.0.1",
+ "port": 9898
+}
+```
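+
+Under the hood the plugin opens an HTTPS connection to the proxy and tunnels `api.openai.com` through it (see `create_connection` in `openai_worker.py`), so the proxy is expected to support CONNECT tunneling.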
+
+### GPT-4 support
+
+> **Note**
+> You have to have access to the `GPT-4` model within your account to use this feature.
+
+It should just work: set the `chat_model` setting to `gpt-4`, as sketched below. Please be patient while working with it: (1) it's **very** slow and (2) an answer appears only after the whole completion has finished. It could easily take 10 seconds or more.
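+
+A minimal settings override for that (all other settings keep their defaults):
+
+```json
+{
+    "chat_model": "gpt-4"
+}
+```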
+
+
## Settings
The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. This is required for the plugin to work. To set your API key, open the settings within `Preferences` -> `Package Settings` -> `OpenAI` -> `Settings` and paste your API key in the token property, as follows:
+
```JSON
{
"token": "sk-your-token",
}
```
-## Note
-Please note that OpenAI is a paid service, and you will need to have an API key and sufficient credit to use the plugin.
+## Disclaimers
+
+> **Note**
+> Please note that OpenAI is a paid service, and you will need to have an API key and sufficient credit to use this plugin.
-Additionally, **all selected code will be sent to the OpenAI servers for processing, so make sure you have the necessary permissions to do so**.
+> **Warning**
+> **All selected code will be sent to the OpenAI servers for processing, so make sure you have all necessary permissions to do so**.
-## Disclamer
-This one was at 80% written by that thing itself including this readme. I was here mostly for debugging purposes, rather then designing and researching. This is pure magic, i swear.
+> This plugin was about 80% written by that thing itself, including this readme. I was here mostly for debugging purposes rather than designing and researching. This is pure magic, I swear.
diff --git a/cacher.py b/cacher.py
new file mode 100644
index 0000000..8c4adef
--- /dev/null
+++ b/cacher.py
@@ -0,0 +1,46 @@
+import sublime
+import os
+from . import jl_utility as jl
+
+
+class Cacher():
+ def __init__(self) -> None:
+ cache_dir = sublime.cache_path()
+ plugin_cache_dir = os.path.join(cache_dir, "OpenAI completion")
+ if not os.path.exists(plugin_cache_dir):
+ os.makedirs(plugin_cache_dir)
+
+ # Create the file path to store the data
+ self.history_file = os.path.join(plugin_cache_dir, "chat_history.jl")
+
+ def read_all(self):
+ json_objects = []
+ reader = jl.reader(self.history_file)
+ for json_object in reader:
+ json_objects.append(json_object)
+
+ return json_objects
+
+ def append_to_cache(self, cache_lines):
+ # Prime a JSON Lines writer for the history file and append every given line
+ writer = jl.writer(self.history_file)
+ next(writer) # advance the coroutine to its first yield so it can accept send()
+ for line in cache_lines:
+ writer.send(line)
+
+ def drop_first(self, number=4):
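+ # Dropping 4 lines by default removes the two oldest question/answer pairs;
+ # exec_net_request relies on this to recover from a context_length_exceeded error.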
+ # Read all lines from the JSON Lines file
+ with open(self.history_file, "r") as file:
+ lines = file.readlines()
+
+ # Remove the specified number of lines from the beginning
+ lines = lines[number:]
+
+ # Write the remaining lines back to the cache file
+ with open(self.history_file, "w") as file:
+ file.writelines(lines)
+
+ def drop_all(self):
+ with open(self.history_file, "w") as _:
+ pass # Truncate the file by opening it in 'w' mode and doing nothing
diff --git a/jl_utility.py b/jl_utility.py
new file mode 100644
index 0000000..76e0ca8
--- /dev/null
+++ b/jl_utility.py
@@ -0,0 +1,38 @@
+import json
+from typing import Iterator, Generator
+
+
+def reader(fname: str) -> Iterator[dict]:
+ with open(fname) as file:
+ for line in file:
+ obj = json.loads(line.strip())
+ yield obj
+
+
+def writer(fname: str, mode: str = 'a') -> Generator[None, dict, None]:
+ with open(fname, mode) as file:
+ while True:
+ obj = yield
+ line = json.dumps(obj, ensure_ascii=False)
+ file.write(f"{line}\n")
+
+
+# if __name__ == "__main__":
+# # Read employees from employees.jl
+# reader = jl_reader("employees.jl")
+
+# # Create a new JSON Lines writer for output.jl
+# writer = jl_writer("output.jl")
+# next(writer)
+
+# for employee in reader:
+# id = employee["id"]
+# name = employee["name"]
+# dept = employee["department"]
+# print(f"#{id} - {name} ({dept})")
+
+# # Write the employee data to output.jl
+# writer.send(employee)
+
+# # Close the writer
+# writer.close()
\ No newline at end of file
diff --git a/messages.json b/messages.json
new file mode 100644
index 0000000..7446147
--- /dev/null
+++ b/messages.json
@@ -0,0 +1,3 @@
+{
+ "2.0.0": "messages/2.0.0.txt"
+}
\ No newline at end of file
diff --git a/messages/2.0.0.txt b/messages/2.0.0.txt
new file mode 100644
index 0000000..8842c5c
--- /dev/null
+++ b/messages/2.0.0.txt
@@ -0,0 +1,50 @@
+=> 2.0.0
+
+# Features summary
+- ChatGPT mode support.
+- [Multi]Markdown syntax with syntax highlighting support (ChatGPT mode only).
+- Proxy support.
+- GPT-4 support.
+
+## ChatGPT mode
+
+ChatGPT mode works the following way:
+1. Run the `OpenAI: New Message` command.
+2. Wait until OpenAI returns a response (be VERY patient: the GPT-4 model is way slower than you might imagine).
+3. On response, the plugin opens the `OpenAI completion` output panel with the whole log of your chat in the [any] active window.
+4. If you want to fetch the chat history into another window manually, run the `OpenAI: Refresh Chat` command.
+5. When you're done, or want to start over, run the `OpenAI: Reset Chat History` command, which deletes the chat cache.
+
+> You can bind both of the most used commands, `OpenAI: New Message` and `OpenAI: Show output panel`; to do that, follow `Settings` -> `Package Control` -> `OpenAI completion` -> `Key Bindings`.
+
+> As of now there's just a single chat history instance. This limitation will likely go away at some point, though probably not soon.
+
+## [Multi]Markdown syntax with syntax highlighting support (ChatGPT mode only)
+
+The ChatGPT output panel supports markdown syntax highlighting. It should just work (if it doesn't, please report an issue).
+
+It's also highly recommended to install the [`MultimarkdownEditing`](https://sublimetext-markdown.github.io/MarkdownEditing/) package to get syntax highlighting for the code snippets ChatGPT provides. `OpenAI completion` picks it up implicitly for the output panel content.
+
+## Proxy support
+
+You can now route this plugin's requests through a proxy. Set it up by overriding the `proxy` property in the `OpenAI completion` settings as follows:
+
+```json
+// Proxy setting
+"proxy": {
+ // Proxy address
+ "address": "127.0.0.1",
+
+ // Proxy port
+ "port": 9898
+}
+```
+
+## GPT-4 support
+
+It should just work: set the `chat_model` setting to `gpt-4`. Please be patient while working with it: (1) it's **very** slow and (2) an answer appears only after the whole completion has finished. It could easily take 10 seconds or more.
+
+## Disclaimer
+
+Unfortunately, this version hasn't been covered with comprehensive testing, so there could be bugs. Please report them, and I'll be happy to release a patch.
\ No newline at end of file
diff --git a/messages/install.txt b/messages/install.txt
new file mode 100644
index 0000000..da1b781
--- /dev/null
+++ b/messages/install.txt
@@ -0,0 +1,48 @@
+# Features summary
+- ChatGPT mode support.
+- [Multi]Markdown syntax with syntax highlighting support (ChatGPT mode only).
+- Proxy support.
+- GPT-4 support.
+
+## ChatGPT mode
+
+ChatGPT mode works the following way:
+1. Run the `OpenAI: New Message` command.
+2. Wait until OpenAI returns a response (be VERY patient: the GPT-4 model is way slower than you might imagine).
+3. On response, the plugin opens the `OpenAI completion` output panel with the whole log of your chat in the [any] active window.
+4. If you want to fetch the chat history into another window manually, run the `OpenAI: Refresh Chat` command.
+5. When you're done, or want to start over, run the `OpenAI: Reset Chat History` command, which deletes the chat cache.
+
+> You can bind both of the most used commands, `OpenAI: New Message` and `OpenAI: Show output panel`; to do that, follow `Settings` -> `Package Control` -> `OpenAI completion` -> `Key Bindings`.
+
+> As of now there's just a single chat history instance. This limitation will likely go away at some point, though probably not soon.
+
+## [Multi]Markdown syntax with syntax highlighting support (ChatGPT mode only)
+
+The ChatGPT output panel supports markdown syntax highlighting. It should just work (if it doesn't, please report an issue).
+
+It's also highly recommended to install the [`MultimarkdownEditing`](https://sublimetext-markdown.github.io/MarkdownEditing/) package to get syntax highlighting for the code snippets ChatGPT provides. `OpenAI completion` picks it up implicitly for the output panel content.
+
+## Proxy support
+
+You can now route this plugin's requests through a proxy. Set it up by overriding the `proxy` property in the `OpenAI completion` settings as follows:
+
+```json
+// Proxy setting
+"proxy": {
+ // Proxy address
+ "address": "127.0.0.1",
+
+ // Proxy port
+ "port": 9898
+}
+```
+
+## GPT-4 support
+
+It should just work: set the `chat_model` setting to `gpt-4`. Please be patient while working with it: (1) it's **very** slow and (2) an answer appears only after the whole completion has finished. It could easily take 10 seconds or more.
+
+## Disclaimer
+
+Unfortunately, this version hasn't been covered with comprehensive testing, so there could be bugs. Please report them, and I'll be happy to release a patch.
\ No newline at end of file
diff --git a/openAI.sublime-settings b/openAI.sublime-settings
index d170c7c..88e0c32 100644
--- a/openAI.sublime-settings
+++ b/openAI.sublime-settings
@@ -1,10 +1,22 @@
{
+ // The model which will generate the edit.
+ // Some models are suitable for natural language tasks, others specialize in code.
+ // Learn more at https://beta.openai.com/docs/models
+ // ____Affects only the edition mode.____
+ "edit_model": "code-davinci-edit-001",
+
// The model which will generate the completion.
// Some models are suitable for natural language tasks, others specialize in code.
// Learn more at https://beta.openai.com/docs/models
- // Does not affect editing mode.
+ // ____Affects only the completion and insertion modes.____
"model": "text-davinci-003",
+ // The model which will generate the chat completion.
+ // Some models are suitable for natural language tasks, others specialize in code.
+ // Learn more at https://beta.openai.com/docs/models
+ // ____Affects only the chat completion mode.____
+ "chat_model": "gpt-3.5-turbo",
+
// Controls randomness: Lowering results in less random completions.
// As the temperature approaches zero, the model will become deterministic and repetitive.
"temperature": 0.7,
@@ -36,17 +48,21 @@
// Your openAI token
"token": "",
- // Ask the AI to format its answers with multimarkdown markup.
- // By "ask", I mean it: it will literally add "format the answer with multimarkdown markup" to the question.
- // Affects only `completion` command.
- "multimarkdown": false,
-
- // Manages where to print the output of the completion command:
- // false — print into the editor
- // true — print into separate output panel (named "OpenAI")
- "output_panel": false,
+ // Apply Sublime Text markdown syntax highlighting to the OpenAI completion output panel text.
+ // Affects only the `chat_completion` command.
+ // The `MultimarkdownEditing` package is highly recommended to get syntax highlighting for code snippets.
+ "markdown": true,
// Minimum amount of characters selected to perform completion.
// Does not affect completion command if the "output_panel" setting is true.
- "minimum_selection_length": 20
-}
+ "minimum_selection_length": 20,
+
+ // Proxy setting
+ "proxy": {
+ // Proxy address
+ "address": "",
+
+ // Proxy port
+ "port": 8080
+ }
+}
\ No newline at end of file
diff --git a/openai.py b/openai.py
index 6c434a2..843022e 100644
--- a/openai.py
+++ b/openai.py
@@ -1,184 +1,15 @@
import sublime, sublime_plugin
import functools
-import http.client
-import threading
-import json
+from .cacher import Cacher
import logging
-
-
-class OpenAIWorker(threading.Thread):
- def __init__(self, edit, region, text, view, mode, command):
- self.edit = edit
- self.region = region
- self.text = text
- self.view = view
- self.mode = mode
- self.command = command # optional
- self.settings = sublime.load_settings("openAI.sublime-settings")
- super(OpenAIWorker, self).__init__()
-
- def prompt_completion(self, completion):
- completion = completion.replace("$", "\$")
- if self.mode == 'insertion':
- result = self.view.find(self.settings.get('placeholder'), 0, 1)
- if result:
- self.view.sel().clear()
- self.view.sel().add(result)
- # Replace the placeholder with the specified replacement text
- self.view.run_command("insert_snippet", {"contents": completion})
- return
-
- if self.mode == 'completion':
- if self.settings.get('output_panel'):
- window = sublime.active_window()
-
- output_view = window.find_output_panel("OpenAI") if window.find_output_panel("OpenAI") != None else window.create_output_panel("OpenAI")
- output_view.run_command('append', {'characters': f'## {self.text}'})
- output_view.run_command('append', {'characters': '\n------------'})
- output_view.run_command('append', {'characters': completion})
- output_view.run_command('append', {'characters': '\n============\n\n'})
- window.run_command("show_panel", {"panel": "output.OpenAI"})
- else:
- region = self.view.sel()[0]
- if region.a <= region.b:
- region.a = region.b
- else:
- region.b = region.a
-
- self.view.sel().clear()
- self.view.sel().add(region)
- # Replace the placeholder with the specified replacement text
- self.view.run_command("insert_snippet", {"contents": completion})
- return
-
- if self.mode == 'edition': # it's just replacing all given text for now.
- region = self.view.sel()[0]
- self.view.run_command("insert_snippet", {"contents": completion})
- return
-
- def exec_net_request(self, connect: http.client.HTTPSConnection):
- try:
- res = connect.getresponse()
- data = res.read()
- status = res.status
- data_decoded = data.decode('utf-8')
- connect.close()
- completion = json.loads(data_decoded)['choices'][0]['text']
- self.prompt_completion(completion)
- except KeyError:
- sublime.error_message("Exception\n" + "The OpenAI response could not be decoded. There could be a problem on their side. Please look in the console for additional error info.")
- logging.exception("Exception: " + str(data_decoded))
- return
-
- except Exception as ex:
- sublime.error_message(f"Server Error: {str(status)}\n{ex}")
- return
-
- def complete(self):
- conn = http.client.HTTPSConnection("api.openai.com")
- payload = {
- "prompt": self.text,
- "model": self.settings.get("model"),
- "temperature": self.settings.get("temperature"),
- "max_tokens": self.settings.get("max_tokens"),
- "top_p": self.settings.get("top_p"),
- "frequency_penalty": self.settings.get("frequency_penalty"),
- "presence_penalty": self.settings.get("presence_penalty")
- }
- json_payload = json.dumps(payload)
-
- token = self.settings.get('token')
-
-
- headers = {
- 'Content-Type': "application/json",
- 'Authorization': 'Bearer {}'.format(token),
- 'cache-control': "no-cache",
- }
- conn.request("POST", "/v1/completions", json_payload, headers)
- self.exec_net_request(connect=conn)
-
- def insert(self):
- conn = http.client.HTTPSConnection("api.openai.com")
- parts = self.text.split(self.settings.get('placeholder'))
- try:
- if not len(parts) == 2:
- raise AssertionError("There is no placeholder '" + self.settings.get('placeholder') + "' within the selected text. There should be exactly one.")
- except Exception as ex:
- sublime.error_message("Exception\n" + str(ex))
- logging.exception("Exception: " + str(ex))
- return
-
- payload = {
- "model": self.settings.get("model"),
- "prompt": parts[0],
- "suffix": parts[1],
- "temperature": self.settings.get("temperature"),
- "max_tokens": self.settings.get("max_tokens"),
- "top_p": self.settings.get("top_p"),
- "frequency_penalty": self.settings.get("frequency_penalty"),
- "presence_penalty": self.settings.get("presence_penalty")
- }
- json_payload = json.dumps(payload)
-
- token = self.settings.get('token')
-
- headers = {
- 'Content-Type': "application/json",
- 'Authorization': 'Bearer {}'.format(token),
- 'cache-control': "no-cache",
- }
- conn.request("POST", "/v1/completions", json_payload, headers)
- self.exec_net_request(connect=conn)
-
- def edit_f(self):
- conn = http.client.HTTPSConnection("api.openai.com")
- payload = {
- "model": "code-davinci-edit-001", # could be text-davinci-edit-001
- "input": self.text,
- "instruction": self.command,
- "temperature": self.settings.get("temperature"),
- "top_p": self.settings.get("top_p"),
- }
- json_payload = json.dumps(payload)
-
- token = self.settings.get('token')
-
- headers = {
- 'Content-Type': "application/json",
- 'Authorization': 'Bearer {}'.format(token),
- 'cache-control': "no-cache",
- }
- conn.request("POST", "/v1/edits", json_payload, headers)
- self.exec_net_request(connect=conn)
-
- def run(self):
- try:
- if (self.settings.get("max_tokens") + len(self.text)) > 4000:
- raise AssertionError("OpenAI accepts max. 4000 tokens, so the selected text and the max_tokens setting must be lower than 4000.")
- if not self.settings.has("token"):
- raise AssertionError("No token provided, you have to set the OpenAI token into the settings to make things work.")
- token = self.settings.get('token')
- if len(token) < 10:
- raise AssertionError("No token provided, you have to set the OpenAI token into the settings to make things work.")
- except Exception as ex:
- sublime.error_message("Exception\n" + str(ex))
- logging.exception("Exception: " + str(ex))
- return
-
- if self.mode == 'insertion': self.insert()
- if self.mode == 'edition': self.edit_f()
- if self.mode == 'completion':
- if self.settings.get('output_panel'):
- self.text = self.command
- if self.settings.get('multimarkdown'):
- self.text += ' format the answer with multimarkdown markup'
- self.complete()
+from .openai_worker import OpenAIWorker
class Openai(sublime_plugin.TextCommand):
- def on_input(self, edit, region, text, view, mode, input):
- worker_thread = OpenAIWorker(edit, region, text, view, mode=mode, command=input)
+ def on_input(self, region, text, view, mode, input):
+ from .openai_worker import OpenAIWorker # https://stackoverflow.com/a/52927102
+
+ worker_thread = OpenAIWorker(region, text, view, mode=mode, command=input)
worker_thread.start()
"""
@@ -188,7 +19,7 @@ def on_input(self, edit, region, text, view, mode, input):
"""
def run(self, edit, **kwargs):
settings = sublime.load_settings("openAI.sublime-settings")
- mode = kwargs.get('mode', 'completion')
+ mode = kwargs.get('mode', 'chat_completion')
# get selected text
region = ''
@@ -198,12 +29,10 @@ def run(self, edit, **kwargs):
text = self.view.substr(region)
+ # Checking that the user selected some text
try:
if region.__len__() < settings.get("minimum_selection_length"):
- if mode == 'completion':
- if not settings.get('output_panel'):
- raise AssertionError("Not enough text selected to complete the request, please expand the selection.")
- else:
+ if mode not in ('chat_completion', 'reset_chat_history', 'refresh_output_panel'):
raise AssertionError("Not enough text selected to complete the request, please expand the selection.")
except Exception as ex:
sublime.error_message("Exception\n" + str(ex))
@@ -211,15 +40,30 @@ def run(self, edit, **kwargs):
return
if mode == 'edition':
- sublime.active_window().show_input_panel("Request: ", "Comment the given code line by line", functools.partial(self.on_input, edit, region, text, self.view, mode), None, None)
-
+ sublime.active_window().show_input_panel("Request: ", "Comment the given code line by line", functools.partial(self.on_input, region, text, self.view, mode), None, None)
elif mode == 'insertion':
- worker_thread = OpenAIWorker(edit, region, text, self.view, mode, "")
+ worker_thread = OpenAIWorker(region, text, self.view, mode, "")
+ worker_thread.start()
+ elif mode == 'completion':
+ worker_thread = OpenAIWorker(region, text, self.view, mode, "")
worker_thread.start()
- else: # mode == `completion`
- if settings.get('output_panel'):
- sublime.active_window().show_input_panel("Question: ", "", functools.partial(self.on_input, edit, region, text, self.view, mode), None, None)
- else:
- worker_thread = OpenAIWorker(edit, region, text, self.view, mode, "")
- worker_thread.start()
+ elif mode == 'reset_chat_history':
+ Cacher().drop_all()
+ output_panel = sublime.active_window().find_output_panel("OpenAI Chat")
+ if output_panel is not None: # the panel may not exist yet in this window
+ output_panel.set_read_only(False)
+ region = sublime.Region(0, output_panel.size())
+ output_panel.erase(edit, region)
+ output_panel.set_read_only(True)
+ elif mode == 'refresh_output_panel':
+ from .outputpanel import SharedOutputPanelListener
+ window = sublime.active_window()
+ listener = SharedOutputPanelListener()
+ listener.refresh_output_panel(
+ window=window,
+ markdown=settings.get('markdown'),
+ syntax_path=settings.get('syntax_path')
+ )
+ listener.show_panel(window=window)
+ else: # mode 'chat_completion', always in panel
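+ # region and text are unused in chat mode, hence the dummy string values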
+ sublime.active_window().show_input_panel("Question: ", "", functools.partial(self.on_input, "region", "text", self.view, mode), None, None)
diff --git a/openai_worker.py b/openai_worker.py
new file mode 100644
index 0000000..427f6ef
--- /dev/null
+++ b/openai_worker.py
@@ -0,0 +1,234 @@
+import sublime, sublime_plugin
+import http.client
+import threading
+from .cacher import Cacher
+from .outputpanel import get_number_of_lines, SharedOutputPanelListener
+import json
+import logging
+
+
+class OpenAIWorker(threading.Thread):
+ message = {}
+
+ def __init__(self, region, text, view, mode, command):
+ self.region = region
+ self.text = text
+ self.view = view
+ self.mode = mode
+ self.command = command # optional
+ self.message = {"role": "user", "content": self.command, "name": "OpenAI_completion"}
+ settings = sublime.load_settings("openAI.sublime-settings")
+ self.settings = settings
+ self.proxy = settings.get('proxy')['address']
+ self.port = settings.get('proxy')['port']
+ super(OpenAIWorker, self).__init__()
+
+ def prompt_completion(self, completion):
+ completion = completion.replace("$", "\$")
+ if self.mode == 'insertion':
+ result = self.view.find(self.settings.get('placeholder'), 0, 1)
+ if result:
+ self.view.sel().clear()
+ self.view.sel().add(result)
+ # Replace the placeholder with the specified replacement text
+ self.view.run_command("insert_snippet", {"contents": completion})
+ return
+
+ if self.mode == 'chat_completion':
+ window = sublime.active_window()
+ ## FIXME: This setting applies only one way: none -> markdown
+ listener = SharedOutputPanelListener()
+ listener.refresh_output_panel(
+ window=window,
+ markdown=self.settings.get('markdown'),
+ syntax_path="Packages/Markdown/MultiMarkdown.sublime-syntax"
+ )
+ listener.show_panel(window=window)
+
+ if self.mode == 'completion':
+ region = self.view.sel()[0]
+ if region.a <= region.b:
+ region.a = region.b
+ else:
+ region.b = region.a
+
+ self.view.sel().clear()
+ self.view.sel().add(region)
+ # Replace the placeholder with the specified replacement text
+ self.view.run_command("insert_snippet", {"contents": completion})
+ return
+
+ if self.mode == 'edition': # it's just replacing all given text for now.
+ region = self.view.sel()[0]
+ self.view.run_command("insert_snippet", {"contents": completion})
+ return
+
+ def exec_net_request(self, connect: http.client.HTTPSConnection):
+ # TODO: Add a status bar "loading" indicator to make it obvious that we're waiting for the server response.
+ try:
+ res = connect.getresponse()
+ data = res.read()
+ status = res.status
+ data_decoded = data.decode('utf-8')
+ connect.close()
+ response = json.loads(data_decoded)
+
+ if self.mode == 'chat_completion':
+ Cacher().append_to_cache([response['choices'][0]['message']])
+ completion = ""
+ print(f"token number: {response['usage']['total_tokens']}")
+ else:
+ completion = response['choices'][0]['text']
+
+ completion = completion.strip() # Remove leading and trailing spaces
+ self.prompt_completion(completion)
+ except KeyError:
+ # TODO: Add status bar user notification for this action.
+ if self.mode == 'chat_completion' and response['error']['code'] == 'context_length_exceeded':
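+ # The chat no longer fits the model's context window: drop the two
+ # oldest question/answer pairs from the cache and retry the request.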
+ Cacher().drop_first(4)
+ self.chat_complete()
+ else:
+ sublime.error_message("Exception\n" + "The OpenAI response could not be decoded. There could be a problem on their side. Please look in the console for additional error info.")
+ logging.exception("Exception: " + str(data_decoded))
+ return
+ except Exception as ex:
+ sublime.error_message(f"Server Error: {str(status)}\n{ex}")
+ logging.exception("Exception: " + str(data_decoded))
+ return
+
+ def create_connection(self) -> http.client.HTTPSConnection:
+ if len(self.proxy) > 0:
+ connection = http.client.HTTPSConnection(host=self.proxy, port=self.port)
+ connection.set_tunnel("api.openai.com")
+ return connection
+ else:
+ return http.client.HTTPSConnection("api.openai.com")
+
+ def chat_complete(self):
+ cacher = Cacher()
+
+ conn = self.create_connection()
+
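+ # Each request replays the whole cached history after a fixed system prompt;
+ # long chats will eventually overflow the context and trigger drop_first().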
+ payload = {
+ # TODO: add a unique name for each output panel (e.g. per window)
+ "messages": [
+ {"role": "system", "content": "You are a code assistant."},
+ *cacher.read_all()
+ ],
+ "model": self.settings.get('chat_model'),
+ "temperature": self.settings.get("temperature"),
+ "max_tokens": self.settings.get("max_tokens"),
+ "top_p": self.settings.get("top_p"),
+ }
+
+ json_payload = json.dumps(payload)
+ token = self.settings.get('token')
+
+ headers = {
+ 'Content-Type': "application/json",
+ 'Authorization': f'Bearer {token}',
+ 'cache-control': "no-cache",
+ }
+ conn.request("POST", "/v1/chat/completions", json_payload, headers)
+ self.exec_net_request(connect=conn)
+
+ def complete(self):
+ conn = self.create_connection()
+
+ payload = {
+ "prompt": self.text,
+ "model": self.settings.get("model"),
+ "temperature": self.settings.get("temperature"),
+ "max_tokens": self.settings.get("max_tokens"),
+ "top_p": self.settings.get("top_p"),
+ "frequency_penalty": self.settings.get("frequency_penalty"),
+ "presence_penalty": self.settings.get("presence_penalty")
+ }
+ json_payload = json.dumps(payload)
+
+ token = self.settings.get('token')
+
+ headers = {
+ 'Content-Type': "application/json",
+ 'Authorization': 'Bearer {}'.format(token),
+ 'cache-control': "no-cache",
+ }
+ conn.request("POST", "/v1/completions", json_payload, headers)
+ self.exec_net_request(connect=conn)
+
+ def insert(self):
+ conn = self.create_connection()
+ parts = self.text.split(self.settings.get('placeholder'))
+ try:
+ if not len(parts) == 2:
+ raise AssertionError("There is no placeholder '" + self.settings.get('placeholder') + "' within the selected text. There should be exactly one.")
+ except Exception as ex:
+ sublime.error_message("Exception\n" + str(ex))
+ logging.exception("Exception: " + str(ex))
+ return
+
+ payload = {
+ "model": self.settings.get("model"),
+ "prompt": parts[0],
+ "suffix": parts[1],
+ "temperature": self.settings.get("temperature"),
+ "max_tokens": self.settings.get("max_tokens"),
+ "top_p": self.settings.get("top_p"),
+ "frequency_penalty": self.settings.get("frequency_penalty"),
+ "presence_penalty": self.settings.get("presence_penalty")
+ }
+ json_payload = json.dumps(payload)
+
+ token = self.settings.get('token')
+
+ headers = {
+ 'Content-Type': "application/json",
+ 'Authorization': 'Bearer {}'.format(token),
+ 'cache-control': "no-cache",
+ }
+ conn.request("POST", "/v1/completions", json_payload, headers)
+ self.exec_net_request(connect=conn)
+
+ def edit_f(self):
+ conn = self.create_connection()
+ payload = {
+ "model": self.settings.get('edit_model'),
+ "input": self.text,
+ "instruction": self.command,
+ "temperature": self.settings.get("temperature"),
+ "top_p": self.settings.get("top_p"),
+ }
+ json_payload = json.dumps(payload)
+
+ token = self.settings.get('token')
+
+ headers = {
+ 'Content-Type': "application/json",
+ 'Authorization': 'Bearer {}'.format(token),
+ 'cache-control': "no-cache",
+ }
+ conn.request("POST", "/v1/edits", json_payload, headers)
+ self.exec_net_request(connect=conn)
+
+ def run(self):
+ try:
+ # FIXME: It's better to have such check locally, but it's pretty complicated with all those different modes and models
+ # if (self.settings.get("max_tokens") + len(self.text)) > 4000:
+ # raise AssertionError("OpenAI accepts max. 4000 tokens, so the selected text and the max_tokens setting must be lower than 4000.")
+ if not self.settings.has("token"):
+ raise AssertionError("No token provided, you have to set the OpenAI token into the settings to make things work.")
+ token = self.settings.get('token')
+ if len(token) < 10:
+ raise AssertionError("No token provided, you have to set the OpenAI token into the settings to make things work.")
+ except Exception as ex:
+ sublime.error_message("Exception\n" + str(ex))
+ logging.exception("Exception: " + str(ex))
+ return
+
+ if self.mode == 'insertion': self.insert()
+ if self.mode == 'edition': self.edit_f()
+ if self.mode == 'completion': self.complete()
+ if self.mode == 'chat_completion':
+ Cacher().append_to_cache([self.message])
+ self.chat_complete()
diff --git a/outputpanel.py b/outputpanel.py
new file mode 100644
index 0000000..3cc2300
--- /dev/null
+++ b/outputpanel.py
@@ -0,0 +1,47 @@
+import sublime
+import sublime_plugin
+from .cacher import Cacher
+
+class SharedOutputPanelListener(sublime_plugin.EventListener):
+ OUTPUT_PANEL_NAME = "OpenAI Chat"
+
+ def get_output_panel(self, window: sublime.Window):
+ return window.find_output_panel(self.OUTPUT_PANEL_NAME) or window.create_output_panel(self.OUTPUT_PANEL_NAME)
+
+ def refresh_output_panel(self, window, markdown: bool, syntax_path: str):
+ output_panel = self.get_output_panel(window=window)
+ output_panel.set_read_only(False)
+ self.clear_output_panel(window)
+
+ if markdown: output_panel.set_syntax_file(syntax_path)
+
+ for line in Cacher().read_all():
+ if line['role'] == 'user':
+ output_panel.run_command('append', {'characters': '\n\n## Question\n\n'})
+ elif line['role'] == 'assistant':
+ ## This one is left here as there could be looong questions.
+ output_panel.run_command('append', {'characters': '\n\n## Answer\n\n'})
+
+ output_panel.run_command('append', {'characters': line['content']})
+
+ output_panel.set_read_only(True)
+ num_lines = get_number_of_lines(output_panel)
+ print(f'num_lines: {num_lines}')
+
+ ## Hardcoded to 10 lines from the end, just a completely random number.
+ ## TODO: Here's some complex scrolling logic based on the content (## Answer) required.
+ point = output_panel.text_point(num_lines - 10, 0)
+
+ output_panel.show_at_center(point)
+
+ def clear_output_panel(self, window):
+ output_panel = self.get_output_panel(window=window)
+ output_panel.run_command("select_all")
+ output_panel.run_command("right_delete")
+
+ def show_panel(self, window):
+ window.run_command("show_panel", {"panel": f"output.{self.OUTPUT_PANEL_NAME}"})
+
+def get_number_of_lines(view):
+ last_line_num = view.rowcol(view.size())[0] + 1
+ return last_line_num
\ No newline at end of file
diff --git a/static/chatgpt_completion/image1.png b/static/chatgpt_completion/image1.png
new file mode 100644
index 0000000..4d9e88b
Binary files /dev/null and b/static/chatgpt_completion/image1.png differ
diff --git a/static/chatgpt_completion/image2.png b/static/chatgpt_completion/image2.png
new file mode 100644
index 0000000..4350e5f
Binary files /dev/null and b/static/chatgpt_completion/image2.png differ
diff --git a/static/chatgpt_completion/image3.png b/static/chatgpt_completion/image3.png
new file mode 100644
index 0000000..1198f3f
Binary files /dev/null and b/static/chatgpt_completion/image3.png differ
diff --git a/static/image1.png b/static/simple_completion/image1.png
similarity index 100%
rename from static/image1.png
rename to static/simple_completion/image1.png
diff --git a/static/image2.png b/static/simple_completion/image2.png
similarity index 100%
rename from static/image2.png
rename to static/simple_completion/image2.png
diff --git a/static/image3.png b/static/simple_completion/image3.png
similarity index 100%
rename from static/image3.png
rename to static/simple_completion/image3.png
diff --git a/static/image4.png b/static/simple_completion/image4.png
similarity index 100%
rename from static/image4.png
rename to static/simple_completion/image4.png