Commit 637d4d6

Release notes updated
1 parent 3e4a37c commit 637d4d6

8 files changed, +29 -56 lines changed

README.md

Lines changed: 23 additions & 50 deletions
@@ -5,41 +5,32 @@
 
 Cursor level of AI assistance for Sublime Text. I mean it.
 
-Works with all OpenAI'ish API: [llama.cpp](https://github.com/ggerganov/llama.cpp) server, [ollama](https://ollama.com) or whatever third party LLM hosting.
+Works with all OpenAI'ish API: [llama.cpp](https://github.com/ggerganov/llama.cpp) server, [ollama](https://ollama.com) or whatever third party LLM hosting. Claude API support coming soon.
 
-![](static/media/ai_chat_right_phantom.png)
+> [!NOTE]
+> The 5.0.0 release is around the corner! Check out these [release notes](https://github.com/yaroslavyaroslav/OpenAI-sublime-text/blob/develop/messages/5.0.0.md) for details.
+
+![](static/media/ai_chat_left_full.png)
 
 ## Features
 
-- Code manipulation (append, insert and edit) selected code with OpenAI models.
-- **Phantoms** Get non-disruptive inline right in view answers from the model.
 - **Chat mode** powered by whatever model you'd like.
-- **gpt-o1 support**.
-- **[llama.cpp](https://github.com/ggerganov/llama.cpp)**'s server, **[Ollama](https://ollama.com)** and all the rest OpenAI'ish API compatible.
+- **gpt-o3-mini** and **gpt-o1** support.
+- **[llama.cpp](https://github.com/ggerganov/llama.cpp)**'s server, **[ollama](https://ollama.com)** and all the rest of the OpenAI'ish API-compatible services.
 - **Dedicated chat histories** and assistant settings per project.
 - **Ability to send whole files** or their parts as expanded context.
+- **Phantoms**: get non-disruptive inline answers right in the view.
 - Markdown syntax with code language syntax highlighting (Chat mode only).
-- Server Side Streaming (SSE) (i.e. you don't have to wait for ages till GPT-4 print out something).
+- Server-Sent Events (SSE) streaming support.
 - Various status bar info: model name, mode, sent/received tokens.
 - Proxy support.
 
-### ChatGPT completion demo
-
-https://github.com/yaroslavyaroslav/OpenAI-sublime-text/assets/16612247/37b98cc2-e9cd-46a6-ac5d-03845313096b
-
-> video sped up to 1.7x
-
----
-
-https://github.com/yaroslavyaroslav/OpenAI-sublime-text/assets/16612247/69f609f3-336d-48e8-a574-3cb7fda5822c
-
-> video sped up to 1.7x
-
 ## Requirements
 
 - Sublime Text 4
 - **llama.cpp** or **ollama** installed, _OR_
 - A remote LLM service provider API key, e.g. [OpenAI](https://platform.openai.com)
+- Anthropic API key [coming soon].

@@ -76,7 +67,7 @@ You can separate a chat history and assistant settings for a given project by ap
 {
     "settings": {
         "ai_assistant": {
-            "cache_prefix": "your_project_name"
+            "cache_prefix": "/absolute/path/to/project/"
         }
     }
 }
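
For reference, here is how the full project settings block looks with the new absolute-path prefix; a minimal sketch, assuming the structure shown in the diff above (the path itself is a placeholder you should point at your own project):

```json
{
    "settings": {
        "ai_assistant": {
            // placeholder: replace with the absolute path of your project
            "cache_prefix": "/absolute/path/to/project/"
        }
    }
}
```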
@@ -90,12 +81,12 @@ You can add a few things to your request:
 
 To perform the former, just select something within an active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk separated by a new line).
 
-To send the whole file(s) in advance to request you should `super+button1` on them to make all tabs of them to become visible in a **single view group** and then run `[New Message|Chat Model] with Sheets` command as shown on the screen below. Pay attention, that in given example only `README.md` and `4.0.0.md` will be sent to a server, but not a content of the `AI chat`.
+To append whole file(s) to a request, `super+button1` their tabs so they all become visible in a **single view group**, then run the `OpenAI: Add Sheets to Context` command. Sheets can be deselected with the same command.
 
-![](static/media/file_selection_example.png)
+You can check the number of added sheets in the status bar, and in the preview section when calling the `OpenAI: Chat Model Select` command.
+
+![](static/media/ai_selector_preview.png)
 
-> [!NOTE]
-> It's also doesn't matter whether the file persists on a disc or it's just a virtual buffer with a text in it, if they're selected, their content will be send either way.
 
 ### Image handling
 
@@ -112,39 +103,21 @@ It expects an absolute path to image to be selected in a buffer or stored in cli
 
 Phantom is the overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
 
-1. You can set `"prompt_mode": "phantom"` for AI assistant in its settings.
-2. [optional] Select some text to pass in context in to manipulate with.
-3. Hit `OpenAI: New Message` or `OpenAI: Chat Model Select` and ask whatever you'd like in popup input pane.
-4. Phantom will appear below the cursor position or the beginning of the selection while the streaming LLM answer occurs.
-5. You can apply actions to the llm prompt, they're quite self descriptive and follows behavior deprecated in buffer commands.
-6. You can hit `ctrl+c` to stop prompting same as with in `panel` mode.
-
-![](static/media/phantom_example.png)
-
+1. [optional] Select some text to pass as context to work with.
+2. Pick `Phantom` as the output mode in the `OpenAI: Chat Model Select` quick panel.
+3. You can apply actions to the LLM response; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
+4. You can hit `ctrl+c` to stop prompting, the same as in `panel` mode.
 
-> [!IMPORTANT]
-> Yet this is a standalone mode, i.e. an existing chat history won't be sent to a server on a run.
-
-> [!NOTE]
-> A more detailed manual, including various assistant configuration examples, can be found within the plugin settings.
-
-> [!WARNING]
-> The following in buffer commands are deprecated and will be removed in 5.0 release.
-> 1. [DEPRECATED] You can pick one of the following modes: `append`, `replace`, `insert`. They're quite self-descriptive. They should be set up in assistant settings to take effect.
-> 2. [DEPRECATED] Select some text (they're useless otherwise) to manipulate with and hit `OpenAI: New Message`.
-> 4. [DEPRECATED] The plugin will response accordingly with **appending**, **replacing** or **inserting** some text.
+![](static/media/phantom_actions.png)
 
 ### Other features
 
 ### Open Source models support (llama.cpp, ollama)
 
-1. Replace `"url"` setting of a given model to point to whatever host you're server running on (e.g.`"http://localhost:8080"`).
-2. ~~[Optional] Provide a `"token"` if your provider required one.~~ **Temporarily mandatory, see warning below.**
+1. Replace the `"url"` setting of a given model to point to whatever host your server is running on (e.g. `http://localhost:8080/v1/chat/completions`).
+2. Provide a `"token"` if your provider requires one.
 3. Tweak `"chat_model"` to a model of your choice and you're set (see the sketch below).
 
-> [!WARNING]
-> Due to a known issue, a token value of 10 or more characters is currently required even for unsecured servers. [More details here.](#workaround-for-64)
-
 > [!NOTE]
 > You can set both `url` and `token` either globally or per assistant instance, so you can freely switch between closed-source and open-source models within a single session.
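
As a quick illustration of steps 1-3 above, a self-hosted model entry might look like the sketch below. Only `"url"`, `"token"` and `"chat_model"` are named in the steps; the `"name"` field and the overall layout are assumptions for illustration:

```json
{
    // "name" and the overall layout are illustrative assumptions;
    // "url", "token" and "chat_model" are the settings named above
    "name": "llama.cpp (local)",
    "url": "http://localhost:8080/v1/chat/completions",
    "token": "",  // only if your provider requires one
    "chat_model": "any-model-your-server-exposes"
}
```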
@@ -193,7 +166,7 @@ You can setup it up by overriding the proxy property in the `OpenAI completion`
 > All selected code will be sent to the OpenAI servers (if not using custom API provider) for processing, so make sure you have all necessary permissions to do so.
 
 > [!NOTE]
-> This one was initially written at 80% by a GPT3.5 back then. I was there mostly for debugging purposes, rather than digging in into ST API. This is a pure magic, I swear!
+> Dedicated to GPT3.5, the one who initially wrote 80% of this back then. It felt like pure magic!
 
 [stars]: https://github.com/yaroslavyaroslav/OpenAI-sublime-text/stargazers
 [img-stars]: static/media/star-on-github.svg

messages/5.0.0.md

Lines changed: 6 additions & 6 deletions
@@ -2,19 +2,19 @@
 
 ## tldr;
 
-I got bored and rewrote the whole thing in rust completely. There's not that much brand new features so far, but a lot of them are coming, since the core of the pacakge is now much more reliable and less tangled. Here it is [btw](https://github.com/yaroslavyaroslav/llm_runner).
+I got bored and rewrote the whole thing in Rust completely. There aren't that many brand new features so far, but a lot of them are coming, since the core of the package is now much more reliable and less tangled. [Here it is btw](https://github.com/yaroslavyaroslav/llm_runner).
 
 ## Features
 
 1. The core of the plugin is implemented in Rust, thus it has become way faster and more reliable.
 2. Context passing enhancement:
-    - files/sheets passes as references now, i.e. all the changes made within are preserved in the next llm request
+    - files/sheets are passed as references now, i.e. all the changes made within them are preserved in the next llm request.
     - they're togglable now, i.e. you pick those that you want to include, call a command, and they are then passed along for the whole session until you toggle them back off.
-    - built in output panels contnet passing, e.g. build systems and lsp diagnostic outputs can be passed with a command.
-3. Model picker command now supports nested list flow (i.e. ListInputHandler), thus you can switch between view modes and the models on the fly. `"prompt_mode"` in model settings is ignored and can be deleted.
-4. AssistantSettings now provides `"api_type"`, where the options is `"plain_text"`, `"open_ai"` and `"antropic"` (not implemented). This is the ground work already done to provide claude and all the rest of the custom services support in thr nearest future. Please take a look at the asssitant settings part if you're curious about the details.
+    - built-in output panels content passing, e.g. build system and LSP diagnostic outputs can be passed with a command.
+3. The model picker command now supports a nested list flow (i.e. `ListInputHandler`), thus you can switch between view modes and models on the fly. `"prompt_mode"` in the model settings is ignored and can be deleted.
+4. `AssistantSettings` now provides `"api_type"`, where the options are `"plain_text"` [default], `"open_ai"` and `"antropic"` [not implemented]. This is the groundwork already done to provide Claude and all the rest of the custom services support in the nearest future (see the sketch after these notes). Please take a look at the assistant settings part if you're curious about the details.
 5. Chat history and the picked model can now be stored in an arbitrary folder.
-6. Functions support[not implemented yet], there're few built in functions provided to allow model to manage the code.
+6. Functions support; there are a few built-in functions provided to allow the model to manage code [replace_text_with_another_text, replace_text_for_whole_file, read_region_content, get_working_directory_content].
 
 ## Installation
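
To make point 4 concrete, an assistant settings entry could select the backend dialect through `"api_type"`; a hedged sketch, where only the `"api_type"` key and its three values come from the notes above and every other field is an illustrative assumption:

```json
{
    // only "api_type" and its values are from the release notes;
    // the other fields are illustrative assumptions
    "name": "my-assistant",
    "chat_model": "gpt-4o",
    "api_type": "open_ai"  // "plain_text" [default], "open_ai", "antropic" [not implemented]
}
```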

static/media/ai_chat_left_full.png (511 KB)

static/media/ai_selector_preview.png (49.6 KB)

static/media/editing_thumbnail.png (-333 KB, binary file not shown)

(-539 KB, binary file not shown)

static/media/panel_thumbnail.png (-967 KB, binary file not shown)

static/media/phantom_actions.png (23 KB)
