37 changes: 13 additions & 24 deletions gpt4all-chat/metadata/latestnews.md
@@ -1,26 +1,15 @@
 ## Latest News
 
-GPT4All v3.9.0 was released on February 4th. Changes include:
-
-* **LocalDocs Fix:** LocalDocs no longer shows an error on later messages with reasoning models.
-* **DeepSeek Fix:** DeepSeek-R1 reasoning (in 'think' tags) no longer appears in chat names and follow-up questions.
-* **Windows ARM Improvements:**
-    * Graphical artifacts on some SoCs have been fixed.
-    * A crash when adding a collection of PDFs to LocalDocs has been fixed.
-* **Template Parser Fixes:** Chat templates containing an unclosed comment no longer freeze GPT4All.
-* **New Models:** OLMoE and Granite MoE models are now supported.
-
-GPT4All v3.8.0 was released on January 30th. Changes include:
-
-* **Native DeepSeek-R1-Distill Support:** GPT4All now has robust support for the DeepSeek-R1 family of distillations.
-    * Several model variants are now available on the downloads page.
-    * Reasoning (wrapped in "think" tags) is displayed similarly to the Reasoner model.
-    * The DeepSeek-R1 Qwen pretokenizer is now supported, resolving the loading failure in previous versions.
-    * The model is now configured with a GPT4All-compatible prompt template by default.
-* **Chat Templating Overhaul:** The template parser has been *completely* replaced with one that has much better compatibility with common models.
-* **Code Interpreter Fixes:**
-    * An issue preventing the code interpreter from logging a single string in v3.7.0 has been fixed.
-    * The UI no longer freezes while the code interpreter is running a computation.
-* **Local Server Fixes:**
-    * An issue preventing the server from using LocalDocs after the first request since v3.5.0 has been fixed.
-    * System messages are now correctly hidden from the message history.
+GPT4All v3.10.0 was released on February 24th. Changes include:
+
+* **Remote Models:**
+    * The Add Model page now has a dedicated tab for remote model providers.
+    * Groq, OpenAI, and Mistral remote models are now easier to configure.
+* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.
+* **New Model:** The non-MoE Granite model is now supported.
+* **Translation Updates:**
+    * The Italian translation has been updated.
+    * The Simplified Chinese translation has been significantly improved.
+* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.
+* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.
+* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.
5 changes: 5 additions & 0 deletions gpt4all-chat/metadata/release.json
@@ -273,5 +273,10 @@
"version": "3.9.0",
"notes": "* **LocalDocs Fix:** LocalDocs no longer shows an error on later messages with reasoning models.\n* **DeepSeek Fix:** DeepSeek-R1 reasoning (in 'think' tags) no longer appears in chat names and follow-up questions.\n* **Windows ARM Improvements:**\n * Graphical artifacts on some SoCs have been fixed.\n * A crash when adding a collection of PDFs to LocalDocs has been fixed.\n* **Template Parser Fixes:** Chat templates containing an unclosed comment no longer freeze GPT4All.\n* **New Models:** OLMoE and Granite MoE models are now supported.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)"
},
{
"version": "3.10.0",
"notes": "* **Remote Models:**\n * The Add Model page now has a dedicated tab for remote model providers.\n * Groq, OpenAI, and Mistral remote models are now easier to configure.\n* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.\n* **New Model:** The non-MoE Granite model is now supported.\n* **Translation Updates:**\n * The Italian translation has been updated.\n * The Simplified Chinese translation has been significantly improved.\n* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.\n* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.\n* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)\n* Lil Bob (`@Junior2Ran`)\n* Riccardo Giovanetti (`@Harvester62`)"
}
]