## [2025-10-30]

### llama-index-core [0.14.7]

- Feat/serpex tool integration ([#20141](https://github.com/run-llama/llama_index/pull/20141))
- Fix outdated error message about setting LLM ([#20157](https://github.com/run-llama/llama_index/pull/20157))
- Fixing some recently failing tests ([#20165](https://github.com/run-llama/llama_index/pull/20165))
- fix api docs build ([#20180](https://github.com/run-llama/llama_index/pull/20180))

### llama-index-embeddings-voyageai [0.5.0]

- Updating the VoyageAI integration ([#20073](https://github.com/run-llama/llama_index/pull/20073))

### llama-index-llms-anthropic [0.10.0]

- feat: integrate anthropic with tool call block ([#20100](https://github.com/run-llama/llama_index/pull/20100))

### llama-index-llms-bedrock-converse [0.10.7]

- feat: Add support for Bedrock Guardrails streamProcessingMode ([#20150](https://github.com/run-llama/llama_index/pull/20150))
- bedrock structured output optional force ([#20158](https://github.com/run-llama/llama_index/pull/20158))

### llama-index-llms-fireworks [0.4.5]

- Update FireworksAI models ([#20169](https://github.com/run-llama/llama_index/pull/20169))

### llama-index-llms-mistralai [0.9.0]

- feat: mistralai integration with tool call block ([#20103](https://github.com/run-llama/llama_index/pull/20103))

### llama-index-llms-ollama [0.9.0]

- feat: integrate ollama with tool call block ([#20097](https://github.com/run-llama/llama_index/pull/20097))

### llama-index-llms-openai [0.6.6]

- Allow setting temp of gpt-5-chat ([#20156](https://github.com/run-llama/llama_index/pull/20156))

### llama-index-readers-confluence [0.5.0]

- feat(confluence): make SVG processing optional to fix pycairo install… ([#20115](https://github.com/run-llama/llama_index/pull/20115))

### llama-index-readers-github [0.9.0]

- Add GitHub App authentication support ([#20106](https://github.com/run-llama/llama_index/pull/20106))

### llama-index-retrievers-bedrock [0.5.1]

- Fixing some recently failing tests ([#20165](https://github.com/run-llama/llama_index/pull/20165))

### llama-index-tools-serpex [0.1.0]

- Feat/serpex tool integration ([#20141](https://github.com/run-llama/llama_index/pull/20141))
- add missing toml info ([#20186](https://github.com/run-llama/llama_index/pull/20186))

### llama-index-vector-stores-couchbase [0.6.0]

- Add Hyperscale and Composite Vector Indexes support for Couchbase vector-store ([#20170](https://github.com/run-llama/llama_index/pull/20170))

## [2025-10-26]
