
Commit 1fb03f9

v5.6.24

1 parent ae334b8

3 files changed, +19 -5 lines changed

deno.json

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@nshiab/simple-data-analysis",
-  "version": "5.6.23",
+  "version": "5.6.24",
   "exports": {
     ".": "./src/index.ts"
   },
```

deno.lock

Lines changed: 2 additions & 0 deletions
Some generated files are not rendered by default.

llm.md

Lines changed: 16 additions & 4 deletions
````diff
@@ -852,7 +852,7 @@ This method does not support tables containing geometries.
 ##### Signature
 
 ```typescript
-async aiRowByRow(column: string, newColumn: string, prompt: string, options?: { batchSize?: number; concurrent?: number; cache?: boolean; test?: (dataPoint: unknown) => any; retry?: number; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean; rateLimitPerMinute?: number; clean?: (response: unknown) => any }): Promise<void>;
+async aiRowByRow(column: string, newColumn: string, prompt: string, options?: { batchSize?: number; concurrent?: number; cache?: boolean; test?: (dataPoint: unknown) => any; retry?: number; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean; rateLimitPerMinute?: number; clean?: (response: unknown) => any; contextWindow?: number }): Promise<void>;
 ```
 
 ##### Parameters
@@ -894,6 +894,9 @@ async aiRowByRow(column: string, newColumn: string, prompt: string, options?: {
   including the full prompt sent to the AI. Defaults to `false`.
 - **`options.clean`**: - A function to clean the AI's response before testing,
   caching, and storing. Defaults to `undefined`.
+- **`options.contextWindow`**: - An option to specify the context window size
+  for Ollama models. By default, Ollama sets this depending on the model, which
+  can be lower than the actual maximum context window size of the model.
 
 ##### Returns
 
````

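The `contextWindow` option added here matters because Ollama's per-model default context window can silently truncate long prompts when many rows are batched together. A rough pre-flight estimate can help pick a value before calling `aiRowByRow`; the sketch below uses a ~4-characters-per-token heuristic, which is an illustrative assumption, not part of the library:

```typescript
// Rough heuristic: ~4 characters per token for English text.
// This is an approximation for sizing only, not a real tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Estimate whether a prompt plus a batch of row values would fit
// inside a given context window size.
function fitsContextWindow(
  prompt: string,
  batch: string[],
  contextWindow: number,
): boolean {
  const combined = prompt + batch.join("\n");
  return estimateTokens(combined) <= contextWindow;
}

const prompt = "Give me the continent for each of these countries.";
const batch = ["Canada", "France", "Japan"];
console.log(fitsContextWindow(prompt, batch, 2048)); // true
```

If the estimate comes out larger than the model's default window, passing a bigger `contextWindow` (or a smaller `batchSize`) is the likely fix.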
````diff
@@ -971,7 +974,7 @@ This method does not support tables containing geometries.
 ##### Signature
 
 ```typescript
-async aiEmbeddings(column: string, newColumn: string, options?: { createIndex?: boolean; concurrent?: number; cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean; rateLimitPerMinute?: number }): Promise<void>;
+async aiEmbeddings(column: string, newColumn: string, options?: { createIndex?: boolean; concurrent?: number; cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean; rateLimitPerMinute?: number; contextWindow?: number }): Promise<void>;
 ```
 
 ##### Parameters
@@ -1005,6 +1008,9 @@ async aiEmbeddings(column: string, newColumn: string, options?: { createIndex?:
 - **`options.ollama`**: - If `true`, uses Ollama. Defaults to the `OLLAMA`
   environment variable. If you want your Ollama instance to be used, you can
   pass it here too.
+- **`options.contextWindow`**: - An option to specify the context window size
+  for Ollama models. By default, Ollama sets this depending on the model, which
+  can be lower than the actual maximum context window size of the model.
 - **`options.verbose`**: - If `true`, logs additional debugging information.
   Defaults to `false`.
 
````

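`aiEmbeddings` stores each text as a numeric vector; the value of those vectors comes from comparing them, commonly with cosine similarity. A minimal sketch of that comparison (the library's actual storage, index, and distance metric are not shown here and may differ):

```typescript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|). Returns a value in [-1, 1],
// where 1 means the vectors point in the same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```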
````diff
@@ -1063,7 +1069,7 @@ up processing. If the index already exists, it will not be recreated.
 ##### Signature
 
 ```typescript
-async aiVectorSimilarity(text: string, column: string, nbResults: number, options?: { createIndex?: boolean; outputTable?: string; cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean }): Promise<SimpleTable>;
+async aiVectorSimilarity(text: string, column: string, nbResults: number, options?: { createIndex?: boolean; outputTable?: string; cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; contextWindow?: number; verbose?: boolean }): Promise<SimpleTable>;
 ```
 
 ##### Parameters
@@ -1097,6 +1103,9 @@ async aiVectorSimilarity(text: string, column: string, nbResults: number, option
   pass it here too.
 - **`options.verbose`**: - If `true`, logs additional debugging information.
   Defaults to `false`.
+- **`options.contextWindow`**: - An option to specify the context window size
+  for Ollama models. By default, Ollama sets this depending on the model, which
+  can be lower than the actual maximum context window size of the model.
 
 ##### Returns
 
````

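`aiVectorSimilarity` returns the `nbResults` rows whose embeddings are closest to the query text. Conceptually that is a sort-and-slice over similarity scores, sketched below with hypothetical precomputed scores (the real method computes these from embeddings and may use an index):

```typescript
// Keep the nbResults rows with the highest similarity score,
// mimicking what a vector-similarity search returns conceptually.
function topN<T extends { score: number }>(rows: T[], nbResults: number): T[] {
  // Copy before sorting so the input array is left untouched.
  return [...rows].sort((x, y) => y.score - x.score).slice(0, nbResults);
}

const rows = [
  { text: "apple pie", score: 0.91 },   // hypothetical scores
  { text: "car engine", score: 0.12 },
  { text: "fruit tart", score: 0.87 },
];
console.log(topN(rows, 2).map((r) => r.text).join(", ")); // apple pie, fruit tart
```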
````diff
@@ -1161,7 +1170,7 @@ and time. Remember to add `.journalism-cache` to your `.gitignore`.
 ##### Signature
 
 ```typescript
-async aiQuery(prompt: string, options?: { cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean }): Promise<void>;
+async aiQuery(prompt: string, options?: { cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; contextWindow?: number; verbose?: boolean }): Promise<void>;
 ```
 
 ##### Parameters
@@ -1184,6 +1193,9 @@ async aiQuery(prompt: string, options?: { cache?: boolean; model?: string; apiKe
 - **`options.ollama`**: - If `true`, uses Ollama. Defaults to the `OLLAMA`
   environment variable. If you want your Ollama instance to be used, you can
   pass it here too.
+- **`options.contextWindow`**: - An option to specify the context window size
+  for Ollama models. By default, Ollama sets this depending on the model, which
+  can be lower than the actual maximum context window size of the model.
 - **`options.verbose`**: - If `true`, logs additional debugging information,
   including the full prompt sent to the AI. Defaults to `false`.
 
````

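All four methods touched by this commit accept the same Ollama-related options, including the new `contextWindow`, so a caller can define them once and reuse them. The interface below is a hand-transcribed subset of the signatures in this diff (a sketch, not the library's exported type), and the model name is hypothetical:

```typescript
// Subset of the options shared by aiRowByRow, aiEmbeddings,
// aiVectorSimilarity and aiQuery after this commit (hand-transcribed
// from the diff above; not an official exported type).
interface SharedAiOptions {
  cache?: boolean;
  model?: string;
  ollama?: boolean;
  contextWindow?: number;
  verbose?: boolean;
}

const options: SharedAiOptions = {
  ollama: true,
  model: "llama3.2",   // hypothetical model name
  contextWindow: 8192, // override Ollama's per-model default
  cache: true,
};

console.log(options.contextWindow); // 8192
```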