@@ -196,7 +196,7 @@ await sdb.selectTables(employeesTable);
#### `getTableNames`

-Returns an array of all table names in the database.
+Returns an array of all table names in the database, sorted alphabetically.

##### Signature
@@ -218,7 +218,8 @@ console.log(tableNames); // Output: ["employees", "customers"]
#### `logTableNames`

-Logs the names of all tables in the database to the console.
+Logs the names of all tables in the database to the console, sorted
+alphabetically.

##### Signature
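> Note: a minimal usage sketch for these two methods, assuming the `sdb` instance already used in the surrounding documentation examples and illustrative table names that are not taken from the docs.

```typescript
// Hedged sketch: assumes an existing SimpleDB instance `sdb` that already
// holds two tables; the table names shown here are illustrative only.
const tableNames = await sdb.getTableNames();
console.log(tableNames); // sorted alphabetically, e.g. ["customers", "employees"]

// logTableNames logs the same alphabetically sorted list to the console.
await sdb.logTableNames();
```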
@@ -852,7 +853,7 @@ This method does not support tables containing geometries.
##### Signature

```typescript
-async aiRowByRow(column: string, newColumn: string, prompt: string, options?: { batchSize?: number; concurrent?: number; cache?: boolean; test?: (dataPoint: unknown) => any; retry?: number; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean; rateLimitPerMinute?: number; clean?: (response: unknown) => any; contextWindow?: number }): Promise<void>;
+async aiRowByRow(column: string, newColumn: string, prompt: string, options?: { batchSize?: number; concurrent?: number; cache?: boolean; test?: (dataPoint: unknown) => any; retry?: number; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; verbose?: boolean; rateLimitPerMinute?: number; clean?: (response: string) => any; contextWindow?: number; thinkingBudget?: number; extraInstructions?: string }): Promise<void>;
```

##### Parameters
@@ -892,11 +893,17 @@ async aiRowByRow(column: string, newColumn: string, prompt: string, options?: {
pass it here too.
- **`options.verbose`**: - If `true`, logs additional debugging information,
including the full prompt sent to the AI. Defaults to `false`.
- - **`options.clean`**: - A function to clean the AI's response before testing,
- caching, and storing. Defaults to `undefined`.
+ - **`options.clean`**: - A function to clean the AI's response before JSON
+ parsing, testing, caching, and storing. Defaults to `undefined`.
- **`options.contextWindow`**: - An option to specify the context window size
for Ollama models. By default, Ollama sets this depending on the model, which
can be lower than the actual maximum context window size of the model.
+ - **`options.thinkingBudget`**: - Sets the reasoning token budget: 0 to disable
+ (default, though some models may reason regardless), -1 for a dynamic budget,
+ or > 0 for a fixed budget. For Ollama models, any non-zero value simply
+ enables reasoning, ignoring the specific budget amount.
+ - **`options.extraInstructions`**: - Additional instructions to append to the
+ prompt, providing more context or guidance for the AI.

##### Returns
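> Note: a hedged sketch of a call exercising the new `thinkingBudget` and `extraInstructions` options documented in this hunk. The `table` instance, column names, and prompt are hypothetical; only the option names and types come from the updated signature.

```typescript
// Hedged sketch: `table` stands in for a table instance that exposes aiRowByRow;
// the column names and prompt are illustrative only.
await table.aiRowByRow(
  "country",
  "continent",
  "Return the continent of this country.",
  {
    cache: true,
    // New option documented above: 0 disables reasoning (default),
    // -1 requests a dynamic budget, > 0 sets a fixed token budget.
    thinkingBudget: -1,
    // New option documented above: appended to the prompt as extra guidance.
    extraInstructions: "Answer with a single word.",
    // Per the updated signature, clean now receives the response as a string.
    clean: (response: string) => response.trim(),
  },
);
```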
@@ -1170,7 +1177,7 @@ and time. Remember to add `.journalism-cache` to your `.gitignore`.
##### Signature

```typescript
-async aiQuery(prompt: string, options?: { cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; contextWindow?: number; verbose?: boolean }): Promise<void>;
+async aiQuery(prompt: string, options?: { cache?: boolean; model?: string; apiKey?: string; vertex?: boolean; project?: string; location?: string; ollama?: boolean | Ollama; contextWindow?: number; thinkingBudget?: number; verbose?: boolean }): Promise<void>;
```

##### Parameters
@@ -1196,6 +1203,10 @@ async aiQuery(prompt: string, options?: { cache?: boolean; model?: string; apiKe
- **`options.contextWindow`**: - An option to specify the context window size
for Ollama models. By default, Ollama sets this depending on the model, which
can be lower than the actual maximum context window size of the model.
+ - **`options.thinkingBudget`**: - Sets the reasoning token budget: 0 to disable
+ (default, though some models may reason regardless), -1 for a dynamic budget,
+ or > 0 for a fixed budget. For Ollama models, any non-zero value simply
+ enables reasoning, ignoring the specific budget amount.
- **`options.verbose`**: - If `true`, logs additional debugging information,
including the full prompt sent to the AI. Defaults to `false`.
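> Note: a hedged sketch of `aiQuery` with the new `thinkingBudget` option added in this hunk. The `table` instance and the prompt are hypothetical; only the option names and types come from the updated signature.

```typescript
// Hedged sketch: `table` is a hypothetical table instance exposing aiQuery,
// and the prompt is illustrative only.
await table.aiQuery(
  "Keep only the rows where the invoice is overdue.",
  {
    cache: true,
    // New option documented above: -1 asks for a dynamic reasoning budget.
    thinkingBudget: -1,
    verbose: true,
  },
);
```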