
Commit d271dde

🎉 feat: Code Interpreter API and Agents Release (#4860)
* feat: Code Interpreter API & File Search Agent Uploads
  - chore: add back code files
  - wip: first pass, abstract key dialog
  - refactor: influence checkbox on key changes
  - refactor: update localization keys for 'execute code' to 'run code'
  - wip: run code button
  - refactor: add throwError parameter to loadAuthValues and getUserPluginAuthValue functions
  - feat: first pass, API tool calling
  - fix: handle missing toolId in callTool function and return 404 for non-existent tools
  - feat: show code outputs
  - fix: improve error handling in callTool function and log errors
  - fix: handle potential null value for filepath in attachment destructuring
  - fix: normalize language before rendering and prevent null return
  - fix: add loading indicator in RunCode component while executing code
  - feat: add support for conditional code execution in Markdown components
  - feat: attachments
  - refactor: remove bash
  - fix: pass abort signal to graph/run
  - refactor: debounce and rate limit tool call
  - refactor: increase debounce delay for execute function
  - feat: set code output attachments
  - feat: image attachments
  - refactor: apply message context
  - refactor: pass `partIndex`
  - feat: toolCall schema/model/methods
  - feat: block indexing
  - feat: get tool calls
  - chore: imports
  - chore: typing
  - chore: condense type imports
  - feat: get tool calls
  - fix: block indexing
  - chore: typing
  - refactor: update tool calls mapping to support multiple results
  - fix: add unique key to nav link for rendering
  - wip: first pass, tool call results
  - refactor: update query cache from successful tool call mutation
  - style: improve result switcher styling
  - chore: note on using `.toObject()`
  - feat: add agent_id field to conversation schema
  - chore: typing
  - refactor: rename agentMap to agentsMap for consistency
  - feat: Agent Name as chat input placeholder
  - chore: bump agents
  - 📦 chore: update @langchain dependencies to latest versions to match agents package
  - 📦 chore: update @librechat/agents dependency to version 1.8.0
  - fix: aborting agent stream removes sender
  - fix(bedrock): completion removes preset name label
  - refactor: remove direct file parameter to use req.file, add `processAgentFileUpload` for image uploads
  - feat: upload menu
  - feat: prime message_file resources
  - feat: implement conversation access validation in chat route
  - refactor: remove file parameter from processFileUpload and use req.file instead
  - feat: add savedMessageIds set to track saved message IDs in BaseClient, to prevent unnecessary double-write to db
  - feat: prevent duplicate message saves by checking savedMessageIds in AgentController
  - refactor: skip legacy RAG API handling for agents
  - feat: add files field to convoSchema
  - refactor: update request type annotations from Express.Request to ServerRequest in file processing functions
  - feat: track conversation files
  - fix: resendFiles, addPreviousAttachments handling
  - feat: add ID validation for session_id and file_id in download route
  - feat: entity_id for code file uploads/downloads
  - fix: code file edge cases
  - feat: delete related tool calls
  - feat: add stream rate handling for LLM configuration
  - feat: enhance system content with attached file information
  - fix: improve error logging in resource priming function

* WIP: PoC, sequential agents
  - WIP: PoC Sequential Agents, first pass content data + bump agents package
  - fix: package-lock
  - WIP: PoC, o1 support, refactor bufferString
  - feat: convertJsonSchemaToZod
  - fix: form issues and schema defining erroneous model
  - fix: max length issue on agent form instructions, limit conversation messages to sequential agents
  - feat: add abort signal support to createRun function and AgentClient
  - feat: PoC, hide prior sequential agent steps
  - fix: update parameter naming from config to metadata in event handlers for clarity, add model to usage data
  - refactor: use only last contentData, track model for usage data
  - chore: bump agents package
  - fix: content parts issue
  - refactor: filter contentParts to include tool calls and relevant indices
  - feat: show function calls
  - refactor: filter context messages to exclude tool calls when no tools are available to the agent
  - fix: ensure tool call content is not undefined in formatMessages
  - feat: add agent_id field to conversationPreset schema
  - feat: hide sequential agents
  - feat: increase upload toast duration to 10 seconds

* refactor: tool context handling & update Code API Key Dialog
  - feat: toolContextMap
  - chore: skipSpecs -> useSpecs
  - ci: fix handleTools tests
  - feat: API Key Dialog

* feat: Agent Permissions Admin Controls
  - feat: replace label with button for prompt permission toggle
  - feat: update agent permissions
  - feat: enable experimental agents and streamline capability configuration
  - feat: implement access control for agents and enhance endpoint menu items
  - feat: add welcome message for agent selection in localization
  - feat: add agents permission to access control and update version to 0.7.57

* fix: update types in useAssistantListMap and useMentions hooks for better null handling
* feat: mention agents
* fix: agent tool resource race conditions when deleting agent tool resource files
* feat: add error handling for code execution with user feedback
* refactor: rename AdminControls to AdminSettings for clarity
* style: add gap to button in AdminSettings for improved layout
* refactor: separate agent query hooks and check access to enable fetching
* fix: remove unused provider from agent initialization options; it creates an issue with custom endpoints
* refactor: remove redundant/deprecated modelOptions from AgentClient processes
* chore: update @librechat/agents to version 1.8.5 in package.json and package-lock.json
* fix: minor styling issues + agent panel uniformity
* fix: agent edge cases when the set endpoint is no longer defined
* refactor: remove unused cleanup function call from AppService
* fix: update link in ApiKeyDialog to point to pricing page
* fix: improve type handling and layout calculations in SidePanel component
* fix: add missing localization string for agent selection in SidePanel
* chore: form styling and localizations for upload file search/code interpreter
* fix: model selection placeholder logic in AgentConfig component
* style: agent capabilities
* fix: add localization for provider selection and improve dropdown styling in ModelPanel
* refactor: use gpt-4o-mini > gpt-3.5-turbo
* fix: agents configuration for loadDefaultInterface and update related tests
* feat: DALLE Agents support
1 parent feacb52 commit d271dde

37 files changed: +293 additions, -140 deletions

.env.example

Lines changed: 2 additions & 2 deletions
@@ -177,10 +177,10 @@ OPENAI_API_KEY=user_provided
 DEBUG_OPENAI=false

 # TITLE_CONVO=false
-# OPENAI_TITLE_MODEL=gpt-3.5-turbo
+# OPENAI_TITLE_MODEL=gpt-4o-mini

 # OPENAI_SUMMARIZE=true
-# OPENAI_SUMMARY_MODEL=gpt-3.5-turbo
+# OPENAI_SUMMARY_MODEL=gpt-4o-mini

 # OPENAI_FORCE_PROMPT=true

api/app/clients/OpenAIClient.js

Lines changed: 3 additions & 3 deletions
@@ -688,7 +688,7 @@ class OpenAIClient extends BaseClient {
   }

   initializeLLM({
-    model = 'gpt-3.5-turbo',
+    model = 'gpt-4o-mini',
     modelName,
     temperature = 0.2,
     presence_penalty = 0,
@@ -793,7 +793,7 @@ class OpenAIClient extends BaseClient {

     const { OPENAI_TITLE_MODEL } = process.env ?? {};

-    let model = this.options.titleModel ?? OPENAI_TITLE_MODEL ?? 'gpt-3.5-turbo';
+    let model = this.options.titleModel ?? OPENAI_TITLE_MODEL ?? 'gpt-4o-mini';
     if (model === Constants.CURRENT_MODEL) {
       model = this.modelOptions.model;
     }
@@ -982,7 +982,7 @@ ${convo}
     let prompt;

     // TODO: remove the gpt fallback and make it specific to endpoint
-    const { OPENAI_SUMMARY_MODEL = 'gpt-3.5-turbo' } = process.env ?? {};
+    const { OPENAI_SUMMARY_MODEL = 'gpt-4o-mini' } = process.env ?? {};
     let model = this.options.summaryModel ?? OPENAI_SUMMARY_MODEL;
     if (model === Constants.CURRENT_MODEL) {
       model = this.modelOptions.model;
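The title- and summary-model defaults in this file all resolve through the same nullish-coalescing chain: explicit option, then environment variable, then the new `gpt-4o-mini` fallback. A minimal sketch of that chain (`pickModel` is an illustrative helper, not part of the codebase):

```javascript
// Illustrative helper (not in the codebase): resolves a model the way the
// diff above does — explicit option first, then env var, then the default.
// `??` only skips null/undefined, so an explicit option always wins.
const pickModel = (optionValue, envValue, fallback = 'gpt-4o-mini') =>
  optionValue ?? envValue ?? fallback;

// Unset option and env fall through to the default:
console.log(pickModel(undefined, undefined)); // prints: gpt-4o-mini
// An explicit option wins over both:
console.log(pickModel('gpt-4o', 'gpt-4-turbo')); // prints: gpt-4o
```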

api/app/clients/llm/createLLM.js

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ const { isEnabled } = require('~/server/utils');
 *
 * @example
 * const llm = createLLM({
-*   modelOptions: { modelName: 'gpt-3.5-turbo', temperature: 0.2 },
+*   modelOptions: { modelName: 'gpt-4o-mini', temperature: 0.2 },
 *   configOptions: { basePath: 'https://example.api/path' },
 *   callbacks: { onMessage: handleMessage },
 *   openAIApiKey: 'your-api-key'

api/app/clients/memory/summaryBuffer.demo.js

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ const { ChatOpenAI } = require('@langchain/openai');
 const { getBufferString, ConversationSummaryBufferMemory } = require('langchain/memory');

 const chatPromptMemory = new ConversationSummaryBufferMemory({
-  llm: new ChatOpenAI({ modelName: 'gpt-3.5-turbo', temperature: 0 }),
+  llm: new ChatOpenAI({ modelName: 'gpt-4o-mini', temperature: 0 }),
   maxTokenLimit: 10,
   returnMessages: true,
 });

api/app/clients/specs/BaseClient.test.js

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ describe('BaseClient', () => {
   const options = {
     // debug: true,
     modelOptions: {
-      model: 'gpt-3.5-turbo',
+      model: 'gpt-4o-mini',
       temperature: 0,
     },
   };

api/app/clients/specs/OpenAIClient.test.js

Lines changed: 2 additions & 2 deletions
@@ -221,7 +221,7 @@ describe('OpenAIClient', () => {

   it('should set isChatCompletion based on useOpenRouter, reverseProxyUrl, or model', () => {
     client.setOptions({ reverseProxyUrl: null });
-    // true by default since default model will be gpt-3.5-turbo
+    // true by default since default model will be gpt-4o-mini
     expect(client.isChatCompletion).toBe(true);
     client.isChatCompletion = undefined;

@@ -230,7 +230,7 @@ describe('OpenAIClient', () => {
     expect(client.isChatCompletion).toBe(false);
     client.isChatCompletion = undefined;

-    client.setOptions({ modelOptions: { model: 'gpt-3.5-turbo' }, reverseProxyUrl: null });
+    client.setOptions({ modelOptions: { model: 'gpt-4o-mini' }, reverseProxyUrl: null });
     expect(client.isChatCompletion).toBe(true);
   });

api/app/clients/tools/structured/DALLE3.js

Lines changed: 28 additions & 8 deletions
@@ -19,6 +19,8 @@ class DALLE3 extends Tool {

     this.userId = fields.userId;
     this.fileStrategy = fields.fileStrategy;
+    /** @type {boolean} */
+    this.isAgent = fields.isAgent;
     if (fields.processFileURL) {
       /** @type {processFileURL} Necessary for output to contain all image metadata. */
       this.processFileURL = fields.processFileURL.bind(this);
@@ -108,6 +110,19 @@ class DALLE3 extends Tool {
     return `![generated image](${imageUrl})`;
   }

+  returnValue(value) {
+    if (this.isAgent === true && typeof value === 'string') {
+      return [value, {}];
+    } else if (this.isAgent === true && typeof value === 'object') {
+      return [
+        'DALL-E displayed an image. All generated images are already plainly visible, so don\'t repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.',
+        value,
+      ];
+    }
+
+    return value;
+  }
+
   async _call(data) {
     const { prompt, quality = 'standard', size = '1024x1024', style = 'vivid' } = data;
     if (!prompt) {
@@ -126,18 +141,23 @@ class DALLE3 extends Tool {
       });
     } catch (error) {
       logger.error('[DALL-E-3] Problem generating the image:', error);
-      return `Something went wrong when trying to generate the image. The DALL-E API may be unavailable:
-Error Message: ${error.message}`;
+      return this
+        .returnValue(`Something went wrong when trying to generate the image. The DALL-E API may be unavailable:
+Error Message: ${error.message}`);
     }

     if (!resp) {
-      return 'Something went wrong when trying to generate the image. The DALL-E API may be unavailable';
+      return this.returnValue(
+        'Something went wrong when trying to generate the image. The DALL-E API may be unavailable',
+      );
     }

     const theImageUrl = resp.data[0].url;

     if (!theImageUrl) {
-      return 'No image URL returned from OpenAI API. There may be a problem with the API or your configuration.';
+      return this.returnValue(
+        'No image URL returned from OpenAI API. There may be a problem with the API or your configuration.',
+      );
     }

     const imageBasename = getImageBasename(theImageUrl);
@@ -157,11 +177,11 @@ Error Message: ${error.message}`;

     try {
       const result = await this.processFileURL({
-        fileStrategy: this.fileStrategy,
-        userId: this.userId,
         URL: theImageUrl,
-        fileName: imageName,
         basePath: 'images',
+        userId: this.userId,
+        fileName: imageName,
+        fileStrategy: this.fileStrategy,
         context: FileContext.image_generation,
       });

@@ -175,7 +195,7 @@ Error Message: ${error.message}`;
       this.result = `Failed to save the image locally. ${error.message}`;
     }

-    return this.result;
+    return this.returnValue(this.result);
   }
 }
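The new `returnValue` wrapper standardizes what agent runs receive from a tool: a `[content, artifact]` tuple instead of a bare value, so the model-facing text is kept separate from the UI artifact. A minimal sketch of the pattern (the class name and notice string are illustrative, not the actual DALLE3 implementation):

```javascript
// Sketch of the agent tuple-return pattern (illustrative, not the real class).
class AgentToolSketch {
  constructor({ isAgent = false } = {}) {
    this.isAgent = isAgent;
  }

  returnValue(value) {
    if (this.isAgent === true && typeof value === 'string') {
      // Plain strings (usually error messages) pair with an empty artifact
      return [value, {}];
    } else if (this.isAgent === true && typeof value === 'object') {
      // Artifacts pair with a short model-facing notice
      return ['Image displayed to the user.', value];
    }
    // Legacy (non-agent) callers still get the raw value
    return value;
  }
}

const agentTool = new AgentToolSketch({ isAgent: true });
console.log(agentTool.returnValue('API unavailable')); // prints: [ 'API unavailable', {} ]

const legacyTool = new AgentToolSketch({ isAgent: false });
console.log(legacyTool.returnValue('API unavailable')); // prints: API unavailable
```

Routing every exit path (errors included) through one wrapper is what lets the agent callbacks treat all tool outputs uniformly.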

api/app/clients/tools/util/handleTools.js

Lines changed: 5 additions & 3 deletions
@@ -146,11 +146,12 @@ const loadToolWithAuth = (userId, authFields, ToolConstructor, options = {}) =>
 const loadTools = async ({
   user,
   model,
-  functions = true,
-  returnMap = false,
+  isAgent,
+  useSpecs,
   tools = [],
   options = {},
-  useSpecs,
+  functions = true,
+  returnMap = false,
 }) => {
   const toolConstructors = {
     calculator: Calculator,
@@ -183,6 +184,7 @@
   }

   const imageGenOptions = {
+    isAgent,
     req: options.req,
     fileStrategy: options.fileStrategy,
     processFileURL: options.processFileURL,
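The reordered `loadTools` signature groups the defaulted flags (`functions`, `returnMap`) after the required fields, so call sites only pass what they need. A stub showing the destructuring behavior (the stub merely echoes its resolved arguments; the real `loadTools` constructs tool instances):

```javascript
// Stub of the reordered signature (echoes resolved args; illustrative only).
const loadToolsStub = ({
  user,
  model,
  isAgent,
  useSpecs,
  tools = [],
  options = {},
  functions = true,
  returnMap = false,
}) => ({ user, model, isAgent, useSpecs, tools, options, functions, returnMap });

// Callers that omit the flags get the defaults:
const resolved = loadToolsStub({ user: 'u1', model: 'gpt-4o-mini', isAgent: true });
console.log(resolved.functions, resolved.returnMap); // prints: true false
```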

api/server/controllers/agents/callbacks.js

Lines changed: 31 additions & 6 deletions
@@ -1,4 +1,4 @@
-const { Tools, StepTypes } = require('librechat-data-provider');
+const { Tools, StepTypes, imageGenTools } = require('librechat-data-provider');
 const {
   EnvVar,
   GraphEvents,
@@ -191,16 +191,41 @@ function createToolEndCallback({ req, res, artifactPromises }) {
         return;
       }

+      if (imageGenTools.has(output.name) && output.artifact) {
+        artifactPromises.push(
+          (async () => {
+            const fileMetadata = Object.assign(output.artifact, {
+              messageId: metadata.run_id,
+              toolCallId: output.tool_call_id,
+              conversationId: metadata.thread_id,
+            });
+            if (!res.headersSent) {
+              return fileMetadata;
+            }
+
+            if (!fileMetadata) {
+              return null;
+            }
+
+            res.write(`event: attachment\ndata: ${JSON.stringify(fileMetadata)}\n\n`);
+            return fileMetadata;
+          })().catch((error) => {
+            logger.error('Error processing code output:', error);
+            return null;
+          }),
+        );
+        return;
+      }
+
       if (output.name !== Tools.execute_code) {
         return;
       }

-      const { tool_call_id, artifact } = output;
-      if (!artifact.files) {
+      if (!output.artifact.files) {
         return;
       }

-      for (const file of artifact.files) {
+      for (const file of output.artifact.files) {
         const { id, name } = file;
         artifactPromises.push(
           (async () => {
@@ -213,10 +238,10 @@ function createToolEndCallback({ req, res, artifactPromises }) {
             id,
             name,
             apiKey: result[EnvVar.CODE_API_KEY],
-            toolCallId: tool_call_id,
             messageId: metadata.run_id,
-            session_id: artifact.session_id,
+            toolCallId: output.tool_call_id,
             conversationId: metadata.thread_id,
+            session_id: output.artifact.session_id,
           });
           if (!res.headersSent) {
             return fileMetadata;
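The attachment path writes one server-sent event per artifact, guarded by `res.headersSent`. A minimal sketch of that framing against a fake Express-style response (the helper name and fake `res` are illustrative; the event name and frame layout mirror the diff above):

```javascript
// Sketch of the SSE `attachment` frame written in createToolEndCallback.
// An SSE frame is "event: <name>\ndata: <payload>\n\n".
function sendAttachment(res, fileMetadata) {
  if (!res.headersSent) {
    // Stream not started yet: just hand the metadata back to the caller
    return fileMetadata;
  }
  res.write(`event: attachment\ndata: ${JSON.stringify(fileMetadata)}\n\n`);
  return fileMetadata;
}

// Fake Express-style response for demonstration:
const frames = [];
const res = { headersSent: true, write: (chunk) => frames.push(chunk) };
sendAttachment(res, { messageId: 'run_1', toolCallId: 'call_1' });
console.log(frames[0].startsWith('event: attachment\n')); // prints: true
```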

api/server/services/AppService.js

Lines changed: 0 additions & 2 deletions
@@ -9,7 +9,6 @@ const handleRateLimits = require('./Config/handleRateLimits');
 const { loadDefaultInterface } = require('./start/interface');
 const { azureConfigSetup } = require('./start/azureOpenAI');
 const { initializeRoles } = require('~/models/Role');
-const { cleanup } = require('./cleanup');
 const paths = require('~/config/paths');

 /**
@@ -19,7 +18,6 @@ const paths = require('~/config/paths');
  * @param {Express.Application} app - The Express application object.
  */
 const AppService = async (app) => {
-  cleanup();
   await initializeRoles();
   /** @type {TCustomConfig}*/
   const config = (await loadCustomConfig()) ?? {};
