
Conversation

@christian-bromann (Member) commented Jun 6, 2025

Description

This PR enhances type safety for model IDs in core OpenAI and Anthropic integrations by leveraging official SDK types. This provides better IntelliSense support and compile-time validation while maintaining full backward compatibility.

Changes

Type Definitions Added

  • AnthropicMessagesModelId: Uses Anthropic.Model from the official Anthropic SDK
  • OpenAIChatModelId: Uses OpenAIClient.ChatModel from the official OpenAI SDK
  • OpenAIEmbeddingModelId: Uses OpenAIClient.EmbeddingModel from the official OpenAI SDK
  • OpenAIImageModelId: Uses OpenAIClient.ImageModel from the official OpenAI SDK
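
For reference, the aliases take roughly the following shape (file locations and exact formatting here are illustrative; the SDK type names are the ones listed above):

```ts
import type Anthropic from "@anthropic-ai/sdk";
import type OpenAIClient from "openai";

// Each alias unions the official SDK type with a plain-string escape hatch
// so custom model names keep working (see Benefits below).
export type AnthropicMessagesModelId =
  | Anthropic.Model
  | (string & NonNullable<unknown>);

export type OpenAIChatModelId =
  | OpenAIClient.ChatModel
  | (string & NonNullable<unknown>);

export type OpenAIEmbeddingModelId =
  | OpenAIClient.EmbeddingModel
  | (string & NonNullable<unknown>);

export type OpenAIImageModelId =
  | OpenAIClient.ImageModel
  | (string & NonNullable<unknown>);
```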

Benefits

  1. Enhanced Developer Experience: Full IntelliSense support with autocomplete for all supported models
  2. Type Safety: Compile-time validation prevents typos in model names
  3. Automatic Updates: Model types stay current with official SDK releases
  4. Documentation: All type definitions include links to official documentation
  5. Backward Compatibility: All types include (string & NonNullable<unknown>) for custom models
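
A minimal illustration of point 5, assuming the OpenAIChatModelId alias sketched above (the custom ID is made up):

```ts
// Known IDs are suggested by IntelliSense and validated against the SDK union…
const known: OpenAIChatModelId = "gpt-4o-mini";

// …while arbitrary model names still compile, because the alias also accepts
// any string. Intersecting with NonNullable<unknown> keeps the union from
// collapsing to plain `string`, which would disable literal autocomplete.
const custom: OpenAIChatModelId = "my-org/fine-tuned-model";
```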

Implementation Strategy

  • Official SDK First: Leverages types directly from @anthropic-ai/sdk and openai packages
  • Zero Maintenance: Model lists automatically update with SDK updates
  • Flexible: Supports custom model names via string union types
  • Non-Breaking: Existing code continues to work without changes

Backward Compatibility

  • No breaking changes: all existing code continues to work unchanged
  • Custom model support: the string union allows any custom model name
  • Deprecation handled: the deprecated modelName parameters remain supported

Testing

  • All existing tests pass
  • Type checking validates model ID usage across all affected components
  • IntelliSense provides comprehensive autocomplete for supported models
  • Custom model names work seamlessly alongside typed models
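
A short usage sketch of what this looks like in practice (the model strings below are only examples; any custom ID would also type-check):

```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// The model option now autocompletes known IDs and is checked at compile time.
const gpt = new ChatOpenAI({ model: "gpt-4o-mini" });
const claude = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

// The deprecated modelName parameter keeps working as before.
const legacy = new ChatOpenAI({ modelName: "gpt-4o-mini" });

// Custom or newly released model names keep compiling thanks to the
// string escape hatch in the aliases (illustrative fine-tune ID).
const custom = new ChatOpenAI({ model: "ft:gpt-4o-mini:my-team::abc123" });
```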


@hntrl (Member) left a comment

Thanks for the PR @christian-bromann!

I'm not too sure we want to inline all available models in a union type that we maintain. What I'm thinking is that it could lead to false negatives, where someone assumes we don't support a model when a new one is released because it isn't typed (even though we just pass the request on to the first-party SDK).

I'd suggest checking whether the provider SDKs we use export a string union with all of their model types and using that instead. That way, whenever the provider SDK upgrades, our types upgrade too. (It could be an 'ugly' type that just picks the field off the SDK's invoke params.)
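
Something like this, for example (type path written from memory, so treat it as a sketch rather than the exact incantation):

```ts
import type OpenAIClient from "openai";

// Derive the model ID type from the SDK's own request params instead of
// maintaining a union ourselves; it then tracks whatever the SDK ships.
type OpenAIChatModelId =
  OpenAIClient.Chat.ChatCompletionCreateParams["model"];
```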

@christian-bromann (Member, Author) commented Jun 17, 2025

@hntrl good call! I was able to find proper types within the OpenAI and Anthropic SDKs. Shall I revert the changes for the other packages? This could indeed become a maintenance nightmare.

An alternative would be to use the OpenRouter Models API (ref) to generate types from its response into an extra package we consume everywhere. I don't think the added complexity justifies the DX gain here, though.

Thoughts?

@hntrl (Member) commented Jun 17, 2025

  1. Unless we use an SDK for the other packages that exports a model type, go ahead and revert them.
  2. On the OpenRouter API: yeah, probably not.

Add comprehensive type definitions for model IDs across multiple providers:

  • Chat models: Add DeepInfraChatModelId, CerebrasChatModelId, FireworksChatModelId, GroqChatModelId, TogetherAIChatModelId, PerplexityLanguageModelId, MistralChatModelId, OpenAIChatModelId, GoogleGenerativeAIModelId, BedrockChatModelId, ZhipuAIModelId, and CohereChatModelId
  • Embedding models: Add FireworksEmbeddingModelId, TogetherAIEmbeddingModelId, JinaEmbeddingsModelId, PremEmbeddingsModelId, OpenAIEmbeddingModelId, GoogleVertexEmbeddingModelId, WatsonxEmbeddingModelId, and CohereEmbeddingModelId
  • Type safety improvements: Replace generic string types with specific union types for model parameters across all affected classes and interfaces
  • Documentation links: Include provider documentation URLs in type definitions for easy reference to available models
  • Backward compatibility: All model ID types include (string & NonNullable<unknown>) to maintain compatibility with custom model names

This change enhances developer experience by providing IntelliSense support and compile-time validation for model selection while maintaining backward compatibility with existing code.
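
For illustration, each of these hand-maintained unions follows the same pattern; the provider and model names below are examples only:

```ts
/**
 * Model IDs supported by the Groq chat integration.
 * @see https://console.groq.com/docs/models for the current list of models.
 */
export type GroqChatModelId =
  | "llama-3.1-8b-instant"
  | "llama-3.3-70b-versatile"
  | (string & NonNullable<unknown>);
```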
@dosubot bot added the size:L label (This PR changes 100-499 lines, ignoring generated files) and removed size:XL Jun 18, 2025
@dosubot bot added the size:M label (This PR changes 30-99 lines, ignoring generated files) and removed size:L Jun 18, 2025
@christian-bromann (Member, Author)

  1. then go ahead and revert them.

Done.

@dosubot bot added the lgtm label (PRs that are ready to be merged as-is) Jun 18, 2025
@hntrl merged commit bab0cf5 into langchain-ai:main Jun 19, 2025
38 checks passed
@christian-bromann deleted the cb/add-typed-model-ids branch Jun 19, 2025 03:51

Labels

  • auto:improvement (Medium size change to existing code to handle new use-cases)
  • lgtm (PRs that are ready to be merged as-is)
  • size:M (This PR changes 30-99 lines, ignoring generated files)
