
fix(openai): resolve vLLM compatibility issue with ChatOpenAI (#32252) #1


Merged · 1 commit into master on Jul 26, 2025

Conversation


@20ns 20ns commented Jul 26, 2025

Overview

This PR fixes the vLLM compatibility issue reported in langchain-ai#32252, where using vLLM through the LangChain-OpenAI integration raised TypeError: "Received response with null value for 'choices'".

Problem Description

The issue occurred because some OpenAI-compatible APIs like vLLM return valid response objects, but the OpenAI client library's model_dump() method sometimes fails to properly serialize the choices field, returning None instead of the actual choices array. This caused the _create_chat_result method in BaseChatOpenAI to raise a TypeError.
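
For context, a typical setup that surfaces the error looks like the sketch below; the model name, endpoint, and placeholder API key are illustrative, not taken from the issue:

from langchain_openai import ChatOpenAI

# Hypothetical local vLLM server exposing the OpenAI-compatible API under /v1;
# local vLLM deployments commonly accept a placeholder key such as "EMPTY".
llm = ChatOpenAI(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
)

# Before this fix, the call below could fail with:
# TypeError: Received response with null value for `choices`
llm.invoke("Hello")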

Solution

Added a fallback mechanism in the _create_chat_result method that:

  1. Primary Path: First attempts to get choices from response.model_dump() as before
  2. Fallback Path: If model_dump() returns choices: None, attempts to access choices directly from the response object via response.choices
  3. Graceful Conversion: Converts the raw choice objects to the expected dictionary format using model_dump() or __dict__ fallback
  4. Backward Compatibility: Maintains the original error behavior when choices are truly unavailable
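
In outline, the fallback behaves like the sketch below. This is a simplified rendering of the four steps above, not the verbatim patch; the exact variable names and error message in base.py may differ:

# Step 1: primary path, serialize the response as before.
response_dict = response if isinstance(response, dict) else response.model_dump()

# Step 2: fallback path when model_dump() returned choices: None.
if response_dict.get("choices") is None and not isinstance(response, dict):
    raw_choices = getattr(response, "choices", None)
    if raw_choices is not None:
        # Step 3: graceful conversion of raw choice objects to dicts.
        response_dict["choices"] = [
            c.model_dump() if hasattr(c, "model_dump") else dict(c.__dict__)
            for c in raw_choices
        ]

# Step 4: backward compatibility, original error when choices are truly absent.
if response_dict.get("choices") is None:
    raise TypeError("Received response with null value for `choices`.")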

Changes Made

Core Fix

  • File: libs/partners/openai/langchain_openai/chat_models/base.py
  • Method: _create_chat_result() (lines ~1208-1230)
  • Change: Added fallback logic to handle cases where model_dump() returns choices: None

Test Coverage

  • File: libs/partners/openai/tests/unit_tests/chat_models/test_vllm_compatibility.py (new)
  • Coverage: 6 comprehensive test cases covering:
    • Main Fix Test: vLLM response where model_dump() fails but response.choices works (sketched after this list)
    • Backward Compatibility: Cases where choices are truly unavailable still raise errors
    • Working Cases: Normal responses that work through model_dump()
    • Dict Responses: Direct dictionary responses (which bypass the issue entirely)
    • Edge Cases: Null choices and a missing choices key
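
A sketch of the main fix test follows, built on unittest.mock; the exact test names, assertions, and helpers in test_vllm_compatibility.py may differ:

from unittest.mock import MagicMock

from langchain_openai import ChatOpenAI


def test_model_dump_null_choices_falls_back() -> None:
    # Model name and key are placeholders for a unit test; no network calls occur.
    llm = ChatOpenAI(model="gpt-4o-mini", api_key="test")

    # A raw choice object whose model_dump() yields a normal choice dict.
    choice = MagicMock()
    choice.model_dump.return_value = {
        "message": {"role": "assistant", "content": "hi"},
        "finish_reason": "stop",
        "index": 0,
    }

    # A response whose model_dump() loses choices but whose attribute keeps them.
    response = MagicMock()
    response.model_dump.return_value = {"choices": None, "usage": {}}
    response.choices = [choice]

    result = llm._create_chat_result(response)
    assert result.generations[0].message.content == "hi"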

Testing

# Run the new vLLM compatibility tests
cd libs/partners/openai
uv run --group test pytest tests/unit_tests/chat_models/test_vllm_compatibility.py -v

# All 6 tests pass ✅

Code Quality

  • Linting: All code passes ruff format and ruff check
  • Type Safety: Proper error handling with try/except blocks
  • Documentation: Comprehensive docstrings and inline comments
  • Error Messages: Maintains original error messages for debugging

Backward Compatibility

  • No Breaking Changes: All existing functionality preserved
  • Error Behavior: Original errors still raised when appropriate
  • API Compatibility: No changes to public interfaces
  • Performance: Minimal overhead (only executes fallback when needed)

Related Issues

  • langchain-ai#32252: LangChain-OpenAI raises error due to null choices when using vLLM OpenAI-compatible API

Review Checklist

  • Stable Public Interfaces: No breaking changes to exported APIs
  • Type Hints: All functions have complete type annotations
  • Testing: Comprehensive unit tests covering happy path and edge cases
  • Security: No dangerous patterns, proper exception handling
  • Documentation: Google-style docstrings with clear explanations
  • Code Quality: Passes lint and format checks
  • Architecture: Clean, maintainable solution with proper error handling

- Add fallback mechanism in _create_chat_result to handle cases where
  OpenAI client's model_dump() returns choices as None even when the
  original response object contains valid choices data
- This resolves TypeError: 'Received response with null value for choices'
  when using vLLM with LangChain-OpenAI integration
- Add comprehensive test suite to validate the fix and edge cases
- Maintain backward compatibility for cases where choices are truly unavailable
- Fix addresses GitHub issue langchain-ai#32252

The issue occurred because some OpenAI-compatible APIs like vLLM return
valid response objects, but the OpenAI client library's model_dump() method
sometimes fails to properly serialize the choices field, returning None
instead of the actual choices array. This fix attempts to access the choices
directly from the response object when model_dump() fails.

@20ns merged commit ff2478f into master on Jul 26, 2025