
Conversation

github-actions bot commented Aug 6, 2025

Auto-generated migration based on schema.prisma changes.

Generated files:

  • deploy/migrations/${VERSION}_schema_update/migration.sql
  • deploy/migrations/${VERSION}_schema_update/README.md

jatorre and others added 30 commits July 16, 2025 13:40
 Redis Session Patch (COMPLETE)

  - Problem: Conversation context lost due to 10-second batch processing delay
  - Solution: Redis-based immediate session storage with graceful fallback
  - Status: Production-ready with comprehensive testing
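
A minimal sketch of the immediate-write-with-fallback pattern this commit describes. The `SessionStore` class, its method names, and the `session:<response_id>` key layout are assumptions for illustration, not LiteLLM's actual internals:

```python
# Hypothetical sketch: immediate Redis session write with graceful fallback.
# Class, method, and key names are assumptions for illustration only.
import json
import logging
from typing import Optional

import redis

logger = logging.getLogger(__name__)


class SessionStore:
    """Writes sessions to Redis as soon as they exist; falls back to memory."""

    def __init__(self, redis_url: str = "redis://localhost:6379/0", ttl_seconds: int = 3600):
        self.ttl_seconds = ttl_seconds
        self._fallback: dict = {}
        self.redis_client: Optional[redis.Redis] = None
        try:
            self.redis_client = redis.Redis.from_url(redis_url)
            self.redis_client.ping()  # fail fast if Redis is unreachable
        except redis.RedisError:
            logger.warning("Redis unavailable; using in-memory session store")
            self.redis_client = None

    def store(self, response_id: str, messages: list) -> None:
        payload = json.dumps(messages)
        if self.redis_client is not None:
            try:
                # Immediate write: no 10-second batch delay before the
                # session becomes visible to follow-up requests.
                self.redis_client.setex(f"session:{response_id}", self.ttl_seconds, payload)
                return
            except redis.RedisError:
                logger.warning("Redis write failed; falling back to memory")
        self._fallback[response_id] = payload
```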

As discussed in BerriAI#12364
  Fixes streaming ID inconsistency where streaming responses used raw provider IDs
  while non-streaming responses used properly encoded IDs with provider context.

  Changes:
  - Updated LiteLLMCompletionStreamingIterator to accept provider context
  - Added _encode_chunk_id() method using the same logic as non-streaming responses
  - Modified chunk transformation to encode all streaming item_ids with resp_ prefix
  - Updated handlers to pass custom_llm_provider and litellm_metadata to streaming iterator

  Impact:
  - Streaming chunk IDs now use the format resp_<base64_encoded_provider_context>
  - Enables session continuity when using streaming response IDs as previous_response_id
  - Allows provider detection and load balancing with streaming responses
  - Maintains backward compatibility with existing streaming functionality

  🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
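
For illustration, a self-contained sketch of the resp_<base64_encoded_provider_context> scheme the commit describes. The JSON payload layout is an assumption; LiteLLM's real encoder may pack the provider context differently:

```python
# Illustrative encoder/decoder for resp_-prefixed IDs; the JSON payload
# layout is an assumption, not LiteLLM's actual wire format.
import base64
import json
from typing import Optional


def encode_response_id(raw_id: str, custom_llm_provider: str, model_id: Optional[str]) -> str:
    """Wrap a raw provider ID as resp_<base64_encoded_provider_context>."""
    context = json.dumps(
        {"id": raw_id, "custom_llm_provider": custom_llm_provider, "model_id": model_id}
    )
    return "resp_" + base64.urlsafe_b64encode(context.encode()).decode()


def decode_response_id(encoded_id: str) -> dict:
    """Recover the provider context so routing can target the same deployment."""
    payload = encoded_id.removeprefix("resp_")
    return json.loads(base64.urlsafe_b64decode(payload.encode()).decode())
```

With an ID of this shape, a follow-up request carrying previous_response_id=resp_... lets the router decode the provider and model without a separate lookup.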
This resolves a MyPy type-checking error where model_id could be None
but wasn't explicitly typed as Optional[str].
Prevents the 'Item None has no attribute get' error by checking for None
before accessing the litellm_metadata dictionary.
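
A hedged sketch combining both fixes above; the surrounding function and the model_info key are hypothetical:

```python
# Hypothetical helper showing the two fixes: explicit Optional typing
# and a None guard before calling .get() on litellm_metadata.
from typing import Optional


def get_model_id(litellm_metadata: Optional[dict]) -> Optional[str]:
    # Explicit Optional[str] return type satisfies MyPy when model_id may be None.
    if litellm_metadata is None:
        # Guard avoids MyPy's "Item None has no attribute 'get'" and a
        # runtime AttributeError alike.
        return None
    model_info = litellm_metadata.get("model_info") or {}  # key name assumed
    return model_info.get("id")
```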
Adds unit and E2E tests to verify streaming chunk IDs are properly encoded
with consistent format across streaming responses.

## Tests Added

### Unit Test (test_reasoning_content_transformation.py)
- `test_streaming_chunk_id_encoding()`: Validates the `_encode_chunk_id()` method
  correctly encodes chunk IDs with `resp_` prefix and provider context

### E2E Tests (test_e2e_openai_responses_api.py)
- `test_streaming_id_consistency_across_chunks()`: Tests that all streaming chunk IDs
  are properly encoded across multiple chunks in a real streaming response
- `test_streaming_response_id_as_previous_response_id()`: Tests the core use case -
  using streaming response IDs for session continuity with `previous_response_id`

## Key Testing Approach
- Uses **Gemini** (non-OpenAI model) to test the transformation logic rather than
  OpenAI passthrough, since the streaming ID consistency issue occurs when LiteLLM
  transforms responses rather than just passing through to native OpenAI responses API
- Tests validate that streaming chunk IDs now use the same encoding as non-streaming responses
- Verifies session continuity works with streaming responses

Addresses @ishaan-jaff's request for unit tests covering the streaming ID consistency fix.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
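
A sketch of the session-continuity flow these tests exercise, written against the OpenAI SDK pointed at a LiteLLM proxy. The base URL, API key, and gemini-model alias are assumptions, and this is not the repo's actual test code:

```python
# Illustrative end-to-end flow: use a streaming response's final encoded ID
# as previous_response_id. Proxy URL, key, and model alias are assumed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

stream = client.responses.create(
    model="gemini-model",
    input="My name is Alice. Remember it.",
    stream=True,
)

final_response_id = None
for event in stream:
    # The terminal event carries the completed response with the encoded ID.
    if event.type == "response.completed":
        final_response_id = event.response.id

assert final_response_id is not None and final_response_id.startswith("resp_")

follow_up = client.responses.create(
    model="gemini-model",
    input="What is my name?",
    previous_response_id=final_response_id,  # session continuity under test
)
print(follow_up.output_text)
```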
Removes unused imports to fix CI linting errors:
- GenericResponseOutputItem
- OutputFunctionToolCall
Remove streaming ID consistency E2E tests as requested by @ishaan-jaff.
Keep only the mock/unit test in test_reasoning_content_transformation.py.
This reverts the streaming chunk ID encoding changes to understand the original issue better.
Original behavior was:
- Streaming chunks: raw provider IDs
- Streaming final response: raw IDs (PROBLEM!)
- Non-streaming final response: encoded IDs (correct)

The real issue: streaming final response IDs were not encoded, breaking session continuity.
…ehavior

Fixes streaming ID inconsistency to match OpenAI's Responses API behavior:
- Streaming chunks: raw message IDs (like OpenAI's msg_xxx)
- Final response: encoded IDs (like OpenAI's resp_xxx)

This enables session continuity by ensuring streaming final response IDs
have the same encoded format as non-streaming responses, allowing them
to be used as previous_response_id in follow-up requests.

Changes:
- Add custom_llm_provider and litellm_metadata to LiteLLMCompletionStreamingIterator
- Update handlers to pass provider context to streaming iterator
- Apply _update_responses_api_response_id_with_model_id to final streaming response
- Keep streaming chunks as raw IDs to match OpenAI format

Impact:
- Session continuity works with streaming responses
- Load balancing can detect provider from streaming final response IDs
- Format matches OpenAI's Responses API exactly

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
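
A condensed, self-contained sketch of that behavior. The event shape and the local _encode_final_id are simplified stand-ins for the commit's _update_responses_api_response_id_with_model_id helper, whose real signature is not shown here:

```python
# Sketch: leave chunk IDs raw, re-encode only the final response ID.
import base64
import json
from typing import Any, Iterable, Iterator


def _encode_final_id(raw_id: str, provider: str, metadata: dict) -> str:
    # Simplified stand-in for _update_responses_api_response_id_with_model_id.
    context = json.dumps(
        {"id": raw_id, "provider": provider, "model_id": metadata.get("model_id")}
    )
    return "resp_" + base64.urlsafe_b64encode(context.encode()).decode()


def iterate_with_encoded_final_id(
    stream: Iterable[Any], custom_llm_provider: str, litellm_metadata: dict
) -> Iterator[Any]:
    for event in stream:
        if getattr(event, "type", None) == "response.completed":
            # Only the terminal response object is re-IDed; intermediate
            # chunks keep their raw msg_xxx IDs, matching OpenAI's format.
            event.response.id = _encode_final_id(
                event.response.id, custom_llm_provider, litellm_metadata
            )
        yield event
```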
Updates the unit test to verify streaming chunk IDs are raw (not encoded)
to match OpenAI's responses API format:
- Streaming chunks: raw message IDs (like msg_xxx)
- Final response: encoded IDs (like resp_xxx)

This reflects the correct behavior implemented in the fix.
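
A pytest-style sketch of the updated assertion, exercising the toy iterator from the previous sketch rather than LiteLLM's real classes:

```python
# Hypothetical unit test: chunks stay raw, the final response ID is encoded.
from types import SimpleNamespace


def test_streaming_ids_match_openai_format():
    chunks = [
        SimpleNamespace(type="response.output_item.added", item_id="msg_abc"),
        SimpleNamespace(
            type="response.completed",
            response=SimpleNamespace(id="raw-provider-id"),
        ),
    ]
    events = list(iterate_with_encoded_final_id(chunks, "gemini", {"model_id": "m-1"}))

    # Intermediate chunk IDs stay raw (msg_xxx), as OpenAI emits them.
    assert events[0].item_id == "msg_abc"
    # The final response ID is encoded (resp_xxx) so it can serve as
    # previous_response_id in a follow-up request.
    assert events[1].response.id.startswith("resp_")
```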
- Add test_responses_api.py for testing multiple providers
- Add responses_api_config.yaml with Claude, DeepSeek, and Gemini
- Add RESPONSES_API_TEST_README.md with setup instructions
- Tests session management with Redis for context retention
- Validates basic responses, streaming, and session linking
The Responses API wasn't storing sessions in Redis for streaming requests,
only for non-streaming ones. This caused context to be lost when using
previous_response_id with streaming responses.

Changes:
- Add _store_session_in_redis method to streaming iterator
- Store full conversation history immediately when stream completes
- Pass litellm_completion_request to streaming iterator for message history
- Ensures streaming behaves identically to non-streaming for session storage

This fixes the timing issue where a delay was needed between requests
to allow batch processing to store sessions.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
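
A sketch of the immediate write described above. _store_session_in_redis is named in the commit, but this body, its signature, and the key layout are assumptions:

```python
# Hypothetical body for the commit's _store_session_in_redis: persist the
# full conversation history the moment the stream completes.
import json

import redis


def _store_session_in_redis(
    redis_client: redis.Redis,
    response_id: str,
    litellm_completion_request: dict,
    assistant_text: str,
    ttl_seconds: int = 3600,
) -> None:
    # Request messages plus the streamed reply form the stored history, so
    # streaming matches non-streaming without waiting for batch processing.
    history = list(litellm_completion_request.get("messages", []))
    history.append({"role": "assistant", "content": assistant_text})
    redis_client.setex(f"session:{response_id}", ttl_seconds, json.dumps(history))
```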
…y' into chore/merge-streaming-id-consistency
mateo-di and others added 16 commits August 8, 2025 10:33
Port streaming ID consistency fixes from `jatorre/feature/streaming-id-consistency` to `main`
…nsistency' into chore/merge-streaming-id-consistency"

This reverts commit ce3b79c, reversing
changes made to b7db96f.
…ssion-timing

Jatorre/fix/responses api redis session timing
Configure scheduler with memory leak prevention settings
fix mcp_table server_name error & remove arm64 platform from dockerfile
Automatic sync from upstream BerriAI/litellm
Preparing for v1.78.5-stable release

Strategy: Accept all upstream changes (main is a mirror)