@galshubeli galshubeli commented Jun 23, 2025

Summary by CodeRabbit

  • Refactor

    • Unified model integrations (OpenAI, Azure, Gemini, Ollama) to use a streamlined LiteLLM backend, simplifying chat sessions and message handling.
    • Removed the concept of output methods, resulting in a cleaner interface for sending messages to models.
    • Model configuration now supports flexible additional parameters and improved JSON serialization.
    • Enhanced model configuration for structured data extraction with explicit JSON response formatting and improved temperature defaults.
    • Updated example notebooks and documentation to use new default models and reflect the LiteLLM integration.
    • Simplified chat session implementations by delegating all core logic and state management to LiteLLM classes.
    • Removed direct client initializations and manual API handling for all supported LLM providers.
    • Refined ontology and knowledge graph schemas in examples for richer domain modeling and clearer attribute definitions.
    • Streamlined test imports and model instantiations to align with the new LiteLLM-based model usage.
    • Adjusted CI workflows to focus on updated model configurations and environment variables.
    • Made litellm and ollama mandatory dependencies, removing optional extras for LLM providers.
  • Bug Fixes

    • Improved handling of model configuration for extracting structured data.
  • Tests

    • Updated test cases to use new model names and adapt to the revised model integration approach.
  • Chores

    • Added environment variables for Azure API credentials to the test workflow for improved CI support.

@galshubeli galshubeli self-assigned this Jun 23, 2025
@coderabbitai
Copy link

coderabbitai bot commented Jun 23, 2025

## Walkthrough

This update refactors all generative model classes (OpenAI, Azure OpenAI, Gemini, Ollama) to delegate functionality to a unified LiteLLM backend, removing direct SDK dependencies and manual chat management. The `OutputMethod` enum and related logic are eliminated, replaced by flexible parameter passing. Test cases are updated for model naming consistency. Additionally, `KnowledgeGraphModelConfig.with_model` now creates a specialized `extract_data` model with adjusted generation configuration.

## Changes

| File(s)                                                                 | Change Summary                                                                                                           |
|-------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
| graphrag_sdk/models/litellm.py                                          | LiteModel and LiteModelChatSession refactored to accept `additional_params`, remove `output_method`, and simplify config.|
| graphrag_sdk/models/model.py                                            | Removed `OutputMethod` enum; GenerativeModelConfig now supports arbitrary params; updated serialization/deserialization. |
| graphrag_sdk/models/openai.py<br>graphrag_sdk/models/azure_openai.py<br>graphrag_sdk/models/gemini.py<br>graphrag_sdk/models/ollama.py | All model classes now delegate to LiteModel; removed direct SDK usage, manual chat history, and `output_method` logic.   |
| graphrag_sdk/model_config.py                                            | `KnowledgeGraphModelConfig.with_model` now creates a specialized `extract_data` model with adjusted config.              |
| graphrag_sdk/steps/extract_data_step.py                                 | Removed all `OutputMethod` usage from method signatures and calls.                                                       |
| graphrag_sdk/orchestrator/orchestrator.py                               | Removed unused import of `OutputMethod`.                                                                                 |
| tests/test_auto_create_ontology.py<br>tests/test_kg.py<br>tests/test_ontology_from_kg.py | Updated Gemini model instantiations to use `"gemini-1.5-flash"` instead of `"gemini-1.5-flash-001"`.                    |
| tests/test_multi_agent.py                                               | Switched from GeminiGenerativeModel to LiteModel with `"azure/o3-mini"` for test setup.                                  |

## Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant GenerativeModel as GenerativeModel (OpenAI/Gemini/Ollama/Azure)
    participant LiteModel
    participant LiteModelChatSession

    User->>GenerativeModel: start_chat(system_instruction)
    GenerativeModel->>LiteModel: start_chat(system_instruction)
    LiteModel->>LiteModelChatSession: create session

    User->>GenerativeModel: send_message(message)
    GenerativeModel->>LiteModel: delegate to chat session
    LiteModel->>LiteModelChatSession: send_message(message)
    LiteModelChatSession-->>LiteModel: return GenerationResponse
    LiteModel-->>GenerativeModel: return GenerationResponse
    GenerativeModel-->>User: return GenerationResponse

```

## Suggested reviewers

  • swilly22
  • gkorland

## Poem

In the warren of code, we hopped with delight,
Refactoring models to keep things light.
LiteLLM now leads the way,
No more OutputMethod in our play!
With every test and every chat,
The code runs smoother—imagine that!
🐇✨




---

<details>
<summary>📜 Recent review details</summary>

**Configuration used: CodeRabbit UI**
**Review profile: CHILL**
**Plan: Pro**


<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between a31dcaa19354533f280d6c9e9ca09f04530f7410 and 8b7c17b80be108bcd4751b64c63b338c47a88b4a.

</details>

<details>
<summary>⛔ Files ignored due to path filters (1)</summary>

* `poetry.lock` is excluded by `!**/*.lock`

</details>

<details>
<summary>📒 Files selected for processing (2)</summary>

* `.github/workflows/test.yml` (2 hunks)
* `pyproject.toml` (2 hunks)

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (2)</summary>

* .github/workflows/test.yml
* pyproject.toml

</details>

</details>

---

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


<details>
<summary>🪧 Tips</summary>

### Chat

There are 3 ways to chat with [CodeRabbit](https://coderabbit.ai?utm_source=oss&utm_medium=github&utm_campaign=FalkorDB/GraphRAG-SDK&utm_content=116):

- Review comments: Directly reply to a review comment made by CodeRabbit. Example:
  - `I pushed a fix in commit <commit_id>, please review it.`
  - `Explain this complex logic.`
  - `Open a follow-up GitHub issue for this discussion.`
- Files and specific lines of code (under the "Files changed" tab): Tag `@coderabbitai` in a new review comment at the desired location with your query. Examples:
  - `@coderabbitai explain this code block.`
  - `@coderabbitai modularize this function.`
- PR comments: Tag `@coderabbitai` in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
  - `@coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.`
  - `@coderabbitai read src/utils.ts and explain its main purpose.`
  - `@coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.`
  - `@coderabbitai help me debug CodeRabbit configuration file.`

### Support

Need help? Create a ticket on our [support page](https://www.coderabbit.ai/contact-us/support) for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

### CodeRabbit Commands (Invoked using PR comments)

- `@coderabbitai pause` to pause the reviews on a PR.
- `@coderabbitai resume` to resume the paused reviews.
- `@coderabbitai review` to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
- `@coderabbitai full review` to do a full review from scratch and review all the files again.
- `@coderabbitai summary` to regenerate the summary of the PR.
- `@coderabbitai generate docstrings` to [generate docstrings](https://docs.coderabbit.ai/finishing-touches/docstrings) for this PR.
- `@coderabbitai generate sequence diagram` to generate a sequence diagram of the changes in this PR.
- `@coderabbitai resolve` resolve all the CodeRabbit review comments.
- `@coderabbitai configuration` to show the current CodeRabbit configuration for the repository.
- `@coderabbitai help` to get help.

### Other keywords and placeholders

- Add `@coderabbitai ignore` anywhere in the PR description to prevent this PR from being reviewed.
- Add `@coderabbitai summary` to generate the high-level summary at a specific location in the PR description.
- Add `@coderabbitai` anywhere in the PR title to generate the title automatically.

### CodeRabbit Configuration File (`.coderabbit.yaml`)

- You can programmatically configure CodeRabbit by adding a `.coderabbit.yaml` file to the root of your repository.
- Please see the [configuration documentation](https://docs.coderabbit.ai/guides/configure-coderabbit) for more information.
- If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: `# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json`

### Documentation and Community

- Visit our [Documentation](https://docs.coderabbit.ai) for detailed information on how to use CodeRabbit.
- Join our [Discord Community](http://discord.gg/coderabbit) to get help, request features, and share feedback.
- Follow us on [X/Twitter](https://twitter.com/coderabbitai) for updates and announcements.

</details>

<!-- tips_end -->

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (2)
graphrag_sdk/model_config.py (1)

38-73: Excellent enhancement with robust configuration logic.

The new with_model implementation provides significant improvements:

  • Smart configuration: Sets temperature=0 and JSON response format for extract_data operations, which is ideal for structured data extraction
  • Robust fallback: Gracefully handles model creation failures
  • Preserves user settings: Respects existing configuration while making necessary adjustments

Consider making the exception handling more specific for better debugging:

```diff
-        except Exception:
+        except (TypeError, ValueError, AttributeError) as e:
             # Fallback: use original model if creation fails
+            # Log the specific error for debugging if needed
             extract_data_model = model
```
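The reviewed `with_model` flow can be sketched in isolation. The class names below are minimal stand-ins for illustration, not the SDK's actual implementations; only the deep-copy, override, and fallback logic mirrors the code under review.

```python
import copy

class GenerativeModelConfig:
    """Minimal stand-in for the SDK's config class (illustrative only)."""
    def __init__(self, temperature=None, response_format=None):
        self.temperature = temperature
        self.response_format = response_format

class Model:
    """Minimal stand-in for a generative model wrapper."""
    def __init__(self, name, generation_config=None):
        self.name = name
        self.generation_config = generation_config or GenerativeModelConfig()

def make_extract_data_model(model):
    # Deep-copy so the caller's model is never mutated.
    try:
        extract = copy.deepcopy(model)
    except (TypeError, ValueError, AttributeError):
        # Fallback: reuse the original model if copying fails.
        return model
    # Structured extraction wants deterministic JSON output.
    extract.generation_config.temperature = 0
    extract.generation_config.response_format = {"type": "json_object"}
    return extract

base = Model("openai/gpt-4.1", GenerativeModelConfig(temperature=0.7))
extract = make_extract_data_model(base)
print(extract.generation_config.temperature)  # 0
print(base.generation_config.temperature)     # 0.7, original untouched
```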
graphrag_sdk/models/gemini.py (1)

19-41: Consider adding **kwargs support for consistency with other model implementations.

The Azure OpenAI implementation accepts **kwargs and passes them as additional_params to the LiteModel. For consistency across model implementations, consider adding the same support to the Gemini model.

```diff
 def __init__(
     self,
     model_name: str,
     generation_config: Optional[GenerativeModelConfig] = None,
     system_instruction: Optional[str] = None,
+    **kwargs: Any,
 ):
```

And update the LiteModel initialization:

```diff
 self._lite_model = LiteModel(
     model_name=lite_model_name,
     generation_config=generation_config,
-    system_instruction=system_instruction
+    system_instruction=system_instruction,
+    additional_params=kwargs
 )
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b316fb and eedc491.

📒 Files selected for processing (13)
  • graphrag_sdk/model_config.py (2 hunks)
  • graphrag_sdk/models/azure_openai.py (7 hunks)
  • graphrag_sdk/models/gemini.py (4 hunks)
  • graphrag_sdk/models/litellm.py (6 hunks)
  • graphrag_sdk/models/model.py (4 hunks)
  • graphrag_sdk/models/ollama.py (5 hunks)
  • graphrag_sdk/models/openai.py (6 hunks)
  • graphrag_sdk/orchestrator/orchestrator.py (0 hunks)
  • graphrag_sdk/steps/extract_data_step.py (2 hunks)
  • tests/test_auto_create_ontology.py (1 hunks)
  • tests/test_kg.py (1 hunks)
  • tests/test_multi_agent.py (2 hunks)
  • tests/test_ontology_from_kg.py (1 hunks)
💤 Files with no reviewable changes (1)
  • graphrag_sdk/orchestrator/orchestrator.py
🧰 Additional context used
🪛 Ruff (0.11.9)
graphrag_sdk/models/openai.py

3-3: .litellm.LiteModelChatSession imported but unused

Remove unused import: .litellm.LiteModelChatSession

(F401)


22-22: Do not perform function call GenerativeModelConfig in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

graphrag_sdk/models/ollama.py

23-23: Do not perform function call GenerativeModelConfig in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test (openai/gpt-4o)
  • GitHub Check: test (gemini/gemini-2.0-flash)
🔇 Additional comments (12)
tests/test_kg.py (1)

71-71: LGTM: Model name updated for consistency.

The model name change from "gemini-1.5-flash-001" to "gemini-1.5-flash" aligns with the refactoring of GeminiGenerativeModel to use the unified LiteLLM backend.

tests/test_auto_create_ontology.py (1)

27-27: LGTM: Consistent model naming update.

The model name change matches the pattern established across other test files for the LiteLLM backend refactoring.

tests/test_ontology_from_kg.py (1)

58-58: LGTM: Model naming consistency maintained.

The model name update is consistent with the refactoring changes across the test suite.

tests/test_multi_agent.py (2)

9-9: LGTM: Import updated for unified LiteLLM backend.

The import change from GeminiGenerativeModel to LiteModel aligns with the refactoring to use the unified LiteLLM backend.


188-188: Verify the intentional model provider change.

The model has been changed from GeminiGenerativeModel("gemini-1.5-flash-001") to LiteModel("azure/o3-mini"), which represents a switch from Gemini to Azure OpenAI. While this aligns with testing the LiteModel directly, please confirm this provider change is intentional for this specific test case.

```shell
#!/bin/bash
# Description: Check if other tests also use different model providers or if this is unique
# Expected: Find patterns of model provider usage across test files

echo "=== Checking model instantiations across test files ==="
rg -A 2 -B 2 "(GeminiGenerativeModel|LiteModel|OpenAiGenerativeModel|AzureOpenAiGenerativeModel)" tests/
```
graphrag_sdk/model_config.py (1)

1-2: LGTM: Import statements updated for refactored model interfaces.

The imports are correctly updated to use the unified model interfaces.

graphrag_sdk/steps/extract_data_step.py (1)

190-190: Clean API simplification

The removal of the output_method parameter from all model calls aligns well with the unified LiteLLM backend approach and simplifies the API.

Also applies to: 197-197, 368-368

graphrag_sdk/models/model.py (1)

35-90: Excellent enhancement for flexible configuration

The addition of **kwargs support and dynamic attribute handling makes GenerativeModelConfig much more extensible. The serialization methods correctly handle both known and arbitrary parameters, which will be valuable for provider-specific configurations.
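The pattern described here — typed common parameters plus arbitrary `**kwargs` stored as attributes, with `None` values dropped at serialization time — can be sketched as follows. `FlexibleConfig` is a hypothetical name used for illustration, not the SDK's class.

```python
class FlexibleConfig:
    """Illustrative sketch of a config accepting arbitrary extra params."""
    def __init__(self, temperature=None, top_p=None,
                 max_completion_tokens=None, **kwargs):
        self.temperature = temperature
        self.top_p = top_p
        self.max_completion_tokens = max_completion_tokens
        # Store provider-specific extras as plain attributes.
        for key, value in kwargs.items():
            setattr(self, key, value)

    def to_json(self):
        # Serialize only the parameters that are actually set.
        return {k: v for k, v in vars(self).items() if v is not None}

cfg = FlexibleConfig(temperature=0.2, seed=42, presence_penalty=0.1)
print(cfg.to_json())  # {'temperature': 0.2, 'seed': 42, 'presence_penalty': 0.1}
```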

graphrag_sdk/models/litellm.py (2)

196-212: Well-designed parameter merging

The approach of merging generation_config and additional_params using dictionary unpacking is clean and allows for flexible parameter passing. The order ensures that additional_params can override default configuration values when needed.

Also applies to: 232-238
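The merge order matters only on key collisions: Python's dict unpacking keeps the last occurrence of a duplicate key, which is what lets `additional_params` override defaults. A standalone sketch (the dict values here are illustrative, not taken from the SDK):

```python
# Mimics generation_config.to_json() output and user-supplied extras.
generation_config = {"temperature": 0, "response_format": {"type": "json_object"}}
additional_params = {"temperature": 0.5, "api_version": "2024-02-01"}

# Later unpacking wins on duplicate keys, so additional_params overrides.
merged = {**generation_config, **additional_params}
print(merged["temperature"])  # 0.5
```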


75-75: Verify parameter compatibility with LiteLLM

The change from max_tokens to max_completion_tokens could potentially break compatibility if LiteLLM doesn't support this parameter name across all providers.

Please verify that max_completion_tokens is supported by LiteLLM for all provider types:

Does LiteLLM support max_completion_tokens parameter for all providers like OpenAI, Azure OpenAI, Gemini, and Ollama?
graphrag_sdk/models/ollama.py (1)

37-69: Excellent refactoring to unified backend

The delegation pattern to LiteLLM is well-implemented:

  • Proper model name prefixing with "ollama/"
  • Host parameter correctly mapped to api_base
  • Clean property accessors maintain API compatibility
  • Chat session delegation removes code duplication

Also applies to: 128-177
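The delegation pattern summarized above can be sketched with stand-in classes. The names below are hypothetical; in the SDK the real classes are `LiteModel` and `OllamaGenerativeModel`.

```python
class LiteModelStub:
    """Stand-in for the LiteLLM-backed model."""
    def __init__(self, model_name, api_base=None):
        self.model_name = model_name
        self.api_base = api_base

class OllamaModelSketch:
    def __init__(self, model_name, host=None):
        # Prefix so the backend routes the call to the Ollama provider;
        # the host parameter maps onto LiteLLM's api_base.
        self._lite_model = LiteModelStub(f"ollama/{model_name}", api_base=host)
        self._model_name = model_name

    @property
    def model_name(self):
        # Expose the unprefixed name for API compatibility.
        return self._model_name

m = OllamaModelSketch("llama3", host="http://localhost:11434")
print(m.model_name)              # llama3
print(m._lite_model.model_name)  # ollama/llama3
```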

graphrag_sdk/models/openai.py (1)

33-163: Clean implementation following unified pattern

After addressing the import and default argument issues, this implementation nicely mirrors the pattern used in other model classes, providing a consistent API while delegating to the unified LiteLLM backend.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
tests/test_multi_agent.py (1)

191-191: Fix line length violation.

The line exceeds the 79-character limit. Consider breaking it into multiple lines for better readability.

```diff
-        cls.model = LiteModel("azure/o3-mini", generation_config=GenerativeModelConfig(temperature=None))
+        cls.model = LiteModel(
+            "azure/o3-mini",
+            generation_config=GenerativeModelConfig(temperature=None)
+        )
```
graphrag_sdk/model_config.py (1)

39-39: Fix line length violations for better readability.

Multiple lines exceed the 79-character limit. Consider breaking them for better code readability.

```diff
-        The extract_data model will be configured with temperature=0 and JSON response format
+        The extract_data model will be configured with temperature=0 and JSON
+        response format
         for structured data extraction.

         Args:
             model (GenerativeModel): The generative model to use.

         Returns:
-            KnowledgeGraphModelConfig: The new KnowledgeGraphModelConfig instance.
+            KnowledgeGraphModelConfig: The new KnowledgeGraphModelConfig
+                instance.

-            # User explicitly set temperature (even if to None), only override if not None
+            # User explicitly set temperature (even if to None),
+            # only override if not None
             if model.generation_config.temperature is not None:

-            # If user explicitly set temperature=None, respect that and don't override
+            # If user explicitly set temperature=None, respect that and
+            # don't override
```
Also applies to: 46-46, 54-54, 57-57

graphrag_sdk/models/model.py (3)

38-46: Consider refactoring the constructor to reduce complexity.

The constructor has many parameters (7), which makes it complex to use and maintain. Consider using a builder pattern or converting to a dataclass for better ergonomics.

Alternative approach using a dataclass:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

_TEMP_NOT_SET = object()  # module-level sentinel

@dataclass
class GenerativeModelConfig:
    temperature: Any = _TEMP_NOT_SET
    top_p: Optional[float] = None
    top_k: Optional[int] = None
    max_tokens: Optional[int] = None
    stop: Optional[list] = None
    response_format: Optional[dict] = None
    extra_params: Dict[str, Any] = field(default_factory=dict)
    _temperature_was_set: bool = field(init=False, default=False)

    def __post_init__(self):
        if self.temperature is _TEMP_NOT_SET:
            self.temperature = None
            self._temperature_was_set = False
        else:
            self._temperature_was_set = True

        # Set extra parameters as attributes
        for key, value in self.extra_params.items():
            setattr(self, key, value)
```

14-15: Fix line length violations throughout the file.

Multiple lines exceed the 79-character limit. Break long lines for better readability.

```diff
-    This configuration follows OpenAI-style parameter naming but is designed to be 
-    compatible with other generative models. Supports both predefined and arbitrary parameters.
+    This configuration follows OpenAI-style parameter naming but is designed to be
+    compatible with other generative models. Supports both predefined and arbitrary
+    parameters.

-        return f"GenerativeModelConfig({', '.join(f'{k}={v}' for k, v in self.to_json().items())})"
+        items = ', '.join(f'{k}={v}' for k, v in self.to_json().items())
+        return f"GenerativeModelConfig({items})"

-        Serialize the configuration to a dictionary, excluding any fields with None values.
+        Serialize the configuration to a dictionary, excluding any fields with None
+        values.

-            dict: A dictionary containing only the parameters that are explicitly set 
+            dict: A dictionary containing only the parameters that are explicitly set
                 (i.e., not None).

-        return {k: v for k, v in self.__dict__.items() if v is not None and k != '_temperature_was_set'}
+        return {k: v for k, v in self.__dict__.items()
+                if v is not None and k != '_temperature_was_set'}

-        # For other parameters, only pass if they exist in JSON to maintain default behavior
+        # For other parameters, only pass if they exist in JSON to maintain
+        # default behavior

-        known_keys = {'temperature', 'top_p', 'top_k', 'max_tokens', 'stop', 'response_format'}
+        known_keys = {'temperature', 'top_p', 'top_k', 'max_tokens', 'stop',
+                      'response_format'}
```


40-40: Fix parameter spacing formatting issue.

Remove unexpected spaces around the equals sign in the parameter definition.

```diff
-        temperature = _TEMP_NOT_SET,
+        temperature=_TEMP_NOT_SET,
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eedc491 and a821191.

📒 Files selected for processing (3)
  • graphrag_sdk/model_config.py (2 hunks)
  • graphrag_sdk/models/model.py (2 hunks)
  • tests/test_multi_agent.py (2 hunks)
🧰 Additional context used
🪛 Flake8 (7.2.0)
graphrag_sdk/model_config.py

[error] 39-39: line too long (93 > 79 characters)

(E501)


[error] 46-46: line too long (82 > 79 characters)

(E501)


[error] 54-54: line too long (90 > 79 characters)

(E501)


[error] 57-57: line too long (86 > 79 characters)

(E501)

graphrag_sdk/models/model.py

[error] 14-14: line too long (82 > 79 characters)

(E501)


[error] 15-15: line too long (95 > 79 characters)

(E501)


[error] 20-20: line too long (81 > 79 characters)

(E501)


[error] 40-40: unexpected spaces around keyword / parameter equals

(E251)


[error] 40-40: unexpected spaces around keyword / parameter equals

(E251)


[error] 67-67: too many blank lines (2)

(E303)


[error] 68-68: line too long (99 > 79 characters)

(E501)


[error] 72-72: line too long (91 > 79 characters)

(E501)


[error] 75-75: line too long (85 > 79 characters)

(E501)


[error] 83-83: line too long (104 > 79 characters)

(E501)


[error] 91-91: line too long (92 > 79 characters)

(E501)


[error] 104-104: line too long (95 > 79 characters)

(E501)

tests/test_multi_agent.py

[error] 191-191: line too long (105 > 79 characters)

(E501)

🪛 Pylint (3.3.7)
graphrag_sdk/models/model.py

[refactor] 38-38: Too many arguments (7/5)

(R0913)


[refactor] 38-38: Too many positional arguments (7/5)

(R0917)

🔇 Additional comments (10)
tests/test_multi_agent.py (1)

9-14: LGTM! Model abstraction refactoring looks good.

The migration from GeminiGenerativeModel to LiteModel aligns with the unified model abstraction approach described in the PR objectives. The additional imports for GenerativeModelConfig support the new configuration approach.

graphrag_sdk/model_config.py (3)

38-48: LGTM! Good documentation of the specialized configuration.

The docstring clearly explains that the extract_data model will be configured with specific parameters for structured data extraction. This helps users understand the behavior.


66-74: Good error handling with fallback pattern.

The try-catch block with fallback to the original model ensures robustness when model instantiation fails. This prevents breaking existing functionality.


49-75: Verify the temperature handling logic works correctly with GenerativeModelConfig.

The temperature handling logic is sophisticated and respects user intent, but needs verification that it works correctly with the new GenerativeModelConfig._temperature_was_set flag implementation.

```shell
#!/bin/bash
# Description: Verify that GenerativeModelConfig properly tracks temperature setting
# Expected: Find the _temperature_was_set implementation and sentinel pattern

ast-grep --pattern 'class GenerativeModelConfig {
  $$$
  _temperature_was_set = $$$
  $$$
}'

# Also check for the sentinel object usage
rg -A 5 "_TEMP_NOT_SET"
```
graphrag_sdk/models/model.py (6)

14-25: LGTM! Good documentation of the enhanced configuration capabilities.

The updated docstring clearly explains the OpenAI-style parameter naming and support for arbitrary parameters. The example demonstrates the usage well.


35-55: Clever sentinel pattern for tracking explicit parameter setting.

The use of a sentinel object to differentiate between unset and explicitly set to None is a robust solution. This enables the sophisticated temperature handling logic in KnowledgeGraphModelConfig.with_model.
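The sentinel trick works because `is` compares object identity: a private module-level `object()` can never be passed in by accident, so `None` remains a legitimate user value. A minimal sketch (class name is illustrative):

```python
_TEMP_NOT_SET = object()  # unique sentinel: distinguishes "unset" from "explicitly None"

class ConfigSketch:
    def __init__(self, temperature=_TEMP_NOT_SET):
        if temperature is _TEMP_NOT_SET:
            # Caller omitted the argument entirely.
            self.temperature = None
            self._temperature_was_set = False
        else:
            # Caller passed something, even if that something is None.
            self.temperature = temperature
            self._temperature_was_set = True

print(ConfigSketch()._temperature_was_set)                  # False
print(ConfigSketch(temperature=None)._temperature_was_set)  # True
```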


62-64: Good use of dynamic attribute storage.

Using setattr to store arbitrary kwargs as instance attributes provides the flexibility needed for supporting various model-specific parameters.


70-83: Verify the serialization logic handles edge cases correctly.

The dynamic serialization approach looks good, but ensure it properly handles edge cases and doesn't inadvertently expose internal attributes.

```python
# Test edge cases for the serialization logic
config1 = GenerativeModelConfig(temperature=0.5, custom_param="test")
print("Basic serialization:", config1.to_json())

config2 = GenerativeModelConfig(temperature=None)
print("Explicit None serialization:", config2.to_json())

config3 = GenerativeModelConfig()
print("Default serialization:", config3.to_json())

# Test round-trip serialization
json_data = config1.to_json()
config_restored = GenerativeModelConfig.from_json(json_data)
print("Round-trip test:", config_restored.to_json() == json_data)
```

86-107: Robust deserialization with parameter validation.

The conditional parameter passing ensures that only explicitly provided parameters are set, maintaining backward compatibility and default behavior.


132-132: LGTM! OutputMethod removal simplifies the interface.

Removing the output_method parameter from send_message aligns with the broader refactoring to eliminate the OutputMethod enum and simplifies the API.

@galshubeli galshubeli requested a review from swilly22 June 24, 2025 14:45
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
graphrag_sdk/models/azure_openai.py (1)

78-78: Fix invalid type hint: any should be Any.

The type hint any is not valid in Python. Use Any from the typing module.

```diff
-    def parse_generate_content_response(self, response: any) -> GenerationResponse:
+    def parse_generate_content_response(self, response: Any) -> GenerationResponse:
```
graphrag_sdk/models/gemini.py (1)

73-73: Fix invalid type hint: any should be Any.

The type hint any is not valid in Python. Use Any from the typing module.

```diff
-    def parse_generate_content_response(self, response: any) -> GenerationResponse:
+    def parse_generate_content_response(self, response: Any) -> GenerationResponse:
```
🧹 Nitpick comments (4)
README.md (1)

38-38: Fix punctuation for better readability.

Add a comma after "vendor" for improved grammar and readability.

```diff
-  To choose vendor use the prefix "specific_vendor/your_model", for example "openai/gpt-4.1".
+  To choose vendor, use the prefix "specific_vendor/your_model", for example "openai/gpt-4.1".
```
graphrag_sdk/model_config.py (1)

51-57: Fix formatting and approve deep copy approach.

The deep copy approach correctly prevents mutation of the original model when configuring the extract_data model. However, there are too many blank lines that should be cleaned up.

```diff
         extract_data_model.generation_config.response_format = {"type": "json_object"}

-        
         return KnowledgeGraphModelConfig(
```
graphrag_sdk/models/model.py (2)

41-41: Consider preserving type safety for the stop parameter.

Changing from Optional[list[str]] to Optional[list] reduces type safety. Since stop sequences are expected to be strings, maintaining the more specific type hint would help catch type errors at development time.

```diff
-        stop: Optional[list] = None,
+        stop: Optional[list[str]] = None,
```

56-57: Remove extra blank line.

```diff
-
-
-    def __str__(self) -> str:
+
+    def __str__(self) -> str:
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8e372b5 and 018a130.

📒 Files selected for processing (16)
  • .github/workflows/test.yml (2 hunks)
  • .wordlist.txt (1 hunks)
  • README.md (3 hunks)
  • examples/movies/demo-movies.ipynb (6 hunks)
  • examples/trip/demo_orchestrator_trip.ipynb (11 hunks)
  • examples/ufc/demo-ufc.ipynb (13 hunks)
  • examples/ufc/ontology.json (5 hunks)
  • graphrag_sdk/model_config.py (4 hunks)
  • graphrag_sdk/models/azure_openai.py (7 hunks)
  • graphrag_sdk/models/gemini.py (4 hunks)
  • graphrag_sdk/models/litellm.py (5 hunks)
  • graphrag_sdk/models/model.py (4 hunks)
  • graphrag_sdk/models/ollama.py (5 hunks)
  • graphrag_sdk/models/openai.py (6 hunks)
  • tests/test_rag.py (1 hunks)
  • tests/test_streaming_response.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .wordlist.txt
🚧 Files skipped from review as they are similar to previous changes (2)
  • .github/workflows/test.yml
  • graphrag_sdk/models/openai.py
🧰 Additional context used
🪛 LanguageTool
README.md

[uncategorized] ~38-~38: A comma might be missing here.
Context: ...eployment and use cases.
To choose vendor use the prefix "specific_vendor/your_mo...

(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)

🪛 Flake8 (7.2.0)
graphrag_sdk/model_config.py

[error] 56-56: too many blank lines (2)

(E303)

graphrag_sdk/models/model.py

[error] 57-57: too many blank lines (2)

(E303)

🪛 Ruff (0.11.9)
graphrag_sdk/models/litellm.py

57-59: Use a single if statement instead of nested if statements

(SIM102)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test (openai/gpt-4.1)
🔇 Additional comments (17)
tests/test_streaming_response.py (1)

93-93: LGTM! Model name update aligns with unified LiteLLM backend.

The change from "gemini/gemini-2.0-flash" to "openai/gpt-4.1" is consistent with the broader refactoring to use a unified LiteLLM backend across the codebase.

tests/test_rag.py (1)

115-115: LGTM! Consistent model name update.

The change from "gemini/gemini-2.0-flash" to "openai/gpt-4.1" maintains consistency with other test files and the unified LiteLLM backend approach.

README.md (1)

39-41: LGTM! Documentation updates align with unified LiteLLM backend.

The model name changes from "gemini/gemini-2.0-flash" to "openai/gpt-4.1" throughout the documentation are consistent with the codebase refactoring to use a unified LiteLLM backend.

Also applies to: 71-72, 146-146

examples/movies/demo-movies.ipynb (3)

63-64: LGTM! Environment setup updated for OpenAI integration.

The change from GEMINI_API_KEY to OPENAI_API_KEY correctly reflects the model switch to "openai/gpt-4.1" and ensures users configure the appropriate credentials.


66-66: LGTM! Typo correction improves documentation quality.

Good catch fixing "Falcor" to "Falkor" for accurate brand representation.


121-121: LGTM! Model name update maintains consistency.

The change to "openai/gpt-4.1" aligns with the unified LiteLLM backend approach used throughout the codebase.

graphrag_sdk/model_config.py (2)

1-1: LGTM! Import added for deep copy functionality.

The copy module import is necessary for the deep copy operation used to avoid mutating the original model.


30-31: LGTM! Extract data model configured for structured output.

Explicitly setting the response_format to JSON ensures structured data extraction, which aligns with the specialized model configuration requirements.

examples/trip/demo_orchestrator_trip.ipynb (1)

18-18: LGTM! Clean migration to LiteModel.

The migration from OpenAiGenerativeModel to LiteModel is implemented correctly, with appropriate model naming conventions (e.g., "o3-mini" for the orchestrator).

Also applies to: 51-51, 266-266, 407-407

examples/ufc/demo-ufc.ipynb (2)

44-44: LGTM! Consistent migration to LiteModel.

The changes correctly migrate from OpenAiGenerativeModel to LiteModel, maintaining consistency with other files in the PR.

Also applies to: 112-112


144-585: Verify compatibility of the significant ontology schema changes.

The ontology structure has changed significantly:

  • "Person" entity renamed to "Event"
  • "FightEvent" renamed to "Fight"
  • New entities added: Referee, FightStatistics, RoundStatistics, SignificantStrikeBreakdown
  • Relations have been redefined with new labels and attributes

These changes could be breaking for existing data or dependent systems.

Ensure that:

  1. Existing UFC data can be migrated to the new schema
  2. Any dependent code that relies on the old entity names is updated
  3. The ontology changes are documented in release notes
graphrag_sdk/models/ollama.py (1)

17-178: Excellent refactoring to use LiteModel as the backend.

The refactoring cleanly delegates all functionality to the internal LiteModel while maintaining backward compatibility:

  • The public API remains unchanged
  • Host parameter is correctly mapped to LiteLLM's api_base
  • Properties provide access to the original values
  • All chat session operations are properly delegated

This implementation successfully unifies the backend while preserving the Ollama-specific interface.

graphrag_sdk/models/azure_openai.py (2)

36-48: LGTM: Clean delegation to LiteModel backend.

The refactoring correctly delegates to the internal LiteModel with proper azure/ prefix while maintaining API compatibility through the original model name property.


132-172: LGTM: Consistent chat session delegation pattern.

The chat session correctly delegates all operations to the internal LiteLLM chat session, simplifying the implementation and removing manual history management.

graphrag_sdk/models/gemini.py (2)

33-44: LGTM: Consistent LiteModel integration with proper prefix.

The implementation correctly creates a LiteModel with the "gemini/" prefix and maintains the original model name for API compatibility.


118-167: LGTM: Clean chat session delegation.

The chat session implementation properly delegates all operations to the internal LiteLLM chat session, following the established pattern.

examples/ufc/ontology.json (1)

1-442: LGTM: Comprehensive domain model enhancement.

The ontology restructuring significantly improves the UFC domain representation with:

  • Semantic clarity (Fighter → Person, more descriptive attribute names)
  • Enhanced Fight entity with fight-specific attributes (title_bout, weight_class, etc.)
  • New detailed statistics entities (RoundStatistics, SignificantStrikeBreakdown)
  • Well-structured relations with appropriate attributes

The JSON structure is valid and the entity/relation definitions are logically consistent for UFC data modeling.

@galshubeli galshubeli linked an issue Jun 25, 2025 that may be closed by this pull request
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
graphrag_sdk/models/litellm.py (1)

67-70: Simplify nested if statements and document the special case.

The nested if statements can be combined for better readability, and the hardcoded temperature setting should be documented.

```diff
-        if self.model_name == "gpt-4.1":
-            # Set default temperature to 0 for gpt-4.1
-            if generation_config is None:
-                generation_config = GenerativeModelConfig(temperature=0)
+        # Set default temperature to 0 for gpt-4.1 for deterministic outputs
+        if self.model_name == "gpt-4.1" and generation_config is None:
+            generation_config = GenerativeModelConfig(temperature=0)
```
🧹 Nitpick comments (1)
graphrag_sdk/models/ollama.py (1)

13-13: Fix formatting: Add blank line before class definition.

Static analysis indicates missing blank line before the class definition.

```diff
+
 class OllamaGenerativeModel(LiteModel):
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 018a130 and 25f7e77.

📒 Files selected for processing (9)
  • .github/workflows/test.yml (3 hunks)
  • graphrag_sdk/models/azure_openai.py (5 hunks)
  • graphrag_sdk/models/gemini.py (2 hunks)
  • graphrag_sdk/models/litellm.py (6 hunks)
  • graphrag_sdk/models/model.py (2 hunks)
  • graphrag_sdk/models/ollama.py (3 hunks)
  • graphrag_sdk/models/openai.py (5 hunks)
  • pyproject.toml (3 hunks)
  • tests/test_multi_agent.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/test.yml
🧰 Additional context used
🪛 Flake8 (7.2.0)
graphrag_sdk/models/ollama.py

[error] 13-13: expected 2 blank lines, found 1

(E302)

graphrag_sdk/models/model.py

[error] 58-58: too many blank lines (2)

(E303)

tests/test_multi_agent.py

[error] 12-12: 'graphrag_sdk.GenerativeModelConfig' imported but unused

(F401)

🪛 Ruff (0.11.9)
graphrag_sdk/models/litellm.py

67-69: Use a single if statement instead of nested if statements

(SIM102)

tests/test_multi_agent.py

14-14: graphrag_sdk.GenerativeModelConfig imported but unused

Remove unused import: graphrag_sdk.GenerativeModelConfig

(F401)

🪛 Pylint (3.3.7)
graphrag_sdk/models/model.py

[refactor] 36-36: Too many arguments (7/5)

(R0913)


[refactor] 36-36: Too many positional arguments (7/5)

(R0917)

🔇 Additional comments (21)
tests/test_multi_agent.py (1)

9-9: LGTM! Clean transition to unified LiteLLM backend.

The refactoring from GeminiGenerativeModel to LiteModel aligns perfectly with the PR objectives to consolidate all generative models under a unified LiteLLM backend.

Also applies to: 191-191

pyproject.toml (2)

3-3: Appropriate version bump for the refactoring.

The minor version bump to 0.8.0 correctly reflects the significant architectural changes to use unified LiteLLM backend.


26-28: Excellent dependency restructuring for LiteLLM integration.

The promotion of litellm and ollama to core dependencies, combined with the fine-grained extras structure, provides users with flexible control over LiteLLM features while supporting the unified backend architecture.

Also applies to: 30-63

graphrag_sdk/models/model.py (4)

21-21: Good alignment with OpenAI parameter naming conventions.

The change from max_tokens to max_completion_tokens aligns with OpenAI's parameter naming and improves consistency across different LLM providers.

Also applies to: 41-41, 49-49


44-44: Excellent addition of flexible parameter support.

The **kwargs parameter and dynamic attribute storage allows the configuration to support arbitrary provider-specific parameters while maintaining type safety for common parameters.

Also applies to: 53-55


74-74: Clean and maintainable serialization methods.

The use of vars(self) in to_json() and the simplified from_json() method create a robust serialization pattern that automatically handles both predefined and dynamic attributes.

Also applies to: 78-79
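
The two patterns praised above — dynamic attributes via `**kwargs` and `vars(self)`-based serialization — can be sketched roughly like this (class shape, attribute names, and the None-filtering are assumptions, not the SDK's actual code):

```python
class GenerativeModelConfig:
    """Sketch of the described config pattern; details are assumptions."""

    def __init__(self, temperature=None, max_completion_tokens=None, **kwargs):
        self.temperature = temperature
        self.max_completion_tokens = max_completion_tokens
        # Provider-specific extras become plain attributes.
        for key, value in kwargs.items():
            setattr(self, key, value)

    def to_json(self) -> dict:
        # vars(self) picks up predefined and dynamic attributes alike.
        return {k: v for k, v in vars(self).items() if v is not None}

    @staticmethod
    def from_json(data: dict) -> "GenerativeModelConfig":
        return GenerativeModelConfig(**data)


cfg = GenerativeModelConfig(temperature=0, top_k=40)
print(cfg.to_json())  # {'temperature': 0, 'top_k': 40}
```

Because `from_json` just forwards the dict back into `__init__`, round-tripping works for arbitrary provider-specific keys without any per-field code.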


103-103: Consistent removal of OutputMethod parameter.

The removal of the output_method parameter from the abstract send_message method aligns with the elimination of the OutputMethod enum throughout the codebase.

graphrag_sdk/models/openai.py (4)

22-22: Good fix for mutable default argument.

Changing the default value from GenerativeModelConfig() to None prevents the shared mutable state issue that was identified in past reviews.
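
The hazard and the fix can be sketched as follows (class names follow the review; bodies are assumptions):

```python
class GenerativeModelConfig:
    def __init__(self):
        self.stop = []  # mutable state


class BadModel:
    # Anti-pattern: the default is evaluated once at definition time,
    # so every instance created without a config shares the same object.
    def __init__(self, config=GenerativeModelConfig()):
        self.config = config


class GoodModel:
    # Fix: default to None and create a fresh config per instance.
    def __init__(self, config=None):
        self.config = config if config is not None else GenerativeModelConfig()


a, b = BadModel(), BadModel()
assert a.config is b.config          # shared object — the bug

c, d = GoodModel(), GoodModel()
assert c.config is not d.config      # independent objects — the fix
```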


33-42: Clever model name handling for LiteLLM compatibility.

The approach of internally prefixing with "openai/" for LiteLLM while exposing the original model name through a property maintains API compatibility while supporting the unified backend.

Also applies to: 44-47
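
A minimal sketch of the naming scheme described here (the class shape is an assumption; only the prefixing/property idea comes from the review):

```python
class OpenAiGenerativeModel:
    """Sketch: internal LiteLLM-prefixed name vs. public model name."""

    def __init__(self, model_name: str = "gpt-4.1"):
        self._internal_model_name = model_name   # name as the user supplied it
        self.model = f"openai/{model_name}"      # LiteLLM's provider/model routing format

    @property
    def model_name(self) -> str:
        # External callers keep seeing the clean, unprefixed name.
        return self._internal_model_name


m = OpenAiGenerativeModel()
assert m.model == "openai/gpt-4.1"
assert m.model_name == "gpt-4.1"
```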


68-68: Consistent serialization with clean model names.

Returning the original model name without the openai/ prefix in serialization maintains clean external API while using LiteLLM internally.


93-110: Excellent delegation to LiteModelChatSession.

The simplified OpenAiChatSession that inherits all functionality from LiteModelChatSession eliminates code duplication while maintaining the specific type for API compatibility.
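
The shape of that delegation is roughly the following (class names per the review; method bodies are placeholder assumptions):

```python
class LiteModelChatSession:
    """Owns the real logic: history management and message sending."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        self.history: list[dict] = []

    def send_message(self, text: str) -> str:
        self.history.append({"role": "user", "content": text})
        reply = f"[{self.model_name}] echo: {text}"  # stand-in for the LiteLLM call
        self.history.append({"role": "assistant", "content": reply})
        return reply


class OpenAiChatSession(LiteModelChatSession):
    """Kept only so the specific type survives for isinstance checks and
    type hints; all behavior is inherited."""


session = OpenAiChatSession("openai/gpt-4.1")
session.send_message("hello")
assert isinstance(session, LiteModelChatSession)
assert len(session.history) == 2
```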

graphrag_sdk/models/litellm.py (3)

5-11: Robust handling of optional Ollama dependency.

The try-catch import pattern with availability flag provides clean error handling when Ollama client is not installed.
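
The pattern looks roughly like this (the import target and error-message wording are assumptions):

```python
# Conditional import with an availability flag.
try:
    from ollama import Client as OllamaClient
    OLLAMA_AVAILABLE = True
except ImportError:
    OllamaClient = None
    OLLAMA_AVAILABLE = False


def make_ollama_client(host: str):
    # Fail fast with an actionable message instead of a bare NameError later.
    if not OLLAMA_AVAILABLE:
        raise ImportError(
            "Ollama client is not installed. Install it with: pip install ollama"
        )
    return OllamaClient(host=host)
```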


89-89: Consistent parameter naming with OpenAI conventions.

The change from max_tokens to max_completion_tokens aligns with the parameter name updates throughout the codebase.


225-226: Clean parameter merging for LiteLLM calls.

The direct unpacking of generation_config.to_json() and additional_params provides a clean way to merge all parameters for the completion call without complex adjustment logic.

Also applies to: 251-252
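
The merging behavior can be illustrated with plain dicts standing in for `generation_config.to_json()` and `additional_params` (the helper name is hypothetical):

```python
def build_completion_kwargs(model, messages, generation_config, additional_params):
    # Later ** unpacking wins, so additional_params overrides the generation config.
    return {
        "model": model,
        "messages": messages,
        **generation_config,
        **(additional_params or {}),
    }


kwargs = build_completion_kwargs(
    model="openai/gpt-4.1",
    messages=[{"role": "user", "content": "hi"}],
    generation_config={"temperature": 0, "max_completion_tokens": 256},
    additional_params={"temperature": 0.7, "api_base": "http://localhost:4000"},
)
assert kwargs["temperature"] == 0.7            # override applied
assert kwargs["max_completion_tokens"] == 256  # config value preserved
```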

graphrag_sdk/models/ollama.py (2)

13-61: Excellent refactoring to LiteModel architecture.

The conversion from direct Ollama client usage to LiteModel inheritance is well-implemented. The approach of converting model names to LiteLLM format (ollama/{model_name}) while preserving the original name through a property maintains backward compatibility and provides a clean abstraction.


107-124: Clean delegation to LiteModelChatSession.

The simplified chat session implementation that delegates all functionality to the parent class is a good approach that eliminates code duplication while maintaining the same interface.

graphrag_sdk/models/azure_openai.py (3)

11-96: Consistent refactoring implementation.

The refactoring to inherit from LiteModel follows the same clean pattern as the other model implementations. The Azure-specific model name conversion and parameter handling via **kwargs is well-implemented.


98-115: Proper delegation to parent chat session.

The simplified chat session implementation correctly delegates to LiteModelChatSession, maintaining consistency with the other model implementations.


22-22: Verify the default model name "gpt-4.1".

The default model name "gpt-4.1" doesn't appear to be a valid Azure OpenAI model name. Standard Azure OpenAI models typically use names like "gpt-4", "gpt-4-turbo", "gpt-35-turbo", etc.

What are the current available model names in Azure OpenAI Service?
graphrag_sdk/models/gemini.py (2)

10-91: Consistent and clean LiteModel integration.

The refactoring perfectly follows the established pattern from the other model implementations. The conversion to gemini/{model_name} format, preservation of the original model name via property, and delegation to LiteModel is implemented correctly and maintains backward compatibility.


94-111: Excellent simplification of chat session.

The chat session implementation correctly inherits from LiteModelChatSession and delegates all functionality to the parent class, eliminating code duplication while maintaining the same interface.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
README.md (5)

26-29: Clarify installation command

Add --upgrade or a version specifier so users with an older cached wheel don’t miss the new LiteLLM integration:

-pip install graphrag_sdk
+pip install --upgrade "graphrag_sdk>=0.8.0"

31-31: Drop the trailing period in the heading

markdownlint (MD026) flags punctuation at the end of a heading.
### Configure Credentials is sufficient.


34-38: Grammar & consistency tweaks

  1. Missing comma: “To choose a vendor, use the prefix …”.
  2. Keep the prefix pattern consistent across bullets. E.g.:
-* [Google](https://makersuite.google.com/app/apikey) Recommended model:`gemini-2.0-flash`
+* [Google](https://makersuite.google.com/app/apikey) Recommended model:`google/gemini-2.0-flash`

Please verify the exact strings LiteLLM expects for each provider.


67-68: Streamline the explanatory comment

-# Model - vendor: openai, model: gpt-4.1 -> openai/gpt-4.1
+# OpenAI GPT-4.1 model

The shorter comment conveys the same information without the arrow notation.


142-143: Avoid magic strings by reusing a constant

If the recommended model changes, you’ll need to update multiple snippets. Consider defining once:

MODEL_NAME = "openai/gpt-4.1"
model = LiteModel(model_name=MODEL_NAME)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25f7e77 and 6a0a6bd.

📒 Files selected for processing (2)
  • README.md (3 hunks)
  • pyproject.toml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • pyproject.toml
🧰 Additional context used
🪛 LanguageTool
README.md

[uncategorized] ~34-~34: A comma might be missing here.
Context: ...eployment and use cases.
To choose vendor use the prefix "specific_vendor/your_mo...

(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)

🪛 markdownlint-cli2 (0.17.2)
README.md

31-31: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test (openai/gpt-4.1)

@galshubeli galshubeli requested a review from swilly22 June 25, 2025 11:28

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
graphrag_sdk/models/litellm.py (1)

67-71: Address the nested if statement issue from previous review.

This is a duplicate of the previous review comment that wasn't addressed. The nested if statements should be simplified and the special case documented.

Apply this diff to fix the nested if statements:

-        if self._internal_model_name == "gpt-4.1":
-            # Set default temperature to 0 for gpt-4.1
-            if generation_config is None:
-                generation_config = GenerativeModelConfig(temperature=0)
+        # Set default temperature to 0 for gpt-4.1 for deterministic outputs
+        if self._internal_model_name == "gpt-4.1" and generation_config is None:
+            generation_config = GenerativeModelConfig(temperature=0)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6a0a6bd and d0e5d78.

📒 Files selected for processing (2)
  • graphrag_sdk/models/litellm.py (6 hunks)
  • tests/test_multi_agent.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_multi_agent.py
🧰 Additional context used
🪛 Ruff (0.11.9)
graphrag_sdk/models/litellm.py

67-69: Use a single if statement instead of nested if statements

(SIM102)


125-125: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (11)
graphrag_sdk/models/litellm.py (11)

5-11: Good defensive programming with optional Ollama import.

The conditional import pattern with availability flag is well-implemented, allowing graceful fallback when Ollama is not installed.


34-34: Good addition of flexible parameter passing.

The additional_params parameter enables provider-specific configurations while maintaining a clean interface.


56-56: Proper separation of internal and external model names.

Using _internal_model_name to store the parsed model name while keeping the full provider/model format in self.model is a good design choice.


60-62: Robust error handling for optional dependency.

The availability check with descriptive error message guides users to install the required package.


74-75: Clean initialization of additional parameters.

The null-safe initialization of additional_params with empty dict fallback is well-implemented.


76-79: Well-designed property for model name access.

The property provides clean access to the internal model name while maintaining encapsulation.


94-94: Correct parameter name update.

The change from max_tokens to max_completion_tokens aligns with the updated GenerativeModelConfig interface.


215-215: Simplified method signature improves usability.

Removing the output_method parameter simplifies the interface while maintaining functionality through flexible parameter passing.


230-232: Clean parameter merging with proper precedence.

The parameter merging allows additional_params to override generation config values, providing flexibility while maintaining backward compatibility.


256-258: Consistent parameter handling in streaming method.

The streaming method uses the same parameter merging pattern as the non-streaming version, ensuring consistency.


31-31: Verify the default model name "gpt-4.1".

Confirm which GPT-4 model identifiers the OpenAI API currently supports and whether "gpt-4.1" is among them.

@galshubeli galshubeli merged commit 191fab9 into main Jun 25, 2025
6 checks passed
@galshubeli galshubeli deleted the fix-config branch June 25, 2025 13:07
@coderabbitai coderabbitai bot mentioned this pull request Aug 13, 2025


Development

Successfully merging this pull request may close these issues.

[Integration] Config & Provider Fixes

3 participants