avoid deepcopy exception with OllamaGenerativeModel #128
Conversation
Walkthrough

Adds a fallback in KnowledgeGraphModelConfig.with_model: if deepcopy of a model fails and an Ollama client is present, it reconstructs the model via OllamaGenerativeModel.from_json(model.to_json()), then sets response_format to {"type": "json_object"} before returning the config. No public API signatures changed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Caller
    participant K as KnowledgeGraphModelConfig
    participant M as Model
    participant O as OllamaGenerativeModel (lazy)
    Caller->>K: with_model(model)
    Note over K: Try to deepcopy model
    K->>M: copy.deepcopy(model)
    alt deepcopy succeeds
        K->>K: set generation_config.response_format = {"type":"json_object"}
        K-->>Caller: return updated config
    else deepcopy raises TypeError
        alt model has ollama_client
            Note over K,O: Lazy import and reconstruct from JSON
            K->>M: model.to_json()
            K->>O: OllamaGenerativeModel.from_json(json)
            K->>K: set generation_config.response_format = {"type":"json_object"}
            K-->>Caller: return updated config
        else no ollama_client
            K-->>Caller: re-raise TypeError
        end
    end
```
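The flow above can be sketched in plain Python. This is a minimal illustration of the try-deepcopy-then-rebuild-from-JSON pattern, not the SDK's actual code: `FakeOllamaModel` and `copy_model` are hypothetical stand-ins for `OllamaGenerativeModel` and the logic inside `with_model`.

```python
import copy


class FakeOllamaModel:
    """Hypothetical stand-in for OllamaGenerativeModel (the real class
    lives in graphrag_sdk.models.ollama); names here are illustrative."""

    def __init__(self, model_name):
        self.model_name = model_name
        self.ollama_client = object()  # the real client holds live resources

    def __deepcopy__(self, memo):
        # Simulate the failure mode the PR works around.
        raise TypeError("cannot deepcopy ollama client")

    def to_json(self):
        return {"model_name": self.model_name}

    @staticmethod
    def from_json(data):
        return FakeOllamaModel(data["model_name"])


def copy_model(model):
    """Mirrors the described fallback: try deepcopy first, and on
    TypeError rebuild the model from its JSON representation."""
    try:
        return copy.deepcopy(model)
    except TypeError:
        if getattr(model, "ollama_client", None) is not None:
            return type(model).from_json(model.to_json())
        raise


clone = copy_model(FakeOllamaModel("llama3"))
print(clone.model_name)  # llama3
```

The key property is that the caller gets an independent model object either way; for Ollama models it is a fresh instance rebuilt from serialized config rather than a byte-for-byte copy.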
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 1
🧹 Nitpick comments (3)
graphrag_sdk/model_config.py (3)
52-59: Preserve traceback; fix Ruff TRY201; prefer absolute import.

- Use bare `raise` to keep the original stack trace.
- Use absolute import for consistency with the module's other imports.

Apply:

```diff
-        except TypeError as te:
-            if getattr(model, "ollama_client", None) is not None:
-                from .models.ollama import OllamaGenerativeModel
+        except TypeError:
+            if getattr(model, "ollama_client", None) is not None:
+                from graphrag_sdk.models.ollama import OllamaGenerativeModel
                 extract_data_model = OllamaGenerativeModel.from_json(model.to_json())
             else:
-                raise te
+                raise
```
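The effect of a bare `raise` can be demonstrated in isolation. This is a generic sketch (function names are illustrative, not from the SDK): the re-raised exception keeps the traceback of the frame where it originated.

```python
import traceback


def failing_deepcopy():
    # Stands in for the deepcopy call that raises in the SDK.
    raise TypeError("cannot deepcopy client")


def fallback():
    try:
        failing_deepcopy()
    except TypeError:
        # Bare raise re-raises the active exception with its original
        # traceback, so the failing_deepcopy frame stays visible.
        raise


try:
    fallback()
except TypeError:
    tb = traceback.format_exc()

print("failing_deepcopy" in tb)  # True
```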
52-59: Prefer type check over attribute sentinel.

Duck-typing on `ollama_client` risks false positives. After the lazy import, check `isinstance(model, OllamaGenerativeModel)` and otherwise re-raise.

Example:

```diff
-        except TypeError:
-            if getattr(model, "ollama_client", None) is not None:
-                from graphrag_sdk.models.ollama import OllamaGenerativeModel
-                extract_data_model = OllamaGenerativeModel.from_json(model.to_json())
-            else:
-                raise
+        except TypeError:
+            from graphrag_sdk.models.ollama import OllamaGenerativeModel
+            if isinstance(model, OllamaGenerativeModel):
+                extract_data_model = OllamaGenerativeModel.from_json(model.to_json())
+            else:
+                raise
```
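The false-positive risk is easy to show with two toy classes (both hypothetical; `OllamaGenerativeModel` here is a stand-in, not the SDK class): the attribute check matches any object carrying `ollama_client`, while `isinstance` only matches the intended type.

```python
class OllamaGenerativeModel:
    """Stand-in for the real SDK class, for illustration only."""

    def __init__(self):
        self.ollama_client = object()


class UnrelatedModel:
    """A different model type that also happens to expose an
    ollama_client attribute, tripping the duck-typed check."""

    def __init__(self):
        self.ollama_client = object()


m = UnrelatedModel()
duck_match = getattr(m, "ollama_client", None) is not None  # True: false positive
type_match = isinstance(m, OllamaGenerativeModel)           # False: correctly rejected
print(duck_match, type_match)  # True False
```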
60-60: Confirm provider support for response_format=json_object.

If `model` is Ollama-based, ensure this flag is honored or gracefully ignored; otherwise extraction may fail. Add a small integration test that calls `with_model()` on a remote Ollama model and executes one extraction turn.

I can draft a minimal test exercising remote host preservation and `with_model()` behavior if helpful.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
graphrag_sdk/model_config.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
graphrag_sdk/model_config.py (1)
graphrag_sdk/models/ollama.py (3)
OllamaGenerativeModel (13-104), from_json (88-104), to_json (74-85)
🪛 Ruff (0.12.2)
graphrag_sdk/model_config.py
59-59: Use raise without specifying exception name
Remove exception name
(TRY201)
Closes #123
Summary by CodeRabbit
Bug Fixes
- Avoid a deepcopy TypeError when configuring a knowledge graph with an Ollama-backed model, by rebuilding the model from its JSON representation.
Chores