feat: OpenAPI updates: nullable/oneOf, think/truncate/shift, remote metadata #144
Conversation

Walkthrough
Expanded OpenAPI schemas in `src/libs/Ollama/openapi.yaml`: added `nullable` and `oneOf` options for request fields, introduced `think`/`truncate`/`shift` controls, made `CreateModelRequest.modelfile` optional, added `remote_model`/`remote_host` metadata across responses and model schemas, and added `index`/`tool_name` fields to tool-related structures.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Client
    participant API as Ollama API
    participant RM as Remote Model (optional)
    rect rgb(240,248,255)
        note over C,API: Chat completion with think/truncate/shift controls
        C->>API: POST /chat/completions {think, truncate?, shift?, format?}
        alt think=true or level
            API->>API: Enable reasoning mode (per level)
        else think unset/false
            API->>API: Standard generation
        end
        opt shift=true
            API->>API: Adjust context window (shift)
        end
        opt format=json or JSON Schema
            API->>API: Constrain output to format/schema
        end
    end
    rect rgb(245,255,240)
        alt remote_model configured
            API->>RM: Proxy/route request
            RM-->>API: Generated content + metadata
            API-->>C: Response {remote_model, remote_host}
        else local model
            API-->>C: Response {remote_model: null, remote_host: null}
        end
    end
```
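As a concrete illustration of the new request controls in the flow above, a chat request body might combine them like this (the model name and message are placeholders, and exact field semantics are as described in the spec changes under review):

```json
{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "think": "high",
  "truncate": true,
  "shift": false,
  "format": "json"
}
```

A local model would answer with `remote_model` and `remote_host` set to null; a federated model would populate both.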
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ❌ 1 failed check (warning), ✅ 2 checks passed.
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/libs/Ollama/openapi.yaml (4)
313-330: Unify think/truncate/shift and clarify behavior when combined
- Extract a shared components schema (e.g., ThinkOption) and reference it in both request types.
- Document precedence: when truncate and shift are both true, which applies first? What happens on overflow?
Proposed shared schema (add under `components.schemas`):

```yaml
ThinkOption:
  oneOf:
    - type: boolean
    - type: string
      enum: [high, medium, low]
  description: Controls thinking behavior.
```

Then replace `think` with:

```yaml
think:
  $ref: '#/components/schemas/ThinkOption'
  nullable: true
```

Please also add a short note in each description clarifying truncate vs shift precedence.
Also applies to: 594-611
716-719: Tool messages: consider making `tool_name` required when role = "tool"
OAS 3.0 can't express conditional required directly; you can approximate with oneOf variants (tool vs non-tool) or keep as-is and document it clearly.
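A sketch of the oneOf approximation, for reference (the `ToolMessage`/`NonToolMessage` titles and the exact role enum are illustrative, not taken from the spec under review):

```yaml
Message:
  oneOf:
    - title: ToolMessage
      type: object
      required: [role, content, tool_name]
      properties:
        role:
          type: string
          enum: [tool]
        content:
          type: string
        tool_name:
          type: string
    - title: NonToolMessage
      type: object
      required: [role, content]
      properties:
        role:
          type: string
          enum: [system, user, assistant]
        content:
          type: string
```

The trade-off is that codegen output changes shape (a union type instead of one class), so documenting the constraint may be the lighter option.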
769-772: index placement: put index on ToolCall (list element), not ToolCallFunction; summary inconsistency
- The description says “index of the tool call in the list of tool calls”, so it belongs to ToolCall, not ToolCallFunction.
- AI summary mentions ToolCallFunctionArgs.index was added, but it’s not present in code.
Apply these diffs:
```diff
@@ components.schemas.ToolCall.properties
   function:
     $ref: '#/components/schemas/ToolCallFunction'
+  index:
+    type: integer
+    description: The index of the tool call in the list of tool calls
@@ components.schemas.ToolCallFunction.properties
-  index:
-    type: integer
-    description: The index of the tool call in the list of tool calls
```

If you do need per-argument indexing, add it under ToolCallFunctionArgs with a clear description; otherwise omit to avoid confusion.
Also applies to: 774-776
289-299: Deduplicate format schema via `$ref`
Replace the inline `oneOf` in both `GenerateCompletionRequest.properties.format` and `GenerateChatCompletionRequest.properties.format` with an `allOf` pointing to `#/components/schemas/ResponseFormat`, preserving `nullable` and simplifying the description.

```diff
@@ components.schemas.GenerateCompletionRequest.properties
-  format:
-    oneOf:
-      - enum:
-          - json
-        type: string
-        description: Enable JSON mode
-        default: json
-      - type: object
-        description: JSON schema object for structured output validation
-    description: "The format to return a response in. Can be:\n- \"json\" string to enable JSON mode\n- JSON schema object for structured output validation\n"
-    nullable: true
+  format:
+    allOf:
+      - $ref: '#/components/schemas/ResponseFormat'
+    nullable: true
+    description: "The format to return a response in."
@@ components.schemas.GenerateChatCompletionRequest.properties
-  format:
-    oneOf:
-      - enum:
-          - json
-        type: string
-        description: Enable JSON mode
-        default: json
-      - type: object
-        description: JSON schema object for structured output validation
-    description: "The format to return a response in. Can be:\n- \"json\" string to enable JSON mode\n- JSON schema object for structured output validation\n"
-    nullable: true
+  format:
+    allOf:
+      - $ref: '#/components/schemas/ResponseFormat'
+    nullable: true
+    description: "The format to return a response in."
```
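For that `$ref` to resolve, a `ResponseFormat` component would also need to be added; a minimal sketch consistent with the inline definitions being replaced (the component name is the suggestion's, not yet in the spec):

```yaml
ResponseFormat:
  description: "The format to return a response in. Can be:\n- \"json\" string to enable JSON mode\n- JSON schema object for structured output validation\n"
  oneOf:
    - type: string
      enum: [json]
      description: Enable JSON mode
      default: json
    - type: object
      description: JSON schema object for structured output validation
```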
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (42)
All of the following are excluded by `!**/generated/**`:
- `src/libs/Ollama/Generated/Ollama..JsonSerializerContext.g.cs`
- `src/libs/Ollama/Generated/Ollama.ChatClient.GenerateChatCompletion.g.cs`
- `src/libs/Ollama/Generated/Ollama.CompletionsClient.GenerateCompletion.g.cs`
- `src/libs/Ollama/Generated/Ollama.IChatClient.GenerateChatCompletion.g.cs`
- `src/libs/Ollama/Generated/Ollama.ICompletionsClient.GenerateCompletion.g.cs`
- `src/libs/Ollama/Generated/Ollama.IModelsClient.CreateModel.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateChatCompletionRequestFormatEnum.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateChatCompletionRequestFormatEnumNullable.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateChatCompletionRequestThink.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateChatCompletionRequestThinkNullable.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateCompletionRequestFormatEnum.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateCompletionRequestFormatEnumNullable.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateCompletionRequestThink.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.GenerateCompletionRequestThinkNullable.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonConverters.OneOf2.g.cs`
- `src/libs/Ollama/Generated/Ollama.JsonSerializerContextTypes.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.CreateModelRequest.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.CreateModelRequestAdapters.Json.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.CreateModelRequestAdapters.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.CreateModelRequestFiles.Json.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.CreateModelRequestFiles.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.CreateModelRequestParameters.Json.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.CreateModelRequestParameters.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateChatCompletionRequest.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateChatCompletionRequestFormatEnum.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateChatCompletionRequestFormatEnum2.Json.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateChatCompletionRequestFormatEnum2.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateChatCompletionRequestThink.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateChatCompletionResponse.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateCompletionRequest.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateCompletionRequestFormatEnum.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateCompletionRequestFormatEnum2.Json.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateCompletionRequestFormatEnum2.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateCompletionRequestThink.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.GenerateCompletionResponse.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.Message.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.Model.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.ModelInfo.g.cs`
- `src/libs/Ollama/Generated/Ollama.Models.ToolCallFunction.g.cs`
- `src/libs/Ollama/Generated/Ollama.ModelsClient.CreateModel.g.cs`
- `src/libs/Ollama/Generated/OneOf.2.Json.g.cs`
- `src/libs/Ollama/Generated/OneOf.2.g.cs`
📒 Files selected for processing (1)
`src/libs/Ollama/openapi.yaml` (12 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test / Build, test and publish
🔇 Additional comments (2)
src/libs/Ollama/openapi.yaml (2)
823-823: Make `modelfile` optional: LGTM
This aligns with remote create and blob-based workflows.
836-840: Avoid reserved property name `from`
Rename the property to `source` (or `from_model`) to prevent codegen issues in languages where `from` is reserved (Python, C#, Kotlin).

```diff
-        from:
+        source:
```

If renaming isn't feasible now, document this in README/SDKs and validate with common generators.
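The Python case is easy to check with the standard library; this only demonstrates Python, since reserved words vary per language:

```python
import keyword

# "from" cannot be used as a generated attribute name in Python,
# while "source" is safe.
print(keyword.iskeyword("from"))    # True
print(keyword.iskeyword("source"))  # False
```

Most generators work around this by mangling the name (e.g. a trailing underscore), but that mangling itself diverges across SDKs, which is why renaming at the spec level is cleaner.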
```yaml
        remote_model:
          type: string
          description: Name of the upstream remote model that generated the response (when using federated models)
          nullable: true
        remote_host:
          type: string
          description: URL of the upstream Ollama host that served the request (when proxying to remote instances)
          nullable: true
      description: The response class for the generate endpoint.
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Potential leakage of internal hostnames via remote_host; mark readOnly and consider redaction
Exposing remote_host may leak internal topology. Mark fields readOnly and consider returning sanitized hostnames or omit by default.
Apply readOnly to all remote_* fields:

```diff
@@ components.schemas.GenerateCompletionResponse.properties
   remote_model:
     type: string
     description: Name of the upstream remote model that generated the response (when using federated models)
     nullable: true
+    readOnly: true
   remote_host:
     type: string
     description: URL of the upstream Ollama host that served the request (when proxying to remote instances)
     nullable: true
+    readOnly: true
@@ components.schemas.GenerateChatCompletionResponse.properties
   remote_model:
     type: string
     description: Name of the upstream remote model that generated the response (when using federated models)
     nullable: true
+    readOnly: true
   remote_host:
     type: string
     description: URL of the upstream Ollama host that served the request (when proxying to remote instances)
     nullable: true
+    readOnly: true
@@ components.schemas.Model.properties
   remote_model:
     type: string
     description: Name of the upstream remote model (when model is federated from another Ollama instance)
     nullable: true
+    readOnly: true
   remote_host:
     type: string
     description: URL of the upstream Ollama host (when model is federated from another instance)
     nullable: true
+    readOnly: true
@@ components.schemas.ModelInfo.properties
   remote_model:
     type: string
     description: Name of the upstream remote model (when model is federated from another Ollama instance)
     nullable: true
+    readOnly: true
   remote_host:
     type: string
     description: URL of the upstream Ollama host (when model is federated from another instance)
     nullable: true
+    readOnly: true
```

Also confirm these are not logged or echoed back to multi-tenant clients by default.
Also applies to: 665-673, 912-920, 1095-1102
🏁 Script executed:

```shell
#!/bin/bash
# Locate the OpenAPI spec file
fd --hidden --ignore-case openapi.yaml
# Inspect GenerateCompletionResponse around lines 543-551
sed -n '530,580p' src/libs/Ollama/openapi.yaml
# Inspect GenerateChatCompletionResponse around lines 665-673
sed -n '650,700p' src/libs/Ollama/openapi.yaml
# Inspect Model schema around lines 912-920
sed -n '900,950p' src/libs/Ollama/openapi.yaml
# Inspect ModelInfo schema around lines 1095-1102
sed -n '1080,1120p' src/libs/Ollama/openapi.yaml
```

Length of output: 6940
🏁 Script executed:

```shell
#!/bin/bash
grep -R "GenerateChatCompletionResponse" -n src/libs/Ollama/openapi.yaml || echo "Not found"
```

Length of output: 118
🏁 Script executed:

```shell
#!/bin/bash
# Inspect GenerateChatCompletionResponse around its definition
sed -n '595,650p' src/libs/Ollama/openapi.yaml
# Search for a 'Model' component schema
grep -nP "^[[:space:]]*Model:$" src/libs/Ollama/openapi.yaml || echo "Model schema not found"
```

Length of output: 2190
🏁 Script executed:

```shell
#!/bin/bash
# Inspect 'Model' schema around its definition
sed -n '880,940p' src/libs/Ollama/openapi.yaml
# Inspect 'ModelInformation' schema around its definition
sed -n '940,1020p' src/libs/Ollama/openapi.yaml
```

Length of output: 4780
🏁 Script executed:

```shell
#!/bin/bash
# List all occurrences of remote_model for context
grep -R -n "remote_model:" src/libs/Ollama/openapi.yaml
# Inspect GenerateChatCompletionResponse for remote fields
sed -n '650,740p' src/libs/Ollama/openapi.yaml
```

Length of output: 3209
Add `readOnly` to actual `remote_*` fields
The `remote_model` and `remote_host` properties exist in these schemas (GenerateCompletionResponse, GenerateChatCompletionRequest, Message, and Model) and should include `readOnly: true`.
Also sanitize or omit `remote_host` by default to avoid leaking internal hostnames.
🤖 Prompt for AI Agents

```
In src/libs/Ollama/openapi.yaml around lines 543 to 551, the remote_model and
remote_host properties must be marked as response-only and not client-settable:
add "readOnly: true" to both properties in all affected schemas
(GenerateCompletionResponse, GenerateChatCompletionRequest, Message, and Model)
so they are documented as server-generated; additionally sanitize or omit
remote_host by default to avoid leaking internal hostnames. Either remove it
from default responses or set it to null/empty by default, and ensure any
returned value is sanitized (no internal hostnames) before emitting.
```