Commit ce63003

Update 2022-07-21-KServe-0.9-release.md
Signed-off-by: Dan Sun <[email protected]>
1 parent 3f7ebd0 commit ce63003

File tree: 1 file changed (+10 -10 lines)


docs/blog/articles/2022-07-21-KServe-0.9-release.md

Lines changed: 10 additions & 10 deletions
@@ -17,8 +17,8 @@ KServe has the unique strength to build the distributed inference graph with its
The graph router is deployed behind an HTTP endpoint and can be scaled dynamically based on request volume. The InferenceGraph supports four different types of routing nodes: **Sequence**, **Switch**, **Ensemble**, **Splitter**. An illustrative graph spec follows the list below.
-- **Sequence Node**: It allows users to define multiple Steps with InferenceServices or Nodes as routing targets in a sequence. The Steps are executed in sequence and the request/response from the previous step and be passed to the next step as input based on configuration.
-- **Switch Node**: It allows users to define routing conditions and select a step to execute if it matches the condition. The response is returned as soon as it finds the first step that matches the condition. If no condition is matched, the graph returns the original request.
+- **Sequence Node**: It allows users to define multiple `Steps` with `InferenceServices` or `Nodes` as routing targets in a sequence. The `Steps` are executed in sequence, and the request/response from the previous step can be passed to the next step as input based on configuration.
+- **Switch Node**: It allows users to define routing conditions and select a `Step` to execute if it matches the condition. The response is returned as soon as it finds the first step that matches the condition. If no condition is matched, the graph returns the original request.
- **Ensemble Node**: A model ensemble requires scoring each model separately and then combining the results into a single prediction response. You can then use different combination methods to produce the final result. Multiple classification trees, for example, are commonly combined using a "majority vote" method. Multiple regression trees are often combined using various averaging techniques.
- **Splitter Node**: It allows users to split the traffic to multiple targets using a weighted distribution.
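To make the node types concrete, here is a minimal sketch of an `InferenceGraph`, modeled on the image-pipeline tutorial linked in the next hunk. The graph name and the two `InferenceService` names are illustrative assumptions, not part of the release notes.

```yaml
# A minimal sketch, assuming two InferenceServices named
# cat-dog-classifier and dog-breed-classifier already exist
# (illustrative names, modeled on the image-pipeline tutorial).
apiVersion: serving.kserve.io/v1alpha1
kind: InferenceGraph
metadata:
  name: dog-breed-pipeline
spec:
  nodes:
    root:
      routerType: Sequence                 # run the steps below in order
      steps:
        - serviceName: cat-dog-classifier  # step 1: classify cat vs dog
          name: cat_dog_classifier
        - serviceName: dog-breed-classifier
          name: dog_breed_classifier
          data: $request                   # forward the original request to this step
          # step 2 runs only when step 1 predicted "dog"
          condition: "[@this].#(predictions.0==\"dog\")"
```

Once applied, the graph router exposes a single HTTP endpoint: a client posts one request to the graph, and the routing between the two services happens server side.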

@@ -63,7 +63,8 @@ spec:
          data: $request
          condition: "[@this].#(predictions.0==\"dog\")"
```
-Currently the `Serverless` deployment mode is supported with inference graphs. You can try it out following the [tutorial](https://kserve.github.io/website/master/modelserving/inference_graph/image_pipeline/) here.
+
+Currently `InferenceGraph` is supported with the `Serverless` deployment mode. You can try it out following the [tutorial](https://kserve.github.io/website/master/modelserving/inference_graph/image_pipeline/).
## InferenceService API for ModelMesh
@@ -83,7 +84,7 @@ storage:
    parameters: # Parameters to override the default values inside the common secret.
      bucket: example-models
```
-Learn more [here](https://kserve.github.io/website/master/modelserving/inference_graph/image_pipeline/).
+Learn more [here](https://github.com/kserve/kserve/tree/release-0.9/docs/samples/storage/storageSpec).
@@ -122,10 +123,10 @@ spec:
## Other New Features:
-- Support serving MLFlow model via MLServer serving runtime.
-- Support unified autoscaling target and metric fields for InferenceService components with both Serverless and RawDeployment mode.
-- Support InferenceService ingress class and url domain template configuration for RawDeployment mode.
-- ModelMesh now has a default OpenVINO Model Server ServingRuntime.
+- Support [serving MLFlow model format](https://kserve.github.io/website/0.9/modelserving/v1beta1/mlflow/v2/) via MLServer serving runtime.
+- Support [unified autoscaling target and metric fields](https://kserve.github.io/website/0.9/modelserving/autoscaling/autoscaling/) for InferenceService components with both Serverless and RawDeployment mode (see the sketch below).
+- Support [InferenceService ingress class and url domain template configuration](https://kserve.github.io/website/0.9/admin/kubernetes_deployment/) for RawDeployment mode.
+- ModelMesh now has a default [OpenVINO Model Server](https://github.com/openvinotoolkit/model_server) ServingRuntime.
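As a hedged illustration of the unified autoscaling fields in the list above: a minimal sketch assuming a simple sklearn predictor, where the service name, target value, and storageUri are illustrative.

```yaml
# A minimal sketch of the unified autoscaling fields, assuming a basic
# sklearn predictor (name, target value, and storageUri are illustrative).
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-autoscale
spec:
  predictor:
    scaleTarget: 2            # target value for the chosen autoscaling metric
    scaleMetric: concurrency  # metric to scale on, e.g. concurrency
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
```

The same two fields apply in both Serverless and RawDeployment mode, which is the unification the bullet refers to.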
## What’s Changed?
@@ -136,14 +137,13 @@ spec:
- Update MLServer serving runtime to 1.0.0
-
## Join the community
- Visit our [Website](https://kserve.github.io/website/) or [GitHub](https://github.com/kserve)
- Join the Slack ([#kserve](https://kubeflow.slack.com/join/shared_invite/zt-n73pfj05-l206djXlXk5qdQKs4o1Zkg#/))
- Attend our community meeting by subscribing to the [KServe calendar](https://wiki.lfaidata.foundation/display/kserve/calendars).
- View our [community github repository](https://github.com/kserve/community) to learn how to make contributions. We are excited to work with you to make KServe better and promote its adoption!

-Thank you for using or checking out KServe!
+Thank you for contributing or checking out KServe!
– The KServe Working Group
