
Commit 5509dc0

Authored by Rajakavitha1 (Rakavitha Kodhandapani) and lc525

fix(docs) Updated the name from v2 to OIP (SeldonIO#6030)

* updated the name from v2 to OIP
* Update doc/source/analytics/explainers.md
  Co-authored-by: Lucian Carata <[email protected]>
* Update doc/source/examples/notebooks.rst
  Co-authored-by: Lucian Carata <[email protected]>
* Update doc/source/examples/notebooks.rst
  Co-authored-by: Lucian Carata <[email protected]>

---------
Co-authored-by: Rakavitha Kodhandapani <[email protected]>
Co-authored-by: Lucian Carata <[email protected]>

1 parent f993cbb · commit 5509dc0

File tree: 9 files changed, +25 −27 lines changed

doc/source/analytics/explainers.md
Lines changed: 4 additions & 4 deletions

@@ -5,7 +5,7 @@

 Seldon provides model explanations using its [Alibi](https://github.com/SeldonIO/alibi) library.

-We support explainers saved using python 3.7 in v1 explainer server. However, for v2 protocol (using MLServer) this is not a requirement anymore.
+The v1 explainer server supports explainers saved with Python 3.7. However, for the Open Inference Protocol (or V2 protocol) using MLServer, this requirement is no longer necessary.

 | Package | Version |
 | ------ | ----- |

@@ -36,9 +36,9 @@ For Alibi explainers that need to be trained you should

 The runtime environment in our [Alibi Explain Server](https://github.com/SeldonIO/seldon-core/tree/master/components/alibi-explain-server) is locked using [Poetry](https://python-poetry.org/). See our e2e example [here](../examples/iris_explainer_poetry.html) on how to use that definition to train your explainers.

-### V2 protocol for explainer using [MLServer](https://github.com/SeldonIO/MLServer) (incubating)
+### Open Inference Protocol for explainer using [MLServer](https://github.com/SeldonIO/MLServer)

-The support for v2 protocol is now handled with MLServer moving forward. This is experimental
+The support for Open Inference Protocol is now handled with MLServer moving forward. This is experimental
 and only works for black-box explainers.

 For an e2e example, please check AnchorTabular notebook [here](../examples/iris_anchor_tabular_explainer_v2.html).

@@ -82,7 +82,7 @@ If you were port forwarding to Ambassador or istio on localhost:8003 then the AP

 http://localhost:8003/seldon/seldon/income-explainer/default/api/v1.0/explain
 ```

-The explain method is also supported for tensorflow and v2 protocols. The full list of endpoint URIs is:
+The explain method is also supported for tensorflow and Open Inference protocols. The full list of endpoint URIs is:

 | Protocol | URI |
 | ------ | ----- |
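
A quick way to sanity-check the endpoint in the hunk above, once the income explainer is running, is a plain REST call. The following is a minimal sketch, assuming a port-forward to Ambassador/istio on localhost:8003, the Seldon protocol, and a 12-feature income classifier payload; the ndarray values are placeholders, not part of this commit:

```
# Minimal sketch of a call to the explain endpoint shown above.
# Assumptions: localhost:8003 port-forward and a 12-feature income
# classifier; replace the ndarray with your model's actual input.
curl -s -X POST \
  http://localhost:8003/seldon/seldon/income-explainer/default/api/v1.0/explain \
  -H "Content-Type: application/json" \
  -d '{"data": {"ndarray": [[53, 4, 0, 2, 8, 4, 2, 0, 0, 0, 60, 9]]}}'
```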

doc/source/examples/notebooks.rst
Lines changed: 3 additions & 3 deletions

@@ -22,7 +22,7 @@ Prepackaged Inference Server Examples

    Deploy a Scikit-learn Model Binary <../servers/sklearn.md>
    Deploy a Tensorflow Exported Model <../servers/tensorflow.md>
    MLflow Pre-packaged Model Server A/B Test <mlflow_server_ab_test_ambassador>
-   MLflow v2 Protocol End to End Workflow (Incubating) <mlflow_v2_protocol_end_to_end>
+   MLflow Open Inference Protocol End to End Workflow <mlflow_v2_protocol_end_to_end>
    Deploy a XGBoost Model Binary <../servers/xgboost.md>
    Deploy Pre-packaged Model Server with Cluster's MinIO <minio-sklearn>
    Custom Pre-packaged LightGBM Server <custom_server>

@@ -90,7 +90,7 @@ Advanced Machine Learning Monitoring

    Real Time Monitoring of Statistical Metrics <feedback_reward_custom_metrics>
    Model Explainer Example <iris_explainer_poetry>
-   Model Explainer V2 protocol Example (Incubating) <iris_anchor_tabular_explainer_v2>
+   Model Explainer Open Inference Protocol Example <iris_anchor_tabular_explainer_v2>
    Outlier Detection on CIFAR10 <outlier_cifar10>
    Training Outlier Detector for CIFAR10 with Poetry <cifar10_od_poetry>

@@ -155,7 +155,7 @@ Complex Graph Examples

    :titlesonly:

    Chainer MNIST <chainer_mnist>
-   Custom pre-processors with the V2 Protocol <transformers-v2-protocol>
+   Custom pre-processors with the Open Inference Protocol <transformers-v2-protocol>
    graph-examples <graph-examples>

 Ingress

doc/source/graph/protocols.md
Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@ Seldon Core supports the following data planes:

 * [REST and gRPC Seldon protocol](#rest-and-grpc-seldon-protocol)
 * [REST and gRPC Tensorflow Serving Protocol](#rest-and-grpc-tensorflow-protocol)
-* [REST and gRPC V2 Protocol](#v2-protocol)
+* [REST and gRPC Open Inference Protocol](#v2-protocol)

 ## REST and gRPC Seldon Protocol

@@ -40,7 +40,7 @@ General considerations:

 * The name of the model in the `graph` section of the SeldonDeployment spec must match the name of the model loaded onto the Tensorflow Server.


-## V2 Protocol
+## Open Inference Protocol (or V2 protocol)

 Seldon has collaborated with the [NVIDIA Triton Server
 Project](https://github.com/triton-inference-server/server) and the [KServe

doc/source/graph/svcorch.md
Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ At present, we support the following protocols:

 | --- | --- | --- | --- |
 | Seldon | `seldon` | [OpenAPI spec for Seldon](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/openapi.html) |
 | Tensorflow | `tensorflow` | [REST API](https://www.tensorflow.org/tfx/serving/api_rest) and [gRPC API](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/prediction_service.proto) reference |
-| V2 | `v2` | [V2 Protocol Reference](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/v2-protocol.html) |
+| V2 | `v2` | [Open Inference Protocol Reference](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/v2-protocol.html) |

 These protocols are supported by some of our pre-packaged servers out of the
 box.
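
To make the `v2` row in the table above concrete, here is a hedged sketch of an Open Inference Protocol REST request through the Seldon ingress; the deployment name `mymodel`, the `seldon` namespace, and the 4-feature FP32 tensor are assumptions for illustration only:

```
# Hypothetical OIP inference request; deployment/model name, namespace,
# and the tensor shape are placeholders, not values from this commit.
curl -s -X POST \
  http://localhost:8003/seldon/seldon/mymodel/v2/models/mymodel/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name": "input-0", "shape": [1, 4], "datatype": "FP32", "data": [[5.1, 3.5, 1.4, 0.2]]}]}'
```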

doc/source/reference/release-1.6.0.md
Lines changed: 2 additions & 2 deletions

@@ -80,8 +80,8 @@ This will also help remove any ambiguity around what component we refer to when

 * Seldon Operator now runs as non-root by default (with Security context override available)
 * Resolved PyYAML CVE from Python base image
-* Added support for V2 Protocol in outlier and drift detectors
-* Handling V2 Protocol in request logger
+* Added support for Open Inference Protocol (or V2 protocol) in outlier and drift detectors
+* Handling Open Inference Protocol in request logger
doc/source/reference/upgrading.md
Lines changed: 1 addition & 1 deletion

@@ -95,7 +95,7 @@ Only the v1 versions of the CRD will be supported moving forward. The v1beta1 ve

 We have updated the health checks done by Seldon for the model nodes in your inference graph. If `executor.fullHealthChecks` is set to true then:
 * For Seldon protocol each node will be probed with `/api/v1.0/health/status`.
-* For the v2 protocol each node will be probed with `/v2/health/ready`.
+* For the Open Inference Protocol (or V2 protocol) each node will be probed with `/v2/health/ready`.
 * For tensorflow just TCP checks will be run on the http endpoint.

 By default we have set `executor.fullHealthChecks` to false for 1.14 release as users would need to rebuild their custom python models if they have not implemented the `health_status` method. In future we will default to `true`.
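
For a manual spot check of the same readiness endpoint the executor probes, a sketch along these lines works; the pod name and the HTTP port 9000 are assumptions, so match them to your own deployment:

```
# Hypothetical manual readiness probe mirroring the executor's check;
# <model-pod> and port 9000 are placeholders.
kubectl port-forward pod/<model-pod> 9000:9000 &
curl -s http://localhost:9000/v2/health/ready
```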

doc/source/servers/mlflow.md
Lines changed: 4 additions & 5 deletions

@@ -85,10 +85,9 @@ notebook](../examples/server_examples.html#Serve-MLflow-Elasticnet-Wines-Model)
 or check our [talk at the Spark + AI Summit
 2019](https://www.youtube.com/watch?v=D6eSfd9w9eA).

-## V2 protocol
+## Open Inference Protocol (or V2 protocol)

-The MLFlow server can also be used to expose an API compatible with the [V2
-Protocol](../graph/protocols.md#v2-protocol).
+The MLFlow server can also be used to expose an API compatible with the [Open Inference Protocol](../graph/protocols.md#v2-protocol).
 Note that, under the hood, it will use the [Seldon
 MLServer](https://github.com/SeldonIO/MLServer) runtime.

@@ -136,7 +135,7 @@ $ gsutil cp -r ../model gs://seldon-models/test/elasticnet_wine_<uuid>
 ```

 - deploy the model to seldon-core
-  In order to enable support for the V2 protocol, it's enough to
+  In order to enable support for the Open Inference Protocol, it's enough to
   specify the `protocol` of the `SeldonDeployment` to use `v2`.
   For example,

@@ -146,7 +145,7 @@ kind: SeldonDeployment
 metadata:
   name: mlflow
 spec:
-  protocol: v2 # Activate the v2 protocol
+  protocol: v2 # Activate the Open Inference Protocol
   name: wines
   predictors:
   - graph:
doc/source/servers/sklearn.md
Lines changed: 4 additions & 4 deletions

@@ -82,13 +82,13 @@ Acceptable values for the `method` parameter are `predict`, `predict_proba`,
 `decision_function`.


-## V2 protocol
+## Open Inference Protocol (or V2 protocol)

-The SKLearn server can also be used to expose an API compatible with the [V2 Protocol](../graph/protocols.md#v2-protocol).
+The SKLearn server can also be used to expose an API compatible with the [Open Inference Protocol](../graph/protocols.md#v2-protocol).
 Note that, under the hood, it will use the [Seldon
 MLServer](https://github.com/SeldonIO/MLServer) runtime.

-In order to enable support for the V2 protocol, it's enough to
+In order to enable support for the Open Inference Protocol it's enough to
 specify the `protocol` of the `SeldonDeployment` to use `v2`.
 For example,

@@ -99,7 +99,7 @@ metadata:
   name: sklearn
 spec:
   name: iris-predict
-  protocol: v2 # Activate the V2 protocol
+  protocol: v2 # Activate the Open Inference Protocol
   predictors:
   - graph:
       children: []
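
For completeness, the truncated manifest in the hunk above might be applied end to end as in the sketch below; the `modelUri` bucket path, the predictor name, and the replica count are assumptions, so substitute your own trained model:

```
# Hedged end-to-end sketch of the SKLearn + Open Inference Protocol
# deployment fragment above; modelUri is a placeholder bucket path.
kubectl apply -f - <<EOF
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn
spec:
  name: iris-predict
  protocol: v2 # Activate the Open Inference Protocol
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://my-bucket/iris-model  # assumption: your model path
      name: classifier
    name: default
    replicas: 1
EOF
```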

doc/source/servers/xgboost.md
Lines changed: 4 additions & 5 deletions

@@ -46,14 +46,13 @@ spec:
 You can try out a [worked notebook](../examples/server_examples.html) with a
 similar example.

-## V2 protocol
+## Open Inference Protocol (or V2 protocol)

-The XGBoost server can also be used to expose an API compatible with the [V2
-protocol](../graph/protocols.md#v2-protocol).
+The XGBoost server can also be used to expose an API compatible with the [Open Inference Protocol](../graph/protocols.md#v2-protocol).
 Note that, under the hood, it will use the [Seldon
 MLServer](https://github.com/SeldonIO/MLServer) runtime.

-In order to enable support for the V2 protocol, it's enough to
+In order to enable support for the Open Inference Protocol, it's enough to
 specify the `protocol` of the `SeldonDeployment` to use `v2`.
 For example,

@@ -64,7 +63,7 @@ metadata:
   name: xgboost
 spec:
   name: iris
-  protocol: v2 # Activate the V2 protocol
+  protocol: v2 # Activate the Open Inference Protocol
   predictors:
   - graph:
       children: []
