Commit 609b830

Parent: 474954c

2 files changed: +14, -5 lines

docs/source/contents/examples/batch-examples-k8s.md

Lines changed: 5 additions & 0 deletions

````diff
@@ -1,6 +1,11 @@
 # Batch Inference Examples
 
 Requires `mlserver` to be installed.
+
+```{warning}
+Deprecated: The MLServer CLI `infer` feature is experimental and will be removed in future work.
+```
+
 ```bash
 pip install mlserver
 ```
````
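
Since the new warning flags the `infer` feature for removal, a quick post-install sanity check can confirm which MLServer version you have and whether the subcommand is still present. This sketch assumes the CLI's standard `--version` and `--help` flags:

```bash
# Sanity check after `pip install mlserver`: confirm the CLI is on PATH and
# that the (deprecated) infer subcommand still ships with this version.
mlserver --version
mlserver infer --help
```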

docs/source/contents/examples/batch-examples-local.md

Lines changed: 9 additions & 5 deletions

````diff
@@ -1,6 +1,10 @@
 # Local Batch Inference Example
 
-This example runs you through a series of batch inference requests made to both models and pipelines running on Seldon Core locally.
+This example runs you through a series of batch inference requests made to both models and pipelines running on Seldon Core locally.
+
+```{warning}
+Deprecated: The MLServer CLI `infer` feature is experimental and will be removed in future work.
+```
 
 ## Setup
 
````

````diff
@@ -47,7 +51,7 @@ seldon model load -f models/sklearn-iris-gs.yaml
 
 ### Deploy the Iris Pipeline
 
-Now that we've deployed our iris model, let's create a [pipeline](../pipelines/index) around the model.
+Now that we've deployed our iris model, let's create a [pipeline](../pipelines/index) around the model.
 
 ```bash
 cat pipelines/iris.yaml
````

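The pipeline manifest itself sits outside this hunk's context. As a rough sketch of what a one-step pipeline wrapping the iris model could look like, under the assumption that Seldon Core 2's `Pipeline` resource is in play (the `apiVersion`, resource names, and file path below are guesses, not the repo's actual `pipelines/iris.yaml`):

```bash
# Hypothetical stand-in for pipelines/iris.yaml: a one-step pipeline that
# routes requests through the iris model (apiVersion and names assumed).
cat > /tmp/iris-pipeline.yaml <<'EOF'
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: iris-pipeline
spec:
  steps:
    - name: iris
  output:
    steps:
      - iris
EOF

# Load it with the Seldon CLI, mirroring the `seldon model load` step above.
seldon pipeline load -f /tmp/iris-pipeline.yaml
```
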
````diff
@@ -173,7 +177,7 @@ seldon model infer iris '{"inputs": [{"name": "predict", "shape": [1, 4], "datat
 
 ```
 
-The prediction request body needs to be an [Open Inference Protocol](../apis/inference/v2.md) compatible payload and also match the expected inputs for the model you've deployed. In this case, the iris model expects data of shape `[1, 4]` and of type `FP32`.
+The prediction request body needs to be an [Open Inference Protocol](../apis/inference/v2.md) compatible payload and also match the expected inputs for the model you've deployed. In this case, the iris model expects data of shape `[1, 4]` and of type `FP32`.
 
 You'll notice that the prediction results for this request come back on `outputs[0].data`.
 
````

````diff
@@ -241,7 +245,7 @@ seldon model infer tfsimple1 '{"outputs":[{"name":"OUTPUT0"}], "inputs":[{"name"
 }
 ```
 
-You'll notice that the inputs for our tensorflow model look different from the ones we sent to the iris model. This time, we're sending two arrays of shape `[1,16]`. When sending an inference request, we can optionally choose which outputs we want back by including an `{"outputs":...}` object.
+You'll notice that the inputs for our tensorflow model look different from the ones we sent to the iris model. This time, we're sending two arrays of shape `[1,16]`. When sending an inference request, we can optionally choose which outputs we want back by including an `{"outputs":...}` object.
 
 ### Tensorflow Pipeline
 
````

````diff
@@ -344,6 +348,7 @@ To run a batch inference job we'll use the [MLServer CLI](https://mlserver.readt
 ```bash
 pip install mlserver
 ```
+
 ### Iris Model
 
 The inference job can be executed by running the following command:
````

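The "following command" referenced in the last context line falls outside the hunk. Based on the batch flags documented for the MLServer CLI, it plausibly takes this shape; the URL, file paths, and worker count here are illustrative, not taken from the doc:

```bash
# Hedged sketch of an MLServer batch inference job (the deprecated `infer`
# feature the new warnings refer to); host, paths, and workers are hypothetical.
mlserver infer \
  --url localhost:9000 \
  --model-name iris \
  --input-data-path batch-inputs/iris-input.txt \
  --output-data-path /tmp/iris-output.txt \
  --workers 5
```
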
````diff
@@ -632,4 +637,3 @@ And finally let's spin down our local instance of Seldon Core:
 ```bash
 cd ../ && make undeploy-local
 ```
-
````