Merged
2 changes: 1 addition & 1 deletion CREATE_WORKLOAD_GUIDE.md
@@ -53,7 +53,7 @@ By default, workloads created will come with the following operations run in the

To invoke the newly created workload, run the following:
```
-$ opensearch-benchmark run-test \
+$ opensearch-benchmark run \
--pipeline="benchmark-only" \
--workload-path="<PATH OUTPUTTED IN THE OUTPUT OF THE CREATE-WORKLOAD COMMAND>" \
--target-host="<CLUSTER ENDPOINT>" \
2 changes: 1 addition & 1 deletion DEVELOPER_GUIDE.md
@@ -152,7 +152,7 @@ Now, you have a local cluster running! You can connect to this and run the workl

Here's a sample run of the geonames benchmark, which can be found in the [workloads](https://github.com/opensearch-project/opensearch-benchmark-workloads) repo.
```
-opensearch-benchmark run-test --pipeline=benchmark-only --workload=geonames --target-host=127.0.0.1:9200 --test-mode --workload-params '{"number_of_shards":"1","number_of_replicas":"0"}'
+opensearch-benchmark run --pipeline=benchmark-only --workload=geonames --target-host=127.0.0.1:9200 --test-mode --workload-params '{"number_of_shards":"1","number_of_replicas":"0"}'
```
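The `--workload-params` flag in the command above takes a JSON object whose keys override workload parameters. A minimal sketch of how such a value parses (illustrative only; it mirrors the parameters shown in the command, not OSB's internal plumbing):

```python
import json

# The JSON string passed to --workload-params in the command above.
raw = '{"number_of_shards":"1","number_of_replicas":"0"}'

# OSB turns this into a dict of parameter overrides for the workload templates.
params = json.loads(raw)
print(params["number_of_shards"], params["number_of_replicas"])
```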

And we're done! You should be seeing the performance metrics soon enough!
10 changes: 5 additions & 5 deletions PYTHON_SUPPORT_GUIDE.md
@@ -27,17 +27,17 @@ supported_python_versions = [(3, 8), (3, 9), (3, 10), (3, 11), (3, 12)]

**Basic OpenSearch Benchmark command with distribution version and test mode**
```
-opensearch-benchmark run-test --distribution-version=1.0.0 --workload=geonames --test-mode
+opensearch-benchmark run --distribution-version=1.0.0 --workload=geonames --test-mode
```

**OpenSearch Benchmark command running test on target-host in test mode**
```
-opensearch-benchmark run-test --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'" --test-mode"
+opensearch-benchmark run --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'" --test-mode
```

**OpenSearch-Benchmark command running test on target-host without test mode**
```
-opensearch-benchmark run-test --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'"
+opensearch-benchmark run --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'"
```

To ensure that users are using the correct Python versions, install the repository with `python3 -m pip install -e .` and run `which opensearch-benchmark` to get the path. Prepend this path to each of the three commands above and re-run them on the command line.
@@ -46,12 +46,12 @@ Keep in mind the file path outputted differs for each operating system and might

- For example: When running `which opensearch-benchmark` on an Ubuntu environment, the command line outputs `/home/ubuntu/.pyenv/shims/opensearch-benchmark`. On closer inspection, the path points to a shell script. Thus, to invoke OpenSearch Benchmark, prefix the command with `bash` and the path output earlier:
```
-bash -x /home/ubuntu/.pyenv/shims/opensearch-benchmark run-test --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'"
+bash -x /home/ubuntu/.pyenv/shims/opensearch-benchmark run --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'"
```

- Another example: When running `which opensearch-benchmark` on an Amazon Linux 2 environment, the command line outputs `~/.local/bin/opensearch-benchmark`. On closer inspection, the path points to a Python script. Thus, to invoke OpenSearch Benchmark, prefix the command with `python3` and the path output earlier:
```
-python3 ~/.local/bin/opensearch-benchmark run-test --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'"
+python3 ~/.local/bin/opensearch-benchmark run --workload=geonames --pipeline=benchmark-only --target-host="<OPENSEARCH CLUSTER ENDPOINT>" --client-options="basic_auth_user:'<USERNAME>',basic_auth_password:'<PASSWORD>'"
```
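The `supported_python_versions` list shown in this guide's hunk header can back a simple interpreter check. A hedged sketch (the list comes from the guide above; the check itself is illustrative, not the repo's actual code):

```python
import sys

# (major, minor) pairs supported by OpenSearch Benchmark, per the guide above.
supported_python_versions = [(3, 8), (3, 9), (3, 10), (3, 11), (3, 12)]

def is_supported(version_info=sys.version_info):
    """Illustrative check: does the given interpreter version match a supported pair?"""
    return (version_info[0], version_info[1]) in supported_python_versions

print(is_supported())
```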

### Creating a Pull Request After Adding Changes and Testing Them Out
2 changes: 1 addition & 1 deletion it/__init__.py
@@ -101,7 +101,7 @@ def run_test(cfg, command_line):
This method should be used for benchmark invocations of the test_run command.
It sets up some defaults for how the integration tests expect to run test_runs.
"""
-return osbenchmark(cfg, f"run-test {command_line} --kill-running-processes --on-error='abort'")
+return osbenchmark(cfg, f"run {command_line} --kill-running-processes --on-error='abort'")


def shell_cmd(command_line):
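The `run_test` helper above builds a CLI string and shells out through an `osbenchmark` wrapper. A standalone sketch of that pattern (the `run_cli` helper here is hypothetical, not the repo's actual function):

```python
import shlex
import subprocess
import sys

def run_cli(command_line):
    """Run a CLI invocation built from a command string; return its exit code."""
    return subprocess.call(shlex.split(command_line))

# A zero exit code indicates success; the current interpreter stands in for the
# benchmark binary so the sketch stays self-contained.
status = run_cli(f"{sys.executable} --version")
print(status)
```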
4 changes: 2 additions & 2 deletions osbenchmark/benchmark.py
@@ -1093,9 +1093,9 @@ def prepare_test_runs_dict(args, cfg):
return test_runs_dict

def configure_test(arg_parser, args, cfg):
-# As the run-test command is doing more work than necessary at the moment, we duplicate several parameters
+# As the run command is doing more work than necessary at the moment, we duplicate several parameters
 # in this section that actually belong to dedicated subcommands (like install, start or stop). Over time
-# these duplicated parameters will vanish as we move towards dedicated subcommands and use "run-test" only
+# these duplicated parameters will vanish as we move towards dedicated subcommands and use "run" only
 # to run the actual benchmark (i.e. generating load).
print_test_run_id(args)
if args.effective_start_date:
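When a subcommand is renamed like `run-test` to `run`, argparse's `aliases` parameter can keep the old name working through a deprecation window, so existing scripts do not break immediately. A hedged sketch (the parser setup is illustrative, not OSB's actual argument wiring):

```python
import argparse

parser = argparse.ArgumentParser(prog="opensearch-benchmark")
subparsers = parser.add_subparsers(dest="subcommand")

# "run" is the new canonical name; "run-test" stays on as a deprecated alias.
run_parser = subparsers.add_parser("run", aliases=["run-test"], help="Run a benchmark")
run_parser.add_argument("--workload")

# argparse records the name actually typed, so the CLI can warn when the
# deprecated alias was used.
args = parser.parse_args(["run-test", "--workload", "geonames"])
print(args.subcommand, args.workload)
```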
2 changes: 1 addition & 1 deletion osbenchmark/workload_generator/workload_generator.py
@@ -95,4 +95,4 @@ def create_workload(cfg):
custom_workload_writer.render_templates(template_vars, custom_workload.queries)

console.println("")
-console.info(f"Workload {workload_name} has been created. Run it with: {PROGRAM_NAME} run-test --workload-path={custom_workload.workload_path}")
+console.info(f"Workload {workload_name} has been created. Run it with: {PROGRAM_NAME} run --workload-path={custom_workload.workload_path}")
2 changes: 1 addition & 1 deletion samples/ccr/start.sh
@@ -147,4 +147,4 @@ EOF


# Start OpenSearch Benchmark
-opensearch-benchmark run-test --configuration-name=metricstore --workload=geonames --target-hosts=./ccr-target-hosts.json --pipeline=benchmark-only --workload-params="number_of_replicas:1" --client-options=./ccr-client-options.json --kill-running-processes --telemetry="ccr-stats" --telemetry-params=./ccr-telemetry-param.json
+opensearch-benchmark run --configuration-name=metricstore --workload=geonames --target-hosts=./ccr-target-hosts.json --pipeline=benchmark-only --workload-params="number_of_replicas:1" --client-options=./ccr-client-options.json --kill-running-processes --telemetry="ccr-stats" --telemetry-params=./ccr-telemetry-param.json
2 changes: 1 addition & 1 deletion scripts/expand-data-corpus.py
@@ -27,7 +27,7 @@

$ expand-data-corpus.py --corpus-size 100 --output-file-suffix 100gb

-$ opensearch-benchmark run-test --workload http_logs \\
+$ opensearch-benchmark run --workload http_logs \\
--workload_params=generated_corpus:t ...

The script generates new documents by duplicating ones in the existing
Expand Down