docs/source/reference/api-server-endpoints.md (1 addition, 1 deletion)
@@ -69,7 +69,7 @@ Asynchronous jobs are managed using [Dask](https://docs.dask.org/en/stable/). By
 The following CLI flags are available to configure the asynchronous generate endpoint when using `nat serve`:
 * --scheduler_address: The address of an existing Dask scheduler to connect to. If not set, a local Dask cluster will be created.
 * --db_url: The [SQLAlchemy database](https://docs.sqlalchemy.org/en/20/core/engines.html#database-urls) URL to use for storing job history and metadata. If not set, a temporary SQLite database will be created.
-* --max_concurrent_jobs: The maximum number of asynchronous jobs to run concurrently. This controls the number of Dask workers created when a local Dask cluster is used. Default is 10. This is only used when `scheduler_address` is not set.
+* --max_concurrent_jobs: The maximum number of asynchronous jobs to run concurrently. Default is 10. This is only used when `scheduler_address` is not set.
 * --dask_workers: The type of Dask workers to use. Options are `threads` for Threaded Dask workers or `processes` for Process based Dask workers. Default is `processes`. This is only used when `scheduler_address` is not set.
 * --dask_log_level: The logging level for Dask. Default is `WARNING`.
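
For illustration, here is a minimal sketch of how the flags documented above might be combined when starting the server. The `--config_file` flag and the specific paths and values are assumptions for this example only; they do not appear in the diff above.

```bash
# Hypothetical invocation: serve a workflow config and let nat create a local
# Dask cluster (no --scheduler_address), capping it at 4 concurrent async jobs
# with process-based workers and a persistent SQLite job-history database.
nat serve --config_file configs/workflow.yml \
    --max_concurrent_jobs 4 \
    --dask_workers processes \
    --db_url sqlite:///jobs.db \
    --dask_log_level INFO

# Alternatively, connect to an existing Dask scheduler instead of creating a
# local cluster; --max_concurrent_jobs and --dask_workers are ignored here.
nat serve --config_file configs/workflow.yml \
    --scheduler_address tcp://127.0.0.1:8786
```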