This utility updates multiple datasets/layers available in the LINZ (as well as other providers') Data Service. It:
- Takes the layer ids from the user
- Creates a draft id for each layer id in the list
- Uses the generated draft id to trigger the import
- Uses the layer id and draft id to update the existing layer and publish the new dataset (sketched below)
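As a minimal sketch, the workflow above might look roughly like this against the Koordinates versions API. The endpoint paths, the `"id"` response field, and the `update_layer` helper are assumptions based on the public API docs, not the utility's actual code; check the current API documentation before relying on them.

```python
# Illustrative draft -> import -> publish flow (assumed Koordinates
# versions API; a real client would poll the import status before
# publishing rather than publishing immediately).
import os
import requests

DOMAIN = "data.linz.govt.nz"        # from config.yaml
API_KEY = os.environ["LDS_APIKEY"]  # the recommended storage method (see below)
HEADERS = {"Authorization": f"key {API_KEY}"}

def update_layer(layer_id: int) -> None:
    base = f"https://{DOMAIN}/services/api/v1/layers/{layer_id}/versions/"

    # 1. Create a draft version for the layer.
    r = requests.post(base, headers=HEADERS)
    r.raise_for_status()
    draft_id = r.json()["id"]

    # 2. Use the draft id to trigger a re-import of the layer's data.
    requests.post(f"{base}{draft_id}/import/", headers=HEADERS).raise_for_status()

    # 3. Publish the draft, replacing the current published version.
    requests.post(f"{base}{draft_id}/publish/", headers=HEADERS).raise_for_status()

for layer_id in [93639, 93648, 93649]:  # example ids from config.yaml
    update_layer(layer_id)
```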
A config.yaml file must be provided. This can be created by editing the provided config_template.yaml file:
```yaml
Connection:
  Api_key: key <ADMIN API KEY> # Not recommended. Should be stored as an environment variable
  Domain: <Data Service Domain> # e.g. data.linz.govt.nz
  lds_page_type: <layers> # either layers or tables
Datasets:
  Layers: <Layers to Process> # a list of layer or table ids
  # e.g. [93639, 93648, 93649]
Groups:
  group: <group name> # the group name to which the layers belong
```
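For reference, a minimal sketch of how such a config might be read in Python follows; PyYAML and the exact key handling are assumptions, and the script's actual parsing may differ.

```python
# Illustrative config loading (assumes PyYAML is installed and the
# key names follow the template above).
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

domain = config["Connection"]["Domain"]            # e.g. data.linz.govt.nz
page_type = config["Connection"]["lds_page_type"]  # "layers" or "tables"
layer_ids = config["Datasets"]["Layers"]           # e.g. [93639, 93648, 93649]
group = config["Groups"]["group"]
```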
API Key
The LINZ Data Service API key must be generated with the permissions required to update data in bulk. It is recommended that an API key be created specifically for this task.
The API key must have the following permissions enabled; you will need admin rights to enable all of the below:
- Query layer data
- Search and view tables and layers
- Create, edit and delete tables and layers
- View the data catalog and access catalog services (CS-W)
For LDS users, your API key can be managed here.
There are two options for storing your API key where the script can use it
for authentication: enter the key in config.yaml, or store it as an
environment variable. Storing the API key as an environment variable is the
safest and therefore the recommended approach. The key must be assigned to the
environment variable LDS_APIKEY (i.e. LDS_APIKEY=<lds_apikey>).
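A minimal sketch of that lookup order, assuming the script prefers the environment variable over any value in the config (illustrative only, not the utility's actual code):

```python
# Prefer the LDS_APIKEY environment variable; fall back to config.yaml
# only if a key is set there.
import os
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

api_key = os.environ.get("LDS_APIKEY") or config["Connection"].get("Api_key")
if not api_key:
    raise SystemExit("No API key: set LDS_APIKEY or add Api_key to config.yaml")
```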
Once the config.yaml file has been updated, simply run (if installed via the recommended setup.py method):

```bash
cd bulkdata_updater
python bulkdata_updater.py
```
This is an initial, minimum viable product release.
Please supply any feedback and bug reports to the project's GitHub Issues page.