The Joint UK Land Environment Simulator (JULES) is a land surface model that has been developed over the last 20 years by a community of UK researchers, coordinated by the Met Office and the UK Centre for Ecology & Hydrology.
Existing documentation and tutorials for JULES tend to assume that the user intends to run the model on one of a small number of supported HPC systems - usually JASMIN (NERC) or Cray (Met Office) - using a particular suite of configuration and workflow management tools (Rose and Cylc).
This repository contains tools that simplify the process of setting up and running JULES on a standard personal computer running a Unix-based OS or on cloud-based services such as DataLabs, without extraneous tools and without making assumptions about the computing environment.
The following approaches are supported, or planned to be supported:
| Method | sudo required during setup | sudo required to run | Status |
| --- | --- | --- | --- |
| Portable installation using Nix & Devbox | No (but more tricky without) | No | Done |
| Installation using other package managers | Yes | No | Done |
| Docker container | Yes | Yes | Done |
| udocker-compatible container | Yes | No | Done |
| Singularity/Apptainer container | Yes | No | Planned |
> [!IMPORTANT]
> JULES is sadly not open source (see the license). You will need to request access to the JULES source code by filling out this form. You should then be provided with a Met Office Science Repository Service (MOSRS) username and password.
Clone the repository and navigate to the repository root directory:

```sh
git clone https://github.com/NERC-CEH/portable-jules.git
cd portable-jules
```
Next, create a file called `.env` in the root of the repository containing the following lines:

```sh
# file .env
MOSRS_USERNAME="<your MOSRS username>"
MOSRS_PASSWORD="<your MOSRS password>"
```

Replace `<your MOSRS username>` and `<your MOSRS password>` with, you guessed it, your MOSRS username and password.
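Since this file holds credentials, you may also wish to restrict its permissions (a standard precaution, not something the tooling requires):

```sh
chmod 600 .env  # owner read/write only
```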
The wonderful thing about Nix (and hence Devbox) is that package management is isolated from the host system (this is called hermeticity), and therefore agnostic towards your choice of Unix/Linux distribution.
The following steps should run on any reasonable Unix-based system with root privileges. If you do not have root privileges, go to this subsection.
The following steps assume you are executing commands in a bash shell with `curl` (and `git`) already installed. Please execute each of them individually rather than copy-pasting the whole thing.
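You can quickly confirm the prerequisites are present (this should print a path for each tool):

```sh
command -v curl git
```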
```sh
# Install Devbox and Nix
curl -fsSL https://get.jetify.com/devbox | bash

# Download packages
devbox install

# Test that everything has worked
devbox run hello

# Download and build JULES
# NOTE: this step requires MOSRS credentials
devbox run --env-file .env setup

# Confirm that jules.exe exists in $PATH
# (should return /path/to/portable-jules/_build/build/bin/jules.exe)
devbox run which jules.exe

# Run the Loobos example
devbox run loobos
```
Note that all `devbox` commands should be run in the repository root.
The simplest way to run a JULES simulation using `portable-jules` is to run the following in any subdirectory of the `portable-jules` repository root:

```sh
devbox run jules path/to/exec_dir
```

Under the hood, this will `cd` to `exec_dir` and run `jules.exe > stdout.log 2> stderr.log`. (See `jules.sh` for further details.)
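For orientation, the essential behaviour amounts to something like the following sketch (simplified; `jules.sh` itself handles options and error cases):

```sh
# Minimal sketch of what `devbox run jules exec_dir` does
exec_dir="$1"
cd "$exec_dir" || exit 1
jules.exe > stdout.log 2> stderr.log
```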
The above will only work if `exec_dir` contains the namelist files (`*.nml`). If the namelist files are instead in a subdirectory of `exec_dir` (which is recommended), then this can be specified using the `-n` option:

```sh
devbox run jules -n namelists path/to/exec_dir
```

In this example, the namelist files are located in `exec_dir/namelists/`.
> [!TIP]
> All relative paths specified in the namelist files are relative to `exec_dir`, not the location of the namelist file itself.

Attempting to pass an absolute path with `-n` will trigger an error. This is by design; locating the namelist files outside of `exec_dir` is strongly discouraged.
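To make the layout concrete, a typical execution directory might look like this (the file names are illustrative, not prescribed by the tool):

```
exec_dir/
├── namelists/       # passed via -n namelists
│   ├── drive.nml
│   └── output.nml
└── data/            # referenced from the namelists as e.g. ./data/met.dat
```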
It is possible to fire off several JULES runs at once using GNU Parallel by providing multiple values for `exec_dir`:

```sh
# Specify individual exec directories...
devbox run jules -n namelists path/to/exec_1 path/to/exec_2 ...

# ...or use a wildcard
devbox run jules -n namelists path/to/dir_of_exec_dirs/*
```
This is useful for running large ensembles of 1+1-dimensional 'point' simulations, including gridded simulations that are completely decoupled in the spatial dimensions.
Note that `-n namelists` is shared by all parallel runs, so each execution directory `exec_dir_i` must have its own set of namelist files located in `exec_dir_i/namelists/`.
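As an example of setting up such an ensemble (a sketch; the `template/` directory and member names are hypothetical):

```sh
# Create three ensemble members, each with its own copy of the namelists
for i in 1 2 3; do
    mkdir -p ensemble/member_$i/namelists
    cp template/namelists/*.nml ensemble/member_$i/namelists/
done

# Launch all members in parallel
devbox run jules -n namelists ensemble/*
```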
To execute the `devbox run jules` command from a different directory, one can specify the devbox config explicitly, as in

```sh
devbox run -c path/to/portable-jules/devbox.json jules -n namelists $(pwd)
```

Note that the command (`jules -n namelists $(pwd)`) will actually be run from the directory where `devbox.json` lives. This is a feature/limitation of Devbox (see e.g. this issue). Hence, you will need to provide the path to `exec_dir` even if it is your current working directory.
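If you do this often, a shell alias saves some typing (illustrative; adjust the path to wherever you cloned the repository):

```sh
# e.g. in ~/.bashrc
alias djules='devbox run -c /path/to/portable-jules/devbox.json jules'

# then, from any directory:
djules -n namelists "$(pwd)"
```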
This amounts to installing Nix and Devbox in user space instead of the default locations. The subsequent `devbox` commands should work just the same.
By default Nix stores packages in `/nix`, which typically requires root privileges to write to. However, in principle one can choose a different location in user space. The most up-to-date instructions for doing this can be found on the NixOS wiki. Currently, the easiest option seems to be to use the (sadly unmaintained) nix-user-chroot installer. This can be installed via `cargo`, which can itself be easily installed by following these instructions.
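The basic flow looks roughly like this (a sketch based on the nix-user-chroot README; check it for current details):

```sh
# Install nix-user-chroot via cargo
cargo install nix-user-chroot

# Create a user-space Nix store and install Nix inside the chroot
mkdir -m 0755 ~/.nix
nix-user-chroot ~/.nix bash -c 'curl -L https://nixos.org/nix/install | sh'
```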
The `nix-user-chroot` instructions tell you to run `unshare --user --pid echo YES` to check whether your system has user namespaces enabled, which is required for this approach to work. However, I recommend instead running

```sh
unshare --user --pid --mount echo "YES"
```

which also checks for the ability to create bind mounts.
I mention this because it is not currently possible to create bind mounts on DataLabs (of interest to UKCEH folk), which means this approach does not work there.
Installing Devbox without root privileges is also unfortunately a bit of a hassle (see this issue).
First, download the devbox install script using

```sh
curl --silent --show-error --fail --location --output ./devbox_install "https://get.jetify.com/devbox"
```
Next, edit it to do the following:

- Change `/usr/local/bin` to a location in user space, e.g. `$HOME/.local/bin`
- Remove the `(command -v sudo || true)` part from the beginning of the relevant line.
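If you prefer to script these edits, something like the following may work (hedged: the install script's contents can change over time, so inspect the result before running it):

```sh
# Point the install location at user space
sed -i "s|/usr/local/bin|$HOME/.local/bin|g" ./devbox_install

# Drop the sudo lookup (use # as the sed delimiter since the pattern contains |)
sed -i 's#(command -v sudo || true)##' ./devbox_install
```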
Finally, run the script:

```sh
chmod u+x ./devbox_install
./devbox_install
```
Of course, Devbox/Nix is just one (very convenient) option for installing the necessary libraries. You are free to use your preferred package manager to do so.
You may want to refer to `devbox.json` to see what packages are required, and then look up their names in the other package manager. For example, on Ubuntu I would do the following:
```sh
sudo apt update
sudo apt install --yes \
    coreutils \
    curl \
    diffutils \
    git \
    gfortran \
    glibc-source \
    make \
    libnetcdf-dev \
    libnetcdff-dev \
    parallel \
    perl \
    subversion
```
You will need to set some environment variables before running the setup script. For a 'basic' installation these will be:

- `FCM_ROOT`: location to download FCM
- `JULES_ROOT`: location to download JULES
- `JULES_BUILD_DIR`: location for the JULES build
- `JULES_NETCDF`: flag for whether to use NetCDF or not (this should be set to `netcdf`)
- `JULES_NETCDF_PATH`: path to a location containing the NetCDF include directory (the file `netcdf.mod` should be found in `$JULES_NETCDF_PATH/include`)
See the JULES documentation for a full list of environment variables.
It is convenient to store these in a file, as this example shows.
```sh
# .env
FCM_ROOT=/path/to/portable-jules/_download/fcm
JULES_ROOT=/path/to/portable-jules/_download/jules
JULES_BUILD_DIR=/path/to/portable-jules/_build
JULES_NETCDF=netcdf
JULES_NETCDF_PATH=/usr  # works for Ubuntu after apt installing netcdf
JULES_REVISION=30414  # vn7.9
# NOTE: omit MOSRS_PASSWORD to avoid exporting it
```
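Before building, it is worth sanity-checking the NetCDF path (a quick check, using the paths from the example above):

```sh
# netcdf.mod must be visible under $JULES_NETCDF_PATH/include
ls /usr/include/netcdf.mod
```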
Finally, you should be able to run the setup and run scripts in the usual way:

```sh
# Make the scripts executable
chmod +x setup.sh jules.sh

# Export the environment variables
set -a  # causes `source .env` to export all variables
source .env
set +a

# Download and build
./setup.sh -u <mosrs_username> -p '<mosrs_password>'

# Run jules
./jules.sh -d /path/to/run_dir /path/to/namelists_dir
```
You might consider passing your MOSRS credentials as command-line arguments to `setup.sh`, as above, instead of keeping them in the `.env` file, to avoid making them globally available as environment variables. Note the use of single quotation marks around the password, which ensures it is treated as a literal string, so that any special characters don't mess things up.
> [!IMPORTANT]
> The JULES license (Sec. 4.1.2) prohibits distribution of the JULES source code. This means it is not permitted to share container images, e.g. by uploading them to Docker Hub. Unfortunately, if you want to run dockerised JULES, you have to build the container yourself, using your own MOSRS credentials.
As part of the process of building a container image, the JULES source code needs to be downloaded, which requires MOSRS credentials. We cannot simply copy `.env` into the container, since anyone could then spin up the container and inspect it. We need to expose the contents of `.env` during the build in a secure way.

The solution is to use a secret mount. In the following example, we mount `.env` during the build:

```sh
docker build --secret id=.env -t jules:vn7.9 .
```

The contents of `.env` are then accessible within the Dockerfile using `RUN --mount=type=secret,id=.env,target=/devbox/.env` (the `WORKDIR` is `/devbox` at this point).
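Put together, the relevant Dockerfile step looks something like this (a hedged sketch; the repository's actual Dockerfile is authoritative):

```dockerfile
WORKDIR /devbox

# The secret is mounted only for the duration of this RUN instruction,
# so the credentials never end up in a filesystem layer of the image.
RUN --mount=type=secret,id=.env,target=/devbox/.env \
    devbox run --env-file .env setup
```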
JULES still needs to load the namelists and inputs, which were not included in the container itself. To run the container, you need to link the run directory and the namelists directory into the container filesystem. You can mount the run directory (assuming the namelists directory is below it) to an unused location in the container filesystem (`/devbox/run` in this example):

```sh
cd examples/loobos
docker run -v "$(pwd)":/devbox/run jules:vn7.9 -d run run/config
```

It will speed things up if the directory being mounted is not too large, i.e. if the run directory (`examples/loobos` above) contains only the necessary inputs and namelists, and not a bunch of other stuff.
udocker advertises itself as (more or less) a drop-in replacement for Docker that does not require root privileges. In practice it seems to have quite a few quirks.
- Build a container image as usual, but using a different Dockerfile (`Dockerfile.u`):

  ```sh
  docker build --secret id=.env -f Dockerfile.u -t jules .
  ```
  > [!TIP]
  > Funnily enough, this container will not run with Docker, since the workdir is `/` instead of `/root`, as it is when run with udocker. I do not know why this is. To run it with Docker, use `--workdir=/root`.
- Save the image to a `tar.gz`:

  ```sh
  docker save jules | gzip > jules.tar.gz
  ```
- Load it into udocker, creating an image called `jules`:

  ```sh
  udocker load -i jules.tar.gz jules
  ```
  NOTE: this can 'silently' fail. The output should look something like this:

  ```
  ...
  Info: adding layer: sha256:95a2005e07300a41ffbbb0aa02d8974f8f0c0331285db444288cc15da96d8613
  ['jules:latest']
  ```

  and not this:

  ```
  ...
  Info: adding layer: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
  []
  ```
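  A quick way to confirm the image actually made it in (hedged; subcommand per the udocker docs):

  ```sh
  udocker images  # should list jules:latest
  ```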
- Create a container (you can run the image directly, but that creates a new container each time, which wastes time and resources):

  ```sh
  udocker create --name=jules jules
  ```
- Run, while mounting the directories in a very specific way:

  ```sh
  udocker run -v=$(pwd)/examples/loobos:/root/run jules -d /root/run /root/run/config
  ```

  Note that the working directory at run time will be `/root/`. You should bind the run directory to a new location in the container (e.g. `/root/run`) and then pass the absolute paths in the container as arguments, `-d RUN_DIR NAMELISTS_DIR`.
It's pretty messy and very brittle, but just getting this to work at all took a LONG time.
To do.
By default, `./setup.sh` or `devbox run setup` will download the most recent revision of JULES (i.e. `HEAD`). However, one can specify a revision by passing an optional argument with the `-r` flag, as in `./setup.sh -r <rev>` or `devbox run setup -r <rev>`, or by setting the environment variable `JULES_REVISION`.

The following (copied from here) maps named versions of JULES to revision identifiers. To download version 7.8, for example, one would run `./setup.sh -r 29791` or `devbox run setup -r 29791`.
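Equivalently, you can pin the revision via the environment variable, consistent with the `.env` example earlier:

```sh
# Pin JULES to vn7.8 by setting the variable in .env ...
echo 'JULES_REVISION=29791' >> .env

# ...then build as usual
devbox run --env-file .env setup
```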
```
vn3.1 = 11
vn3.2 = 27
vn3.3 = 52
vn3.4 = 65
vn3.4.1 = 67
vn4.0 = 101
vn4.1 = 131
vn4.2 = 793
vn4.3 = 1511
vn4.3.1 = 1709
vn4.3.2 = 1978
vn4.4 = 2461
vn4.5 = 3197
vn4.6 = 4285
vn4.7 = 5320
vn4.8 = 6925
vn4.9 = 8484
vn5.0 = 9522
vn5.1 = 10836
vn5.2 = 12251
vn5.3 = 13249
vn5.4 = 14197
vn5.5 = 15100
vn5.6 = 15927
vn5.7 = 16960
vn5.8 = 17881
vn5.9 = 18812
vn6.0 = 19395
vn6.1 = 20512
vn6.2 = 21512
vn6.3 = 22411
vn7.0 = 23518
vn7.1 = 24383
vn7.2 = 25256
vn7.3 = 25896
vn7.4 = 26897
vn7.5 = 28091
vn7.6 = 28692
vn7.7 = 29181
vn7.8 = 29791
vn7.8.1 = 29986
vn7.9 = 30414
```