A bare-bones supervisor application for starting and managing distributed Erlang nodes. It provides a centralized way to configure and supervise multiple Erlang nodes with different configurations.
- Node Supervision: Automatically start and supervise multiple Erlang nodes
- Flexible Configuration: Configure nodes with different root directories and boot scripts
- CPU Affinity: Support for CPU affinity on Linux systems
- Configuration Management: Support for both inline configuration and external config files
$ rebar3 compile
grisp_supervisor is designed to boot as-is and then consult an Erlang term file, /etc/grisp_supervisor.config, on the file system, in which a sequence of node configurations is defined.
This path is the default; it can be customized through the application env.
grisp_supervisor starts each node under its own root supervisor. This is achieved by using peer:start_link as the child start function.
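Conceptually, each entry in the node list becomes a child of the root supervisor whose start function is peer:start_link. A minimal sketch of such a child spec follows; the exact options grisp_supervisor passes to peer are assumptions here:

```erlang
%% Hypothetical child spec: peer:start_link/1 starts the node and links it
%% to the supervisor, so a crashed node is restarted according to the
%% supervisor's restart strategy.
#{id => worker1,
  start => {peer, start_link, [#{name => worker1}]},
  restart => permanent,
  type => worker}
```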
Here's a complete example configuration (i.e. /etc/grisp_supervisor.config):
% Simple node with default settings
worker1.
% Node with custom release directory
{worker2, #{
root_dir => "/opt/myapp/release",
boot => "start"
}}.
% Node with CPU affinity (Linux only)
{worker3, #{
root_dir => "/opt/myapp/release",
boot => "start",
cpu => "1",
config => "sys.config"
}}.
% Node with custom VM arguments
{worker4, #{
root_dir => "/opt/myapp/release",
boot => "start",
vm_args => [
"-kernel", "inet_dist_use_interface", "{127,0,0,1}"
]
}}.
Each node in the nodes list can be configured with the following options:
root_dir
- Type: string()
- Default: code:root_dir()
- Description: Root directory of the Erlang release to start.
{node1, #{
root_dir => "/opt/myapp/release"
}}
boot
- Type: string()
- Default: "no_dot_erlang"
- Description: Boot script name (without the .boot extension) to use when starting the node.
{node1, #{
boot => "start" % Uses start.boot
}}
config
- Type: string()
- Default: none
- Description: Configuration file to use for the node. Can be "sys.config" for the default or a full path.
{node1, #{
config => "sys.config" % Uses default sys.config
}}
{node2, #{
config => "/path/to/custom.config" % Uses custom config file
}}
release_vsn
- Type: string()
- Default: Read from start_erl.data
- Description: Release version to use. Overrides the version found in start_erl.data.
{node1, #{
release_vsn => "0.1.0"
}}
erts_vsn
- Type: string()
- Default: Read from start_erl.data
- Description: ERTS (Erlang Runtime System) version to use.
{node1, #{
erts_vsn => "13.0"
}}
cpu
- Type: string()
- Default: undefined
- Description: Requires taskset. The value is passed to taskset's "-c" option to select one or more CPU cores to bind the node to (Linux only).
{node1, #{
cpu => "0" % Bind to CPU core 0
}}
{node2, #{
cpu => "0,5,8-11" % Bind to CPU cores 0, 5 and the range 8-11
}}
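To illustrate the mapping: the cpu string is handed to taskset's -c flag unchanged, so the node is effectively launched through a wrapper command along these lines (the release path and erl invocation shown here are hypothetical):

```shell
# Hypothetical sketch: with cpu => "0,5,8-11", the node's start command is
# prefixed with taskset -c, pinning the node to cores 0, 5 and 8 through 11.
CPU="0,5,8-11"
CMD="taskset -c $CPU /opt/myapp/release/bin/erl -boot start"
echo "$CMD"
```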
vm_args
- Type: [string()]
- Default: []
- Description: Additional VM arguments to pass to the supervised node. These arguments are appended after the standard boot arguments.
{node1, #{
vm_args => [
"-kernel", "inet_dist_use_interface", "{127,0,0,1}",
"-kernel", "inet_dist_listen_min", "9100",
"-kernel", "inet_dist_listen_max", "9200"
]
}}
For development you can edit sys.config and test various node setups.
$ rebar3 shell --sname grisp_supervisor
You can override the default behaviour by customizing the application env variables in the sys.config file.
config_file
- Type: string()
- Default: "/etc/grisp_supervisor.config"
- Description: Path to an external term file containing node definitions. If nodes are defined inline, this file is ignored.
{grisp_supervisor, [
{config_file, "/path/to/custom/config"}
]}
nodes
- Type: [node_config()]
- Default: []
- Description: List of nodes to supervise. Each node can be configured with various options.
{grisp_supervisor, [
{nodes, [
node1, % Simple node with default configuration
{node2, #{ % Node with custom configuration
root_dir => "/path/to/release",
boot => "start",
config => "sys.config",
vm_args => [
"-kernel", "inet_dist_use_interface", "{127,0,0,1}"
]
}}
]}
]}