Nomad Variable Interpolation
Writing Nomad job files isn't that hard. Once you understand the basic structure, you can quickly write job files to deploy an application on a Nomad cluster.
If you want to influence which nodes a job runs on, or pass node information to your job, you use Nomad variables. Out of the box, Nomad defines plenty of variables, e.g., attributes of the node or of the current job. Additionally, you can pass your own variables to a Nomad job at deployment time. In our deployment pipeline, for example, we pass the Docker tag and the domain name to the job via command line parameters.
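As a minimal sketch of that pattern (the variable name, image, and file name are illustrative, not our actual pipeline), the job file declares an HCL variable and interpolates it where needed:

variable "docker_tag" {
  type    = string
  default = "latest"
}

job "my-app" {
  datacenters = ["dc1"]

  group "app" {
    task "app" {
      driver = "docker"

      config {
        # Resolved when the job file is parsed, before scheduling.
        image = "registry.example.com/my-app:${var.docker_tag}"
      }
    }
  }
}

The value is then supplied at deploy time:

nomad job run -var="docker_tag=v1.2.3" my-app.nomad.hcl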
Last year, I ran into a situation where these variables did not work as I expected. I wanted to deploy a service on all our nodes, but reserve a different amount of RAM depending on the node pool: an instance deployed in our staging or test node pool should not reserve as much RAM as one in our production node pool.
To solve this, I tried to use a map variable that stores the amount of RAM to reserve for each node pool, combined with the lookup function, which returns the map entry for a given key, or a fallback value (here 64 MB) if the key is missing:
variable "nodepool-memory" {
type = map(number)
default = {
prod = 2048
stage = 512
test = 128
}
}
job "my-job" {
node_pool = "all"
datacenters = ["dc1"]
type = "system"
group "my-job-group" {
task "my-job-task" {
# ...
resources {
cpu = 256
memory = lookup(var.nodepool-memory, "${node.pool}", 64)
}
}
}
}
However, when I tried to run the job, it did not work. After some experimentation, I realized that node attributes are not interpolated in a resources block, which was later confirmed in this GitHub issue I opened.
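For contrast, node attributes do get interpolated in other parts of the job specification, such as env and constraint blocks, because those are resolved per node. A minimal sketch:

task "my-job-task" {
  # ...

  env {
    # Resolved at runtime on the node the task is placed on.
    NODE_POOL = "${node.pool}"
  }
}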
The "best" solution I could come up with as an alternative was to define a group for each node pool and hardcode the resources there. The approach is not ideal, as it leads to a code of configuration duplication, but it seems to be the best option currently.
The job configuration now looks like this:
job "my-job" {
node_pool = "all"
datacenters = ["dc1"]
type = "system"
group "my-job-group-prod-pool" {
task "my-job-task" {
# ...
constraint {
attribute = "${node.pool}"
operator = "="
value = "master"
}
resources {
cpu = 256
memory = 2048
}
}
}
group "my-job-group-stage-pool" {
task "my-job-task" {
# ...
constraint {
attribute = "${node.pool}"
operator = "="
value = "stage"
}
resources {
cpu = 256
memory = 512
}
}
}
}
As you can see, each task in each group has a constraint block that limits which node pool the specific task gets deployed to:
constraint {
  attribute = "${node.pool}"
  operator  = "="
  value     = "prod"
}
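Because every node belongs to exactly one node pool, each node matches at most one group's constraint and therefore runs at most one variant of the task, with the memory reservation intended for its pool. Deploying works as usual (assuming the file is saved as my-job.nomad.hcl):

nomad job run my-job.nomad.hcl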