Using GitLab environment specific variables in multiple jobs

Stephan Hochdörfer · Head of IT Business Operations · 3 min read

While setting up a GitLab CI build & deployment pipeline for one of our customer projects, I needed to expose GitLab's environment-specific variables in multiple jobs, since the information was required in both the build and the deployment steps. Because the jobs had to run on different servers, I could not simply combine them into a single job.

With current GitLab versions, it does not seem possible to expose environment-specific variables in multiple jobs, as GitLab only lets you attach an environment to one specific job - which makes sense from a deployment point of view but feels odd if you look at the build pipeline as a whole.

While thinking about possible solutions, I came up with a (brilliant) idea: Why not dump the needed env vars, store them as an artifact, and reuse them in all other jobs? Sounds easy, but it was a bit more complicated than I expected. I needed to filter out the specific environment variables I was interested in and ignore the rest to avoid causing problems (due to the different servers and CI setups involved). Plus, we had to deal with both ash and Bash shells in the other CI jobs.

Taking all of this into consideration, this is what the build job looks like in our CI pipeline:

build:node:staging:
  stage: build
  image: node:16-alpine
  environment:
    name: staging
    url: https://staging.example.com
  script:
    - yarn && yarn build
    - |
      apk add bash
      mkdir ./env
      bash -c "declare -px | grep -E 'REACT|TRAEFIK|COMPOSE_PROJECT_NAME'" > ./env/variables.sh
  artifacts:
    paths:
      - ./build
      - ./node_modules
      - ./env

To dump the needed environment variables on Alpine, we install Bash, create a new directory ./env, and invoke Bash to dump the environment variables to the file ./env/variables.sh. The Bash builtin declare -px prints all exported variables as declare statements, a format that can be sourced again later.

Since we are only interested in a subset of the environment variables, we filter for them using grep with a (simple) regular expression. Thanks to the -E flag, grep treats the pattern as an extended regular expression. In our case, this means we are looking for environment variables with either "REACT", "TRAEFIK", or "COMPOSE_PROJECT_NAME" in their names (or values, since grep matches anywhere in the dumped line).
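To illustrate the format, a dumped ./env/variables.sh could look like this (the variable names and values below are made up for the example):

declare -x COMPOSE_PROJECT_NAME="customer-staging"
declare -x REACT_APP_API_URL="https://api.staging.example.com"
declare -x TRAEFIK_FRONTEND_RULE="Host:staging.example.com"

Each line is a valid Bash statement, which is exactly what allows the file to be sourced again in the later jobs.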

Importing the environment variables in another Alpine-based job looks like this:

package:docker:staging:
  stage: package
  image: nexus3.loc/bitexpert/docker-compose:latest
  script:
    - |
      apk add bash
      bash -c "source ./env/variables.sh && docker-compose -f docker-compose.deploy.yml build --no-cache && docker-compose -f docker-compose.deploy.yml push web"
  artifacts:
    paths:
      - ./build
      - ./node_modules
      - ./env

The trick here is to source the env var file and to execute all Docker commands within that same Bash subshell. You cannot "just" source (import) the env variables in Bash and then continue running the commands in ash: the variables only exist inside the Bash process and are gone once it exits.
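To make the problem visible, here is a quick illustration (the exact error output is an assumption; BusyBox ash simply does not know the declare builtin used in the dump file):

$ sh -c '. ./env/variables.sh'
sh: declare: not found
$ bash -c 'source ./env/variables.sh && echo "$COMPOSE_PROJECT_NAME"'
customer-staging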

The final job triggering the deployment runs in a Bash shell, which means we don't need to wrap the commands in a separate Bash subshell:

deploy:stage:
  stage: deploy
  script:
    - source ./env/variables.sh
    - docker compose -f docker-compose.deploy.yml pull
    - docker compose -f docker-compose.deploy.yml stop || true
    - docker compose -f docker-compose.deploy.yml up -d
  allow_failure: false
  artifacts:
    paths:
      - ./build
      - ./node_modules
      - ./env

The whole approach is not ideal, but it was the best solution I could come up with. Marking the build job as the one connected to the GitLab environment means that if a subsequent deployment job fails, the GitLab UI will still make it look as if the last deployment worked like a charm. Also, whenever you introduce new environment variables, the grep expression needs to be extended.
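Extending the filter is a one-line change, though. For example, to additionally capture a hypothetical SENTRY_DSN variable, the dump command in the build job would become:

bash -c "declare -px | grep -E 'REACT|TRAEFIK|COMPOSE_PROJECT_NAME|SENTRY'" > ./env/variables.sh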