Our Docusaurus CI setup
It's been a few months since we migrated our blog from Silverstripe to Docusaurus. Quite a few blog posts have been published since then via the CI pipeline we set up. This blog post covers how we do things.
The Docusaurus code is hosted in our on-premise GitLab instance, and for CI we make heavy use of GitLab CI. Both the staging and the production instance run in Docker containers on top of our Nomad cluster.
Each blog post gets its own feature branch, which is turned into a Merge Request in GitLab. The Merge Requests are marked as "Draft" to avoid accidentally merging and publishing a blog post before we actually want to publish it. Sadly, GitLab does not yet support merging a Merge Request automatically at a scheduled time. That means we currently have to keep track manually of when to merge & publish a blog post. I am still looking for some automation support, but for now I can live with the status quo. The build and deploy processes are kicked off by creating a new tag on the master branch.
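Publishing therefore boils down to merging the Merge Request and creating a tag. As a minimal sketch of that manual step (the tag name below is purely illustrative, we are not tied to any particular naming scheme):

git checkout master && git pull
# tag name is just an example
git tag v2022.03.14
git push origin v2022.03.14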
In GitLab CI, we have configured the following stages:
stages:
  - test
  - build
  - staging
  - production
In the test stage, we try to build the Docusaurus blog. Since we haven't made any customizations to the Docusaurus logic yet, building the blog is enough to ensure that everything "works". The test job is only triggered for Merge Request pipelines. Since we don't allow pushes directly to master, that's enough to ensure that master won't break.
test:build:
  stage: test
  image: node:16.14.0-alpine
  only:
    - merge_requests
  script:
    - cd blog
    - npm i
    - npm run build
In the build stage, we create a new Docker image based on the files in the repository. For that, we run docker-compose build with a separate compose file, docker-compose.deploy.yml, and then push the image to our internal Docker registry (a sketch of that compose file follows the job definition below). Thanks to Docusaurus being a static site generator, the image we build is "just" an nginx instance containing all the statically generated files and images.
docker:
  stage: build
  image: bitexpert.loc/docker-compose:latest
  only:
    - tags
  except:
    - branches
  script:
    - |
      docker-compose -f docker-compose.deploy.yml build --no-cache --parallel
      docker push bitexpert.loc/blog:"${CI_COMMIT_TAG}"
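Our actual docker-compose.deploy.yml is not shown in this post, but to give you an idea, a minimal version could look like the sketch below. The service name, the Dockerfile.deploy file name, and the multi-stage build are assumptions on my part:

docker-compose.deploy.yml:

version: "3.8"
services:
  blog:
    # CI_COMMIT_TAG is provided by GitLab CI in tag pipelines
    image: bitexpert.loc/blog:${CI_COMMIT_TAG}
    build:
      context: .
      dockerfile: Dockerfile.deploy

Dockerfile.deploy:

# Stage 1: build the static site (mirrors what the test stage does)
FROM node:16.14.0-alpine AS builder
WORKDIR /app
COPY blog/ .
RUN npm i && npm run build

# Stage 2: serve the generated files with nginx
FROM nginx:stable-alpine
COPY --from=builder /app/build /usr/share/nginx/html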
For the staging and production deployments, we use Terraform and its Nomad provider. Both jobs are mostly identical; only the Nomad job definition differs slightly (mostly the domain configuration). A sketch of the Terraform setup follows the staging job definition below:
deploy:stage:
  stage: staging
  only:
    - tags
  except:
    - branches
  environment:
    name: staging
    url: https://blog.bitexpert.loc
  script:
    - cd terraform/stage
    - terraform --version
    - terraform init
    - terraform validate
    - terraform plan -out "planfile"
    - terraform apply -input=false "planfile"
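The Terraform code itself is not part of this post. Assuming the official Nomad provider, terraform/stage/main.tf could look roughly like this; the provider address and the blog.nomad jobspec file name are assumptions, not our actual configuration:

terraform {
  required_providers {
    nomad = {
      source = "hashicorp/nomad"
    }
  }
}

# Address of the Nomad cluster (assumed, adjust to your environment)
provider "nomad" {
  address = "https://nomad.bitexpert.loc:4646"
}

# Registers the blog job with Nomad; blog.nomad is a hypothetical
# jobspec file holding the container and domain configuration
resource "nomad_job" "blog" {
  jobspec = file("${path.module}/blog.nomad")
}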
Compared to the staging deployment, which happens automatically during the build, the production deployment needs manual approval due to the when: manual trigger. Once the staging site is updated, we have a look at it and then manually trigger the production deployment.
deploy:prod:
  stage: production
  only:
    - tags
  except:
    - branches
  environment:
    name: production
    url: https://blog.bitexpert.de
  script:
    - cd terraform/prod
    - terraform --version
    - terraform init
    - terraform validate
    - terraform plan -out "planfile"
    - terraform apply -input=false "planfile"
  when: manual
In general, I am quite happy with our setup. The only improvement I am still looking into is automatically merging the Merge Requests, so that blog posts get published without any manual intervention.