Why Nomad?
HashiCorp Nomad is a simple and flexible scheduler and orchestrator that helps organizations reduce operational overhead and maximize infrastructure usage. We've been using Nomad to run our internal workloads since about 2018.
HashiCorp Boundary provides access to applications and critical systems with fine-grained authorizations without managing credentials or exposing your network internals.
While preparing a Sylius Docker image for one of our merchants, I realized that I needed to add wkhtmltopdf to the Alpine-based image, since it is a requirement of the Sylius Invoicing Plugin we use in the project.
As part of an AI project, I had to export data to train a tool developed for a customer. Instead of writing a script, I decided to use MySQL's built-in functions, which turned out to be a challenging experience but a valuable learning opportunity.
In the process of migrating our workloads from our old HashiCorp Nomad cluster to the new cluster, I encountered an issue where our Sonatype Nexus instance failed to start properly in the new environment.
About a year ago, our partner IONOS Cloud released an early access preview of their new Logging as a Service (LaaS) offering, which provides a centralized and scalable solution for collecting, monitoring, and analyzing logs.
It's time for some spring-cleaning. I decided to delete a large S3 bucket we had used for backups on IONOS Cloud.
In the process of migrating our HashiCorp Nomad workloads to our new Nomad cluster, I also tried to simplify our CI pipelines and ran into an issue with Nomad.
While setting up a GitLab CI build & deployment pipeline for one of our customer projects, I needed to expose GitLab's environment-specific variables in multiple jobs, as the information was required in both the build and deployment steps. Since the jobs had to run on different servers, I could not combine them into a single job.