ericsysmin's DevOps Blog

Accessing Raw Files on Authenticated GitLab

Recently, I started working on more repositories on GitLab. One of the common items in my Ansible testing is the use of URL lookups when templating my Dockerfiles in Molecule. Unlike fetching a raw file from GitHub, GitLab requires a completely different method: its repository files API endpoints, which use different URL formatting and token auth. The details for this can be found here: https://docs.gitlab.com/ee/api/repository_files.html#get-raw-file-from-repository

Searching around, I found that you can pass the token to the URL via the private_token query parameter.
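Putting that together with the endpoint from the docs above, the request takes this shape (the project ID and file path here are placeholders, and the file path must be URL-encoded):

GET /projects/:id/repository/files/:file_path/raw?ref=master&private_token=<token>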

Because the API requires the file path to be passed as a URL-encoded value, it took a lot of trial and error to figure out how to build the more complicated strings.

Formats like this DO NOT WORK:
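For example, applying the urlencode filter inline inside the lookup, along these lines (the project ID, file path, and token lookup are illustrative placeholders, not my real values):

{{ lookup('url', 'https://gitlab.com/api/v4/projects/12345678/repository/files/' ~ ('molecule/default/Dockerfile' | urlencode) ~ '/raw?ref=master&private_token=' ~ lookup('env', 'GITLAB_TOKEN')) }}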

But after a series of attempts, THIS WORKS:
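Here is a sketch of the shape that finally worked for me (again with a placeholder project ID and token lookup; the set statements are the important part, and replace('/', '%2F') is one way to produce the encoded path GitLab expects):

{% set file_path = '%s/Dockerfile' | format(item.image) %}
{% set encoded_path = file_path | replace('/', '%2F') %}
{{ lookup('url', 'https://gitlab.com/api/v4/projects/12345678/repository/files/' ~ encoded_path ~ '/raw?ref=master&private_token=' ~ lookup('env', 'GITLAB_TOKEN'), split_lines=False) }}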

Some explanation of my findings: the urlencode filter did not work when used inline in the lookup; it made no changes to the file path. (Jinja's urlencode leaves / characters unescaped, while GitLab's API expects the path separators encoded as %2F.) To separate the steps, I had to split it out into Jinja set statements: build the path string with the format() filter, then take the result and create an encoded path that meets the encoding requirements of GitLab's API.


Continue reading...
Using a Dockerfile Repo for Molecule Dockerfiles

I’d like to share with you another design for testing your Ansible collections, modules, playbooks, and roles. Molecule used to include a file named Dockerfile.j2. This template, in the past, created your docker container on execution. Molecule has since moved away from that and now only uses the base image you provide it via molecule.yml. In some cases, you need more than what the base image offers, and you may not want to create docker images and upload them to Docker Hub or Quay.io. I wanted a solution and tests that didn’t require people to download my docker images from Docker Hub.

Dockerfile.j2 with lots of Jinja

I prefer building my images from a Dockerfile each time I test. It’s relatively quick and ensures that my tests run against the latest packages installed by the Dockerfile.

However, I have lots of roles, and this means each role had at least one Dockerfile, and the Dockerfiles were precisely the same. A simple change to one Dockerfile usually meant I needed to update all of the others. What if I need systemd installed? systemd is different on many operating systems: different files are needed, as well as different install commands. Well, I initially started building a more complicated Dockerfile.j2, which used the platform values from Molecule. But then, after adding CentOS, Debian, Ubuntu, Fedora, and many of their different versions, it got complicated.
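To give a sense of it, the file grew into something along these lines (a condensed, illustrative sketch, not the original template; the package names are examples):

FROM {{ item.image }}
{% if 'centos' in item.image or 'fedora' in item.image %}
RUN yum -y install sudo python3 systemd && yum clean all
{% elif 'debian' in item.image or 'ubuntu' in item.image %}
RUN apt-get update && apt-get install -y sudo python3 systemd && rm -rf /var/lib/apt/lists/*
{% endif %}
CMD ["/usr/sbin/init"]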

It was overly complicated, and I was losing track of the if/then statements, “Which OS should run which commands?” and many other questions. I gave up. It’s not maintainable, especially when there have been PRs adding support for SUSE and Arch Linux, so now I need to add those to my tests. Three words: OUT OF HAND. So I had to change how I tested. I’m not going to duplicate a Dockerfile that’s this complicated 10+ times per collection. Maybe I could do file links? That worked, but then I had to manage the same files in each of my roles and collections. Again, not scalable. I wanted something easy to do, easy to maintain, and easy to extend with new OS support when needed. Then a couple of things hit me.

  • Molecule Uses Ansible (obviously)
  • Ansible has Lookup Plugins

URL Lookup for Dockerfile.j2

What if I could do a URL lookup against a GitHub repository, letting me manage the same Dockerfiles for systemd and Ansible dependencies across all of my roles? So I deleted all the contents of Dockerfile.j2 and replaced it with this:

{{ lookup('url', 'https://raw.githubusercontent.com/ericsysmin/docker-ansible-images/master/' ~ item.image ~ '/Dockerfile', split_lines=False) }}

So each time Molecule runs, it connects to this file, grabs the Dockerfile, and then uses it to build each docker container used by Molecule. Now I can centrally manage all of my Dockerfile files and simplify them by removing all of the if/then statements and other logic. This does require that the system running Molecule has internet access to the file location; if the lookup fails, the Molecule run will also fail.
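For reference, the item.image value in that lookup comes from the platforms list in molecule.yml, so each platform entry maps to a directory in the Dockerfile repository. A minimal sketch (the image names are illustrative and assume matching directories exist in the repo):

platforms:
  - name: centos8
    image: centos8
    privileged: true
    command: /usr/sbin/init
  - name: ubuntu1804
    image: ubuntu1804
    privileged: true
    command: /usr/sbin/init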

Now, in each of my roles, throughout my collections and standalone, I can modify my Dockerfiles and manage them from one location, just as if I had decided to produce Docker images from these Dockerfiles and share them on Docker Hub or Quay.io.

Continue reading...