ericsysmin's DevOps Blog

Using Ansible set_fact to generate lists of objects

In some cases you may want to create nearly identical objects from a list of values or from another dictionary.

This was a commonly needed ability at VMware on the NSX ALB (Avi) team: for much of our infra, and for our customers, we had a list of servers that we needed to build into a list of dictionaries, because we require more than just a specific IP.

This is how to do it (these are tasks, not the entire playbook)
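A sketch of the tasks, assuming pool_servers is a comma-separated string of addresses:

- name: Create the empty servers_list fact
  set_fact:
    servers_list: []

- name: Append a server dict for each address in pool_servers
  set_fact:
    servers_list: "{{ servers_list + [{'ip': {'addr': item, 'type': 'V4'}, 'enabled': 'true'}] }}"
  loop: "{{ pool_servers.split(',') }}"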

So let's review what we did.

We created the servers_list fact with a default value of an empty list, and then for each of the servers in pool_servers (separated by ,) we loop, appending the dict {'ip': {'addr': item, 'type': 'V4'}, 'enabled': 'true'} to servers_list.

This can be applied to any kind of situation where you need to create a list of complicated objects.

The returned output would look like this.
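Assuming pool_servers were set to 10.0.0.1,10.0.0.2 (illustrative addresses), servers_list would contain something like:

servers_list:
  - ip:
      addr: 10.0.0.1
      type: V4
    enabled: 'true'
  - ip:
      addr: 10.0.0.2
      type: V4
    enabled: 'true'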


Continue reading...
Accessing Raw Files on Authenticated GitLab

Recently, I started working on more repositories on GitLab. One of the common items in my Ansible testing is the use of URL lookups when templating my Dockerfiles in Molecule. On authenticated GitLab there’s a completely different method, which requires the GitLab API endpoints; those, in turn, require different formatting and token auth. The details for this can be found here: https://docs.gitlab.com/ee/api/repository_files.html#get-raw-file-from-repository

Searching around, I found that you can pass the token via the private_token parameter in the URL.

Because you need to include the folder path as an encoded value, it took a lot of trial and error to figure out how to handle complicated strings.

Formats like this, DO NOT WORK:
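For example, an inline attempt along these lines fails; the urlencode filter here leaves the / characters in the path untouched, so GitLab never sees the %2F-encoded path it requires (the host, project ID, and file path are made up for illustration):

{{ lookup('url', 'https://gitlab.example.com/api/v4/projects/12345/repository/files/' ~ ('docker/ansible/Dockerfile' | urlencode) ~ '/raw?ref=master&private_token=' ~ gitlab_token, split_lines=False) }}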

But after a series of attempts, THIS WORKS:
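A hypothetical reconstruction of the working pattern, using a Jinja set with the format() filter and then encoding the result; the replace('/', '%2F') step is my assumption for how the path separators get encoded:

{% set file_path = '%s/%s' | format('docker/ansible', 'Dockerfile') %}
{{ lookup('url', 'https://gitlab.example.com/api/v4/projects/12345/repository/files/' ~ (file_path | replace('/', '%2F')) ~ '/raw?ref=master&private_token=' ~ gitlab_token, split_lines=False) }}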

Some explanations of my findings: the urlencode filter did not work when used inline in the lookup; it made no changes to the file path. To separate the steps, I had to split it out into a Jinja set statement that builds the var as a string using the format() Jinja filter, and then take the result and create an encoded path to meet the encoding requirements of GitLab’s API.


Continue reading...
Ansible Collections: Testing only what’s changed

Previously

When testing roles before GitHub Actions, it was assumed that you’d have one repository for each role. But with the addition of collections, that is no longer the case. Your collection can now have multiple roles and modules, and often you do not need to test everything when only one role or one set of modules has changed.

Using GitHub Actions, there’s a way to do this.

Now with GitHub Actions

Using GitHub Actions and workflows, we can configure which events will trigger a test (workflow) run. In my example, which I use on all of my collections, the tests are triggered only on pull request and push events.
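A sketch of the trigger configuration, consistent with the paths discussed below (the duplication of the paths lists is deliberate, for the reason explained next):

on:
  push:
    paths:
      - 'roles/zabbix_agent/**'
      - 'molecule/zabbix_agent/**'
      - '.github/workflows/zabbix_agent.yml'
  pull_request:
    paths:
      - 'roles/zabbix_agent/**'
      - 'molecule/zabbix_agent/**'
      - '.github/workflows/zabbix_agent.yml'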

So if you notice in the example, I configured my tests to run on both push and pull_request. Unfortunately, GitHub Actions doesn’t support YAML anchors yet, so I couldn’t use them to avoid repeating the paths.

Why did I choose those paths?

'roles/zabbix_agent/**' – tells GitHub Actions to watch all the files underneath the zabbix_agent role

'molecule/zabbix_agent/**' – watches all the files that are part of the Molecule testing for zabbix_agent

'.github/workflows/zabbix_agent.yml' – the file that defines the GitHub Actions workflow itself

This configuration ensures the workflow runs only when a file used for testing or executing this role is modified, so you don’t waste testing time on GitHub Actions and other tests can run on other repositories. You can find more options here: https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#on

Continue reading...
Using a Dockerfile Repo for Molecule Dockerfiles

I’d like to share with you another design for testing your Ansible collections, modules, playbooks, and roles. Molecule used to include a file named Dockerfile.j2. This template, in the past, created your Docker container on execution. It’s since moved away from that and now only uses the base image you provide it via molecule.yml. In some cases, you need more than what the base image offers, and you may not want to create Docker images and upload them to Docker Hub or Quay.io. I wanted a solution and tests that didn’t require people to download my Docker images from Docker Hub.

Dockerfile.j2 with lots of Jinja

I prefer building my images using Dockerfile each time I test. It’s relatively quick and ensures that my host is testing against the latest packages that are installed by the Dockerfile.

However, I have lots of roles, which means each role had at least one Dockerfile, and the Dockerfiles were precisely the same. A simple change to one Dockerfile usually meant I needed to update all of the others. What if I need systemd installed? systemd is different on many operating systems: different files are needed, as well as different install commands. So I initially started building a more complicated Dockerfile.j2, which used the platform values from Molecule. But then, after adding CentOS, Debian, Ubuntu, Fedora, and many of their different versions, it got complicated.

It was overly complicated, and I was losing track of the if/then statements, “Which OS should run which commands?”, and many other questions. I gave up. It’s not maintainable, especially when there have been PRs adding support for SUSE and Arch Linux, so now I need to add those to my tests. Three words. OUT OF HAND. So I had to change how I tested. I’m not going to duplicate a Dockerfile that’s this complicated 10+ times per collection. Maybe I can do file links? That worked, but then I had to manage the same files in each of my roles/collections. Again, not scalable. I wanted something that was easy to maintain and made it easy to add new OS support when needed. Then a couple of things hit me.

  • Molecule Uses Ansible (obviously)
  • Ansible has Lookup Filters

URL Lookup for Dockerfile.j2

What if I could do a URL lookup against a GitHub repository, allowing me to manage the same Dockerfiles for systemd and Ansible dependencies across all of my roles? So, I deleted all the contents of Dockerfile.j2 and replaced it with this:

{{ lookup('url', 'https://raw.githubusercontent.com/ericsysmin/docker-ansible-images/master/' ~ item.image ~ '/Dockerfile', split_lines=False) }}

So each time Molecule runs, it connects to this location, grabs the Dockerfile, and then uses it to build each Docker container used by Molecule. Now I can centrally manage all of my Dockerfile files and simplify them by removing all of the if/then statements and other logic. This does require that the system running Molecule has internet access to the file location; if the lookup fails, the Molecule run will also fail.
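For context, the item.image value in the lookup above comes from each entry under platforms in molecule.yml, since Molecule templates the Dockerfile once per platform. A minimal, illustrative example (the names are placeholders matching the directory layout of the Dockerfile repository):

platforms:
  - name: centos8
    image: centos8
  - name: ubuntu1804
    image: ubuntu1804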

Now, in each of my roles, throughout my collections and standalone, I can modify my Dockerfiles and manage them from one location, just as if I had decided to produce Docker images from these Dockerfiles and share them on Docker Hub or Quay.io.

Continue reading...
Ansible Collections: Automating the Release Process to Galaxy

Since we are moving to Ansible Collections, some things are changing. Now, when you create or update your collection, Ansible Galaxy will no longer automatically discover it via a webhook. For Galaxy to know about your collection, you have to upload a tar.gz file that contains the result of the ansible-galaxy collection build command.

However, many of us may still want to automate that process, and with @geerlingguy‘s help I was able to fully automate the release process: not just from tagging a release, but from creating a release as we would before. So how does this work?

Creating the Build Directory

First, we need to create the build/ directory and include a couple of files.

Instead of having a galaxy.yml file in our root, we will need to generate the file when we execute the playbook.

This is the galaxy_deploy.yml  playbook.
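The playbook is roughly along these lines; the collection name, the tag extraction, and the exact publish flags below are illustrative assumptions:

---
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    # github_tag is passed in from the workflow as github.ref,
    # e.g. refs/tags/1.0.0; keep only the tag portion.
    tag: "{{ github_tag.split('/')[-1] }}"

  tasks:
    - name: Template galaxy.yml for this release
      template:
        src: templates/galaxy.yml.j2
        dest: ../galaxy.yml

    - name: Build the collection artifact in the repository root
      command: ansible-galaxy collection build --force
      args:
        chdir: ../

    - name: Publish the collection to Ansible Galaxy
      command: >-
        ansible-galaxy collection publish
        ../acme-my_collection-{{ tag }}.tar.gz
        --api-key={{ lookup('env', 'ANSIBLE_GALAXY_TOKEN') }}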

You’ll then need to create a build/templates folder, and add the galaxy.yml.j2 file within that templates folder.
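An illustrative galaxy.yml.j2 with placeholder metadata (only version comes from a variable):

namespace: acme
name: my_collection
version: "{{ tag }}"
readme: README.md
authors:
  - Your Name <you@example.com>
description: An example collection
license:
  - MIT
tags:
  - example
repository: https://github.com/acme/my_collection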

Edit the values to fit your Ansible Collection; the only var I use is {{ tag }}, which will be used later on.

Ok, so now that we’ve created the build components, we need to do the automation part. I chose to use GitHub Actions again, as they are the recommended path for the repositories sitting at https://github.com/ansible-collections.

Configuring the GitHub Action Workflow

In your .github/workflows/ folder you’ll need to create a release workflow. To do this I used the following GitHub Actions workflow YAML. I called it release.yml, and it sits at .github/workflows/release.yml. This is an example of what you can use:
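A sketch consistent with the steps described below; the action versions and exact step layout are assumptions:

name: Release

on:
  release:
    types: [created]

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the code
        uses: actions/checkout@v2

      - name: Set up Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: '3.8'

      - name: Install the latest pip
        run: pip install --upgrade pip

      - name: Install Ansible
        run: pip install ansible

      - name: Build and publish the collection
        run: >-
          ansible-playbook galaxy_deploy.yml
          -e "github_tag=${{ github.ref }}"
        working-directory: build
        env:
          ANSIBLE_GALAXY_TOKEN: ${{ secrets.ANSIBLE_GALAXY_TOKEN }}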

Using the on value, we are able to set the workflow to execute only when a release is created in GitHub. This ensures we have a GitHub ref to use against the playbook. It will also sync your Ansible Galaxy releases with your GitHub releases.

If you noticed, we have a key here that provides our Ansible Galaxy token: ${{ secrets.ANSIBLE_GALAXY_TOKEN }}. For us to use this token, we need to get it from Ansible Galaxy and add it to our repository secrets. You can find your key at https://galaxy.ansible.com/me/preferences under API Key.

Ansible Galaxy Preferences Page

Within the GitHub repo, go to Settings -> Secrets.
GitHub Settings Page

Then, on that page, add a new secret and name it ANSIBLE_GALAXY_TOKEN.

GitHub Secrets Page

Now, when the workflow runs, it will grab this secret from GitHub and be able to authenticate to Ansible Galaxy.

The on section tells GitHub Actions to run this workflow only when a release is created. Creating the release is done in the GitHub UI, just like you did in the past to release a new version of a role.

The steps section:

  • checks out the code
  • configures python 3.8 on the host
  • installs the latest version of python pip
  • installs ansible
  • then runs the playbook with the github.ref  value from the GitHub Release action

Once this is done, you will have the release version uploaded automatically to your Ansible Galaxy account.

Continue reading...