ericsysmin's DevOps Blog

Import Self-Signed Cert for Chrome/Edge/Chromium on Mac


With the latest versions of Chrome, Edge, and Chromium, it’s no longer possible to load pages served with self-signed certificates; the browser displays a NET::ERR_CERT_INVALID error.

(Screenshot: the NET::ERR_CERT_INVALID error page)

When you click the “Advanced” button, it does not offer an option to ignore the warning and proceed.

To work around this, you can download the certificate and then import it into the trusted certificate store.

I used OpenSSL to download the certificate, but there are other options. Here are the commands I used:
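A minimal sketch of those commands (the hostname and output filename are placeholders):

```sh
# Fetch the server's certificate and save it as PEM.
# Replace myserver.local:443 with your host and port.
echo | openssl s_client -connect myserver.local:443 -servername myserver.local 2>/dev/null \
  | openssl x509 -outform PEM > myserver.crt
```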

This will add your certificate to the System Keychain and trust it as an SSL certificate.
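On macOS this is done with the security command-line tool; a sketch, reusing the certificate file from the step above:

```sh
# Add the certificate to the System Keychain and mark it trusted as a root
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain myserver.crt
```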

If you get the error:

try replacing

with

Ansible Collections: Role Tests with Molecule


As many of you know or are finding out, Ansible is moving to Collections. But what does that mean? Well, it’s been a long time coming, but Collections provide a way to namespace modules, roles, and playbooks so they can all be combined in a single package for you to consume. They also allow businesses, partners, and contributors to update modules without adhering to the Ansible core release cycle. So, if AWS updates their API, the corresponding modules become available immediately, or at least much faster than we used to wait for core releases. But what does this mean for roles?

As many of us are finding out, we need to move our roles to the collection design. However, now we need to figure out how to test them with the new design, using collection namespaces, aka the FQCN (Fully Qualified Collection Name). So what we used to write:
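A minimal sketch of the old, short-name style (role name illustrative):

```yaml
# Old style: referencing a role by its short name
- hosts: all
  roles:
    - chrony
```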

will now end up looking something like this:
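A sketch of the FQCN form, assuming the ericsysmin.system namespace of the collection linked at the end of this post:

```yaml
# New style: referencing the role by its FQCN (namespace.collection.role)
- hosts: all
  roles:
    - ericsysmin.system.chrony
```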

We are also going to have a new folder structure, using ansible_collections/namespace/collection_name:
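A sketch of that layout for this collection (contents abbreviated):

```
ansible_collections/
└── ericsysmin/
    └── system/
        ├── galaxy.yml
        ├── roles/
        │   └── chrony/
        └── molecule/
            └── chrony/
```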

Luckily, the Molecule team and all of its contributors ensured that collections are recognized and supported. I will cover how we can test this with GitHub Actions (which also happens to be the preferred way at this moment to test your collections, as seen in the official Ansible-Collections GitHub organization, https://github.com/ansible-collections).

Since Travis-CI can’t do matrix workflows with multiple user-provided axes, I decided to just treat this as a single playbook. So the only option I found looked like this:
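A sketch of such a monolithic converge playbook (role names and conditions illustrative):

```yaml
# One converge playbook covering every role, guarded with `when` statements
- name: Converge
  hosts: all
  roles:
    - role: chrony
    - role: epel
      when: ansible_os_family == "RedHat"
    - role: ssh
    - role: timezone
```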

As you can see, in this format I had to make sure RedHat-specific roles don’t get run on non-RedHat systems. But this tests every role together and doesn’t easily let me scale to even more operating systems. It gets long and crazy with lots of when statements, and each time you add a role, you have to edit this one playbook and ensure the environment configuration is correct for all of the roles. It was at this point I realized maybe I should move away from Travis-CI, and the Ansible team encouraged me to as well. Gundalow and others recommended moving to GitHub Actions, which is now the preferred method in the community. So I explored that option.

At this time I also decided to move away from the monolithic “default” scenario and instead divide scenarios by role. This is what I moved to:
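A sketch of the per-role scenario layout (role names illustrative; the molecule.yml and Dockerfile.j2 files are the ones referenced at the end of this post, and a converge playbook per scenario is assumed):

```
molecule/
├── chrony/
│   ├── Dockerfile.j2
│   ├── converge.yml
│   └── molecule.yml
├── epel/
│   ├── Dockerfile.j2
│   ├── converge.yml
│   └── molecule.yml
└── ssh/
    ├── Dockerfile.j2
    ├── converge.yml
    └── molecule.yml
```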

Now I was able to individually ensure each environment was correct, including each role’s dependencies, without affecting the others. It prevented any form of cross-contamination in testing and in what’s expected in requirements. This fixed one of the issues I had with testing. It did add a bit of complexity, but each scenario is almost identical, so I could easily copy and paste one of them to help build tests for a new role.

When I started testing how I would do this using GitHub Actions, I explored matrix workflows. That looked something like this (which was really awesome because I couldn’t do it in Travis-CI… I don’t know if I can go back to Travis-CI, because for me GitHub’s tests run much faster):
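A sketch of such a matrix workflow, sized to match the 4 x 6 matrix discussed below (role and distro lists illustrative; it assumes each scenario’s molecule.yml reads a MOLECULE_DISTRO variable):

```yaml
# One job fanned out across every role and every distro: 4 x 6 = 24 runs
name: molecule
on: ["push", "pull_request"]

jobs:
  molecule:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        role: [chrony, epel, ssh, timezone]
        distro: [centos7, centos8, debian9, debian10, ubuntu1604, ubuntu1804]
    steps:
      - uses: actions/checkout@v2
        with:
          path: ansible_collections/ericsysmin/system
      - uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - run: pip install ansible "molecule[docker]" docker
      - run: molecule test -s ${{ matrix.role }}
        working-directory: ansible_collections/ericsysmin/system
        env:
          MOLECULE_DISTRO: ${{ matrix.distro }}
```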

However, here’s the issue with this: each commit, no matter where it’s made, triggers the whole matrix, meaning 4 x 6 = 24 tests even though I modified one role. Also, EPEL doesn’t work on Ubuntu or Debian, so I’d have to use a lot of these exclude statements:

Of course, that’s not scalable. So I gave it a bit of thought: why don’t we treat each role for what it is? It’s a separate role. Editing Role1 shouldn’t affect Role2, or even require testing Role2. So I decided to create multiple workflows. Using GitHub Workflows, I created the following structure:
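One workflow file per role, something like this (role names illustrative apart from the chrony.yml shown later):

```
.github/
└── workflows/
    ├── chrony.yml
    ├── epel.yml
    ├── ssh.yml
    └── timezone.yml
```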

Each workflow is specific to one role, and each one looks similar. This is the template I used:
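A minimal sketch of such a per-role workflow (the collection path, Python version, and distro list are assumptions):

```yaml
name: chrony
on: ["push", "pull_request"]

jobs:
  molecule:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        distro: [centos7, centos8, ubuntu1804]
    steps:
      - name: Check out the collection
        uses: actions/checkout@v2
        with:
          path: ansible_collections/ericsysmin/system
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install test dependencies
        run: pip install ansible "molecule[docker]" docker
      - name: Run Molecule tests against this role's scenario
        run: molecule test -s chrony
        working-directory: ansible_collections/ericsysmin/system
        env:
          MOLECULE_DISTRO: ${{ matrix.distro }}
```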

You can change the operating systems if you want; this is just one of the examples I had. This solved the cross-contamination issue I had earlier, made the tests easier to verify and check, and gave me independent test status badges for the README.md. But I still had an issue: if I make a change to Role1, Role2 still builds… not desired, and it wastes build time on GitHub Actions.

Luckily, in GitHub Actions we can include or exclude paths on the trigger. So I replaced the on: ["push", "pull_request"] section with:
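A sketch of the path-filtered trigger (the exact paths are assumptions based on the layout above):

```yaml
on:
  push:
    paths:
      - 'roles/chrony/**'
      - 'molecule/chrony/**'
      - '.github/workflows/chrony.yml'
  pull_request:
    paths:
      - 'roles/chrony/**'
      - 'molecule/chrony/**'
```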

So now the workflow only executes when files specific to that role are edited, saving build time (and possibly money) so that other jobs in my GitHub account can run.

Now my completed .github/workflows/chrony.yml looks like this:
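Combining the template and the path filters above, a sketch of the completed file (same assumptions as before):

```yaml
name: chrony
on:
  push:
    paths:
      - 'roles/chrony/**'
      - 'molecule/chrony/**'
      - '.github/workflows/chrony.yml'
  pull_request:
    paths:
      - 'roles/chrony/**'
      - 'molecule/chrony/**'

jobs:
  molecule:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        distro: [centos7, centos8, ubuntu1804]
    steps:
      - name: Check out the collection
        uses: actions/checkout@v2
        with:
          path: ansible_collections/ericsysmin/system
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install test dependencies
        run: pip install ansible "molecule[docker]" docker
      - name: Run Molecule tests against this role's scenario
        run: molecule test -s chrony
        working-directory: ansible_collections/ericsysmin/system
        env:
          MOLECULE_DISTRO: ${{ matrix.distro }}
```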

With these changes in place, I can now ensure that all of my roles are independently tested each time they are edited, without treating everything as one giant repo whose tests run for 10+ minutes as all of my roles execute. Now they are all tested in parallel, each against its own supported operating systems.

To see a copy of the repository used for this, see https://github.com/ericsysmin/ansible-collection-system, which you are free to clone, modify, change, or use as a reference. I did make changes to the Dockerfile because I do not host my own Docker images and don’t plan to. I highly suggest taking a look at my molecule/role_name/Dockerfile.j2 files to get an idea of what I did to get services to work. My changes to Dockerfile.j2 are based on Multi-distribution Ansible testing with Molecule on Travis-CI; also check out molecule/role_name/molecule.yml to see how I pass through the parameters.

If you have questions please feel free to comment.


Installing Wget via Xcode on Mac OSX


Install Xcode

First, you will need to ensure that Xcode is installed. You can install it via the App Store here: https://apps.apple.com/au/app/xcode/id497799835?mt=12
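Compiling from source also requires the Xcode command line tools; if they aren’t installed yet:

```sh
# Install the Xcode command line tools (compilers, make, etc.)
xcode-select --install
```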

Installing Wget

Next, we need to get the latest copy of Wget. We will use curl for this:
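For example (the version number is illustrative; check the GNU mirror for the latest release):

```sh
# Download the wget source tarball from the GNU mirror
curl -O https://ftp.gnu.org/gnu/wget/wget-1.20.3.tar.gz
```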

Next, we extract the tar.gz:
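Using the same example version:

```sh
# Unpack the source tree
tar -xzf wget-1.20.3.tar.gz
```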

Change your directory to the wget folder that was extracted, then configure it:
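A sketch, assuming OpenSSL is available as the SSL backend:

```sh
cd wget-1.20.3
# Configure the build; wget needs an SSL library, here OpenSSL is assumed
./configure --with-ssl=openssl
```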

Now we need to make wget and then install it:
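The standard build-and-install pair:

```sh
# Compile, then install (to /usr/local by default, hence sudo)
make
sudo make install
```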

Let’s test it to make sure it works:
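For example:

```sh
# Verify the binary is on the PATH, then fetch a page
wget --version
wget https://www.google.com
```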



Adding Self-Signed Root Certificates

Oftentimes you may need your internal systems to trust a non-public root certificate from an internal CA. Here are a few steps on how to add that root certificate.

Linux (CentOS 6)

  1. Install the ca-certificates package from yum.
  2. Enable the ability to dynamically update CA trusts.
  3. Add your certs to /etc/pki/ca-trust/source/anchors/.
  4. Update your CA trusts (all four commands are sketched below).
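A sketch of all four steps in order (the certificate filename is a placeholder):

```sh
# 1. Install the CA tooling
yum install -y ca-certificates

# 2. Enable dynamic CA trust updates
update-ca-trust force-enable

# 3. Add the internal root certificate to the anchors directory
cp internal-root-ca.crt /etc/pki/ca-trust/source/anchors/

# 4. Rebuild the consolidated CA trust store
update-ca-trust extract
```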

Linux (CentOS 7)

  1. Copy your root certificate to /etc/pki/ca-trust/source/anchors/.
  2. Update your CA trusts (both commands are sketched below).
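On CentOS 7 the dynamic configuration is already enabled, so the sketch shrinks to two commands (certificate filename again a placeholder):

```sh
# 1. Copy the internal root certificate into the anchors directory
cp internal-root-ca.crt /etc/pki/ca-trust/source/anchors/

# 2. Rebuild the consolidated CA trust store
update-ca-trust extract
```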

Pets vs. Cattle

Earlier I had mentioned Pets vs. Cattle, and one of my friends recently asked me what it actually means. Pets vs. Cattle is a concept brought on by the advent of cloud computing and the management of massive infrastructures that can be easily automated. It’s extremely different from traditional systems administration, which was born from physical machines, where you might need to purchase new hardware, build it, then deploy it within your datacenter.

Pets

First you give the server a name. Then you install services on it, and if the server gets corrupted or hardware fails, you have to repair the computer, possibly rebuild the hard drive, fix corrupted filesystems… I could go on; essentially, you nurse it back to health. This is referred to as "taking care of a pet": you take care of it, you update, upgrade, and patch it. Some applications still need this attitude, but many in this decade do not! Now things have changed, and with RESTful APIs we can quickly spin up infrastructure and go from scratch to a working machine in minutes.

Cattle

Now with cloud computing we are able to do amazing things. I can easily build a "golden image" from a running application, or even build that image from scratch and use it in an autoscaled environment (something Netflix has done for years!). Using tools like Ansible, Puppet, Salt, and Chef, you can quickly build new infrastructure that is known to work. You can even use those tools to build a brand-new image which can then be used to create 100 servers with one command-line entry. Instead of names, these guys get numbers. All servers are essentially identical to each other. If a server dies, you issue a couple of API calls (or none at all if you are using AWS Auto Scaling or similar, since it does it for you if you want it to), and now you have replaced that server in your environment. If a cow is ill or dying, you kill it and get another. That's where this analogy originates.