Enabling Continuous Delivery with GitHub Actions

One of the dreams I have had for my site is to be able to build updates, push them to GitHub, and have the update process happen automatically. Finally, I have achieved this with GitHub Actions.

I am not the biggest fan of server architecture or of optimising my Linux experience. In fact, these processes make me feel I am spending time in areas that detract from my normal coding; generally I would rather be coding than dealing with fiddly things that always take ages to set up and that I tend to forget after not touching them for a few months.

I originally wanted to add a continuous delivery (CD) pipeline a few months ago; I got the itch and installed Jenkins to build jobs. It seemed like a good idea at the time, but looking back it was totally unnecessary, as I never used it and it eats up so much server memory. Jenkins was overkill and complex, and I didn't really want to spend all my time trying to get it working. But then came GitHub Actions.

Looking back on it now, I could have done this exact process with Docker Hub, and it wouldn't have been that difficult, as Docker Hub is already part of my current process. I guess I never really looked into how to achieve what I wanted before; somewhat lost in a sea of DevOps, I didn't really know where to go with it.

At work we had just started to use GitHub Actions and Packages, and luckily I had signed up for them a while back, so I got in pretty early. With some working examples I was able to get a build process going. Actions is pretty nice: it integrates with a huge array of different repositories and offers a lot of power, making it a great choice. Below is my production build YAML file.

name: Production Build

on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
        with:
          ref: ${{ github.head_ref }}
      - name: Build the Docker image
        run: |
          echo "production build started"
          COMMIT_ID=$(git rev-parse HEAD)
          COMMIT_ID_SHORT=$(echo $COMMIT_ID | cut -c1-7)
          docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
          IMAGE_TAG=<username>/<my-project>:$COMMIT_ID_SHORT
          IMAGE_TAG_LATEST=<username>/<my-project>:latest
          echo "new image tag created $IMAGE_TAG"
          docker build --file Dockerfile \
            --build-arg JWT_SECRET=${{ secrets.JWT_SECRET }} \
            --build-arg GIT_API_KEY=${{ secrets.GIT_API_KEY }} \
            --build-arg GHOST_ADMIN_KEY=${{ secrets.GHOST_ADMIN_KEY }} \
            --build-arg GHOST_CONTENT_KEY=${{ secrets.GHOST_CONTENT_KEY }} \
            --build-arg DB_HOST=${{ secrets.DB_HOST }} \
            --build-arg DB_USERNAME=${{ secrets.DB_USERNAME }} \
            --build-arg DB_PASSWORD=${{ secrets.DB_PASSWORD }} \
            --build-arg DB_NAME=${{ secrets.DB_NAME }} \
            --build-arg GHOST_API_URL=https://blog.alexwilkinson.co \
            --tag $IMAGE_TAG \
            --tag $IMAGE_TAG_LATEST .
          docker push $IMAGE_TAG
          docker push $IMAGE_TAG_LATEST
          echo "production build: $IMAGE_TAG successfully created"

The above job runs when a new commit is pushed to the master branch, which is what the on section is for. It can be used with a whole array of different triggers to target specific events in the repository.
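As a sketch of what else the on section can target, the trigger can be narrowed by tags and file paths as well as branches; the tag pattern and ignored path below are purely illustrative:

```yaml
on:
  push:
    branches:
      - master
    tags:
      - 'v*'          # also run when a version tag is pushed
    paths-ignore:
      - 'README.md'   # skip builds for docs-only changes
```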

The jobs section lists what should happen in this action: the steps, along with the type of machine to run them on. There is a whole range of machines, and GitHub provides the compute for free until you exceed the build minutes, which for small personal projects shouldn't be too big an issue.
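Picking the machine is just a matter of the runs-on value; a minimal sketch, noting that the other hosted runner labels and their pricing may have changed since writing:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest   # hosted runners also include windows-latest and macos-latest
```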

The steps are where the real magic happens, outlining the overall process that will take place. The uses keyword is pretty interesting because it just pulls in a GitHub repo to do the work, so it is really flexible and has so much potential to be used in many different ways. The with keyword passes inputs to that action, and then there is the run block, which goes step by step doing:

  • Get the commit ID and the short version of that ID
  • Log in to Docker, in this case Docker Hub
  • Create the image tags for the Docker Hub repo with latest and the commit ID
  • Build the Docker image, passing in the environment variables from the GitHub secrets
  • Tag the images
  • Push the completed image to Docker Hub
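The tagging part of those steps can be sketched in isolation; the commit hash and repository name below are made up purely for illustration:

```shell
# Hypothetical full commit hash standing in for $(git rev-parse HEAD)
COMMIT_ID="4f2a9c1d8e7b6a5f4e3d2c1b0a9f8e7d6c5b4a3f"
# Short ID: first 7 characters, same as the workflow's cut -c1-7
COMMIT_ID_SHORT=$(echo "$COMMIT_ID" | cut -c1-7)
# One immutable tag per commit, plus a moving latest tag
IMAGE_TAG="myuser/my-site:$COMMIT_ID_SHORT"
IMAGE_TAG_LATEST="myuser/my-site:latest"
echo "$IMAGE_TAG"
echo "$IMAGE_TAG_LATEST"
```

The commit-ID tag means every build is individually addressable if you ever need to roll back, while latest is what the server actually pulls.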

Where Next

Now I have my image in Docker Hub. I could also use GitHub Packages; I only made the change while I was testing things and will probably move back to GitHub Packages later. With the image in Docker Hub it is quite accessible across the internet, which is great; it can also be put in a private repo to keep it hidden, though that slightly changes the last steps.

For me, I just wanted a simple solution to enable CD on my main site and probably other larger projects I work on. I tried Jenkins and didn't really get on with it. I also tried Kubernetes, but that proved to be very much overkill and more expensive than I would have hoped. It turns out there is a great little project called Watchtower, which runs as a Docker image on your server and watches for new versions of the images your containers were started from, updating them automatically. You start your container as normal:

docker run -d --name <my-project> <username>/<my-project>

A lot of this was done with the help of an awesome article that highlighted Watchtower for me and provided a great walkthrough of the process.
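For reference, getting Watchtower itself running looks something like the following sketch; containrrr/watchtower is where the project currently lives, and the socket mount is what lets it manage other containers, but check the project's own docs for the exact invocation:

```shell
# Run Watchtower as a container; it needs the Docker socket so it
# can inspect running containers and restart them on a new image
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```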


DevOps has always been an area that scared me, and I never like getting tangled in server complexities. However, when dealing with software it is important to have some level of understanding of all parts of the pipeline, even if you don't expect to ever work in those areas professionally. CD for personal projects is great: it allows for much more timely updates and improves long-term maintenance by encouraging better practices in updating elements. And at the end of it all, it is really just quite magical to write code, push to master, and watch the pipeline push the new code directly to the working server. That never gets old!