Posted by & filed under CI, Deployments, Docker, Web development.

CircleCI has quickly become one of the dominant continuous integration platforms available today, boasting support for Docker, Android, and now iOS through the use of OS X virtual machines.

GitHub has been the de facto standard for hosting Git repositories for some time now. While worthy competitors do exist, GitHub’s ability to innovate around a piece of freely available version control software adds significant value to its brand.

While CircleCI and GitHub each work well on their own, combining them in a software development workflow creates a valuable synergy.

Since there are many ways to set up a development workflow, this article will describe one particular way that we’ve integrated both GitHub and CircleCI into our workflow. There may certainly be other ways that work better for your team, so only use this brief guide as one example.

Our development flow is very similar to http://nvie.com/posts/a-successful-git-branching-model/ — with the exception being how we integrate CircleCI. Our circle.yml file looks a bit like this:

test:
  # Test overrides

deployment:
  development:
    branch: dev
    owner: REPO_OWNER
    commands:
      - tools/deploy.sh
  rc:
    branch: /release-.*/
    owner: REPO_OWNER
    commands:
      - tools/merge_master.sh
  production:
    branch: master
    owner: REPO_OWNER
    commands:
      - tools/deploy.sh

After our tests complete, we have three separate deployment sections, each handling a unique scenario based on the branch being built.

dev branch

Each time a feature branch is merged into our dev branch, a build is triggered, tests are run, and a deploy script pushes our Docker images to our private registry. Once that process completes, the same script deploys the Docker images to our Elastic Beanstalk environment.
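The post doesn’t show tools/deploy.sh itself, so here is a minimal sketch of what such a script might look like. The registry host, application name, and environment names are placeholders, and the aws CLI calls assume an existing Elastic Beanstalk setup:

```shell
#!/bin/sh
# Hypothetical sketch of tools/deploy.sh -- the registry, app, and
# environment names below are placeholders, not real values.
set -eu

REGISTRY="${REGISTRY:-registry.example.com}"
APP="${APP:-myapp}"

# Tag images with the commit SHA that CircleCI provides
image_tag() {
    echo "$REGISTRY/$APP:${CIRCLE_SHA1:-latest}"
}

# Build the image and push it to the private registry
push_image() {
    docker build -t "$(image_tag)" .
    docker push "$(image_tag)"
}

# Point the given Elastic Beanstalk environment at the new version
deploy() {
    aws elasticbeanstalk update-environment \
        --environment-name "$1" \
        --version-label "${CIRCLE_SHA1:-latest}"
}

# Only run for real when invoked by CI
if [ -n "${CIRCLE_BRANCH:-}" ]; then
    push_image
    case "$CIRCLE_BRANCH" in
        dev)    deploy "$APP-dev" ;;
        master) deploy "$APP-production" ;;
    esac
fi
```

Keying everything off CIRCLE_SHA1 means each build produces a uniquely tagged, traceable image.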

release branches

We use release branches to prevent our master branch from containing untested code. When we’re ready to ship, we use our Slack bot, Hubot, to create a new branch from a recent commit and prefix the name with “release-”. Instead of actually deploying anything, this deployment section runs our merge_master.sh script, which merges the release branch into master and pushes it to GitHub, triggering a build on master.
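A sketch of what the merge_master.sh step could look like, assuming CircleCI’s CIRCLE_BRANCH environment variable; only the “release-” prefix and the merge-and-push behavior come from the post, the rest is guesswork:

```shell
#!/bin/sh
# Hypothetical sketch of tools/merge_master.sh; only the "release-"
# prefix and the merge-to-master behavior come from the post.
set -eu

# A branch qualifies for merging only if it carries the release- prefix
is_release_branch() {
    case "$1" in
        release-*) return 0 ;;
        *)         return 1 ;;
    esac
}

# Merge the release branch into master and push, which in turn
# triggers the master build on CircleCI
merge_to_master() {
    git checkout master
    git merge --no-ff "$1" -m "Merge $1 into master"
    git push origin master
}

if [ -n "${CIRCLE_BRANCH:-}" ]; then
    if is_release_branch "$CIRCLE_BRANCH"; then
        merge_to_master "$CIRCLE_BRANCH"
    else
        echo "refusing to merge non-release branch $CIRCLE_BRANCH" >&2
        exit 1
    fi
fi
```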

While this does add an extra step and a bit of extra build time, it ensures that our master branch is tested and deployable at any given time. Release branches also give us the opportunity to easily deploy releases to our QA environment for testing and create an easy path for patches.

master branch

Our master branch is ultimately responsible for deployments. Once our release branch is merged, a build on master is triggered; when that build succeeds, the Docker images are pushed to our private registry and a new Elastic Beanstalk version is created.

When we’re ready to ship, a command to Hubot deploys the given Elastic Beanstalk version to our environment.

Protecting master

Our workflow dictates that master should always be stable and deployable. To strengthen this concept, we protect our master branch in GitHub, which prevents force pushes and merges from untested branches.
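Branch protection is normally toggled in the repository settings, but it can also be scripted against the GitHub API. A rough sketch, where the repository slug and the "ci/circleci" status-check context are assumptions:

```shell
#!/bin/sh
# Sketch of enabling branch protection via the GitHub API; the repository
# slug and the status-check context below are placeholders.
set -eu

# JSON body requiring the given status check to pass before merging
protection_payload() {
    printf '{"required_status_checks":{"strict":true,"contexts":["%s"]},"enforce_admins":true,"required_pull_request_reviews":null,"restrictions":null}' "$1"
}

# Only call the API when a token is actually available
if [ -n "${GITHUB_TOKEN:-}" ]; then
    curl -X PUT \
         -H "Authorization: token $GITHUB_TOKEN" \
         -H "Accept: application/vnd.github+json" \
         -d "$(protection_payload ci/circleci)" \
         https://api.github.com/repos/REPO_OWNER/REPO_NAME/branches/master/protection
fi
```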

The combination of GitHub and CircleCI provides us with a clean and relatively simple workflow that supports staged deployments, a stable master branch, and continuous integration.

Posted by & filed under CI, Deployments, DevOps, Web development.

It’s always wise to keep a non-git-log changelog in your code repository. A changelog provides a way to explain what changes were made in a more verbose and reader-friendly way than a terse commit log. Even more valuable is providing this information to your team.

While anyone can certainly look at your repository and view the recent additions to the changelog, if you’ve already integrated Slack into your workflow, it’s incredibly simple to leverage that same communication channel to broadcast the latest updates to your changelog.

Wherever your code deployments occur (we use CircleCI), a simple script that reads the top of your changelog and publishes it to a Slack channel can be really valuable.

While there are some Linux utilities that can do similar things, we use a short Python script that prints out the lines from the start of a file until it reaches a double newline:

#!/usr/bin/env python3

import sys

line_list = []

for line in sys.stdin:
    if line == '\n':
        print(''.join(line_list), end='')
        break

    line_list.append('{}\n'.format(line.strip()))


We send the output of this to our Slack channel at deployment time (a simple HTTP POST to our webhook URL), alerting everyone in the channel to the updates.
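The notification step might look something like this; the webhook URL comes from the environment, and `changelog_head.py` is an assumed filename for the script above:

```shell
#!/bin/sh
# Sketch of the deployment-time Slack notification; the webhook URL and
# the changelog_head.py filename are assumptions, not real values.
set -eu

# Slack incoming webhooks accept a JSON body with a "text" field
slack_payload() {
    printf '{"text": "%s"}' "$1"
}

if [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
    entry="$(python changelog_head.py < CHANGELOG.md)"
    curl -s -X POST -H 'Content-Type: application/json' \
         -d "$(slack_payload "$entry")" \
         "$SLACK_WEBHOOK_URL"
fi
```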


Posted by & filed under aws, DevOps, Docker.

When you read about Hubot, you read about Heroku. Heroku seems to be the de facto hosting environment for Hubot, so much so that even Hubot’s own documentation references hosting it there, mainly because it can be set up in a snap and is free as long as you keep your usage levels on the low side.


Posted by & filed under Deployments, DevOps, Docker, Web development.

MySQL still plays a large part in many software stacks, and while many IaaS vendors offer their own hosted versions (e.g. Amazon RDS), it’s still fairly common to run MySQL in a Docker container, especially in development environments.

One common problem with MySQL is initializing it before use and having your application connect only after initialization is complete. While some find it acceptable to include initialization code in their application startup, in my own projects I prefer a run script that handles initialization as part of starting the container’s process, and then have the application’s startup gracefully handle the connection.
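For the application side, the gist of gracefully handling the connection is a retry loop. A minimal sketch, assuming the MySQL client tools are available in the container; the host name and retry budget are placeholders:

```shell
#!/bin/sh
# Sketch of waiting for MySQL before starting the application; the host
# name and retry budget here are assumptions.
set -eu

# Poll the server until it answers a ping or the retry budget runs out
wait_for_mysql() {
    host="$1"
    tries="${2:-30}"
    i=0
    until mysqladmin ping -h "$host" --silent; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1
        fi
        sleep 1   # brief pause between attempts
    done
}
```

A run script would call `wait_for_mysql db` before exec’ing the application process.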


Posted by & filed under CI, Deployments, DevOps, Docker, Go, Web development.

After using Jenkins for some time, the natural progression towards cheaper and simpler alternatives kicked in. While not the prettiest thing to look at, Jenkins served us well, but the costs involved with running at least one full-time AWS instance (plus workers) for our CI needs were becoming questionable.
