Posted by & filed under aws, CI, Deployments, DevOps, Docker, Web development.

If you’re anything like me, you’ve found that configuring your application with environment variables works really well: it forces you to build an environment-agnostic system that can be deployed anywhere simply by adjusting a set of variables your application reads at run-time.

This strategy is in line with twelve-factor apps and also works very well with containerized applications.

If you’re anything like me, you’ve also found that once you have a local development environment, a CI/CD platform, development, staging, and production environments, and other developers on a team, propagating configuration changes and keeping your configurations sane is a headache.

Today I released a product called Environr that aims to solve this and adds some functional improvements on top.

Over the past few years I’ve seen good and very bad practices in this area:

  • Secret keys accidentally being committed to an env/prod.rb file
  • URLs being hardcoded into conf/django.py, limiting deployments to a single domain name
  • Production environment variables set in Elastic Beanstalk environments without any way to prevent accidental changes
  • Copying and pasting an .env file back and forth through Slack

Fortunately, some developers solve this challenge with services like Consul, etcd, and Kubernetes ConfigMaps. While running these is more complex than hardcoding configuration values into code, they provide reliable, distributed configuration stores for critical applications.

So where does Environr fit into this? Well, there’s still the headache of manually updating environment variables across all of your platforms every time a change is made. Managing a distributed key/value store in your own infrastructure also doesn’t mesh well with every product, especially for projects hosted on a platform like Elastic Beanstalk where every container instance must run the same containers. And not every developer runs those services on their local machine during testing, which again undermines the goal of consistent deployments.

With Environr, you authenticate with GitHub, create API keys and secrets for whoever needs access to your configurations, download the CLI tool, and then either import your existing configurations or create them in the console. From that point on, the CLI tool pulls environment variables that are all managed in one central location: update the config in one place and it propagates to wherever it needs to be.

The easiest way to configure the CLI on your local machine is to run environr-cli configure. This prompts you for your API key and secret (so they don’t end up in your shell’s history) and then an optional “profile”. The configuration process writes a YAML file to ~/.environr/credentials, so you can manage more than one account by passing the --profile flag to environr-cli.
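For example, setting up a second account under a named profile might look something like this (the profile name is illustrative, and the exact flag placement is my assumption rather than documented syntax):

# Interactive setup: you are prompted for the API key, secret, and an optional profile,
# so none of them end up in your shell history
environr-cli configure

# Later commands read ~/.environr/credentials and can select a profile explicitly
environr-cli --profile staging env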

The CLI can also be configured by setting two environment variables, ENVIRONR_API_KEY and ENVIRONR_API_SECRET, which effectively bootstraps your environment so that it can then be managed by environr-cli. This works best in CI platforms like CircleCI and AWS CodeBuild and in services like Elastic Beanstalk.
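In practice that bootstrap just means defining the two variables in the CI project settings or the Elastic Beanstalk environment configuration; conceptually it amounts to this (the values are placeholders, and the final invocation is shown without arguments for brevity):

# Defined in the CI platform or Elastic Beanstalk environment settings,
# never committed to the repository
export ENVIRONR_API_KEY=AK0000000000
export ENVIRONR_API_SECRET=0000000000000000

# environr-cli now authenticates from the environment and can pull the rest of the configuration
environr-cli env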

Once configured, the current CLI commands are just environr-cli import and environr-cli env: the former imports or updates configurations, and the latter injects a configuration into the shell or writes it to a file for use with Docker’s --env-file flag.
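A typical round trip might look something like the following; the positional arguments (a configuration name and file paths) are my guesses at the interface, so check the documentation for the real syntax:

# Import an existing .env file as a configuration
environr-cli import myapp-production .env

# Later, write that configuration back out to a file...
environr-cli env myapp-production > production.env

# ...and hand it to Docker at run-time
docker run --env-file production.env myapp:latest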

There will be a follow-up post with more use-case scenarios and a better walkthrough, but until then, feel free to check out the documentation.

P.S. Environr also comes with a handy “Lock” feature that lets you lock a configuration so no one can accidentally change your production environment variables.

Happy Configuring!

Posted by & filed under DevOps, Web development.

In light of the surge of DevOps consulting requests over the past several months, it’s my pleasure to introduce a new consulting service:

FortyFeet will offer short (one-hour) phone calls with DevOps experts to help solve pressing issues, weigh systems-design decisions, and provide general development and operations advice, scheduled in a way that gives more clients access to advisory resources that would otherwise require expensive contracts with tech firms or time wasted screening candidates on freelancing websites.

Start-ups can leverage this service to avoid costly early mistakes and make sure they’re building on a modern, stable technology stack that can scale as they grow.

Stable, revenue-generating organizations can also benefit by verifying their existing design decisions and leaning on FortyFeet to help modernize their current operations.

Interested? Schedule a call!

Posted by & filed under CI, Deployments, Docker, Web development.

CircleCI has quickly become one of the dominant continuous integration platforms available today, boasting support for Docker, Android, and now iOS through the use of OS X virtual machines.

GitHub has been the de facto standard for hosting Git repositories for some time now. While worthy competitors do exist, GitHub’s ability to innovate around a piece of freely available version control software adds significant value to its brand.

While CircleCI and GitHub each work well on their own, combining them in a software development workflow creates a valuable synergy.

Since there are many ways to set up a development workflow, this article describes one particular way we’ve integrated GitHub and CircleCI into ours. Other approaches may work better for your team, so treat this brief guide as just one example.

Our development flow is very similar to the branching model described at http://nvie.com/posts/a-successful-git-branching-model/, with the exception of how we integrate CircleCI. Our circle.yml file looks a bit like this:

test:
  # Test overrides

deployment:
  development:
    branch: dev
    owner: REPO_OWNER
    commands:
      - tools/deploy.sh
  rc:
    branch: /release-.*/
    owner: REPO_OWNER
    commands:
      - tools/merge_master.sh
  production:
    branch: master
    owner: REPO_OWNER
    commands:
      - tools/deploy.sh

After our tests complete, we have three separate deployment sections that handle unique scenarios based on the branch being built.

dev branch

Each time a feature branch is merged into our dev branch, a build is triggered, tests are run, and a deploy script pushes our Docker images to our private registry. Once that completes, the same script deploys the Docker images to our Elastic Beanstalk environment.
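The deploy script itself isn’t shown here, but a minimal sketch of what tools/deploy.sh does for the dev branch might look like this (the image name, registry host, S3 bucket, and environment name are all placeholders):

#!/bin/bash
set -e

# Tag and push the freshly built image to our private registry
docker tag myapp registry.example.com/myapp:$CIRCLE_SHA1
docker push registry.example.com/myapp:$CIRCLE_SHA1

# Register a new application version with Elastic Beanstalk and roll it out to dev
# (assumes the application bundle has already been uploaded to S3)
aws elasticbeanstalk create-application-version \
    --application-name myapp \
    --version-label "$CIRCLE_SHA1" \
    --source-bundle S3Bucket=myapp-deploys,S3Key="$CIRCLE_SHA1.zip"
aws elasticbeanstalk update-environment \
    --environment-name myapp-dev \
    --version-label "$CIRCLE_SHA1"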

release branches

We use release branches to prevent our master branch from containing untested code. When we’re ready to ship, we use our Slack bot, Hubot, to create a new branch from a recent commit and prefix the name with “release-”. Instead of actually deploying anything, this deployment section runs our merge_master.sh script, which merges the release branch into master and pushes it to GitHub, triggering a build on master.
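The merge step itself is simple; a rough sketch of what a script like merge_master.sh might do, using CircleCI’s built-in $CIRCLE_BRANCH variable:

#!/bin/bash
set -e

# Merge the release branch that triggered this build into master and push it,
# which in turn kicks off the master build and deployment
git checkout master
git pull origin master
git merge --no-ff "$CIRCLE_BRANCH"
git push origin master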

While this does add an extra step and a bit of extra build time, it ensures that our master branch is always tested and deployable. Release branches also give us the opportunity to easily deploy releases to our QA environment for testing, and they create an easy path for patches.

master branch

Our master branch is ultimately responsible for deployments. Once our release branch is merged and the triggered build on master succeeds, the Docker images are pushed to our private registry and a new Elastic Beanstalk application version is created.

When we’re ready to ship, a command to Hubot deploys the given Elastic Beanstalk version to our environment.
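Under the hood, deploying an existing Elastic Beanstalk version comes down to a single API call, something along these lines (the environment name and version label are placeholders):

# Point the production environment at an already-registered application version
aws elasticbeanstalk update-environment \
    --environment-name myapp-production \
    --version-label "$VERSION_LABEL"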

Protecting master

Our workflow dictates that master should always be stable and deployable. To enforce this, we protect our master branch in GitHub, which prevents force pushes and merges from untested branches.

The combination of GitHub and CircleCI provides us with a clean and relatively simple workflow that supports staged deployments, a stable master branch, and continuous integration.

Posted by & filed under CI, Deployments, DevOps, Web development.

It’s always wise to keep a changelog in your code repository that is separate from the git log. A changelog explains what changed in a more verbose and reader-friendly way than a terse commit log. Even more valuable is getting that information in front of your team.

While anyone can certainly look at your repository and view the recent additions to the changelog, if you’ve already integrated Slack into your workflow it’s incredibly simple to use that same communication channel to broadcast the latest changelog updates.

Wherever your code deployments occur (we use CircleCI), a simple script that reads the top of your changelog and publishes it to a Slack channel can be really valuable.

While some Linux utilities can do similar things, we use a short Python script that prints the lines from the start of a file until it reaches a blank line:

#!/usr/bin/env python

import sys

line_list = []

for line in sys.stdin:
    # A blank line marks the end of the latest changelog entry
    if line == '\n':
        break

    line_list.append(line.strip() + '\n')

# Print what was collected, even if the changelog has no blank line at all
sys.stdout.write(''.join(line_list))

We send the output of this script to our Slack channel at deployment time (a simple HTTP POST to our webhook URL), alerting everyone in the channel to the updates.
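The POST itself is just a curl call against a Slack incoming webhook. Here is a sketch, assuming the Python script above is saved as latest_changelog_entry.py and using jq to build the JSON payload (the webhook URL is a placeholder):

# Grab the latest changelog entry and post it to the channel
ENTRY=$(./latest_changelog_entry.py < CHANGELOG.md)
curl -X POST \
    -H 'Content-Type: application/json' \
    --data "$(jq -n --arg text "$ENTRY" '{text: $text}')" \
    https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXX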

Posted by & filed under aws, DevOps, Docker.

When you read about Hubot, you read about Heroku. Heroku seems to be the de facto hosting environment for Hubot, so much so that even Hubot’s own documentation makes reference to hosting it on Heroku, mainly because it can be set up in a snap and is free as long as you keep your usage levels on the low side.
