Tagged: puppet

Dynamic HAProxy configs with puppet

I’ve posted a little about puppet and our team’s ops work in the past, since my team has invested pretty heavily in the dev portion of the ops role. Our initial foray into ops included building a pretty basic puppet role-based system that we use to coordinate docker deployments of our java services.

We use HAProxy as our software load balancer, and v1 of our infrastructure management had us versioning a hardcoded haproxy.cfg for each environment and pushing that config out whenever we wanted to add or remove machines from the load balancer. It works, but it has a few issues:

  1. Cluster swings involve committing to github, which pollutes our version history with a bunch of unnecessary toggling
  2. Swings are difficult to automate since everything is driven by a flat config file that has to be pushed out from puppet
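
For context, the v1 flat file looked roughly like the hand-maintained backend stanza below (the backend name, hostnames, and addresses are made up for illustration); swinging a machine in or out meant editing the server lines by hand and committing the change:

    # haproxy.cfg (v1): a hardcoded copy per environment, versioned in git
    backend java_services
        balance roundrobin
        # adding or removing a machine means toggling these lines by hand,
        # committing, and having puppet push the new file out
        server app01 10.0.1.11:8080 check
        server app02 10.0.1.12:8080 check
        # server app03 10.0.1.13:8080 check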

Our team did a little brainstorming and came up with … Read more

Automating deployments with salt, puppet, jenkins and docker

I know, it’s a buzzword mouthful. My team has had good early success leveraging jenkins, salt, sensu, puppet, and docker to package and monitor distributed java services with a one-click deployment story, so I wanted to share how we’ve set things up.

First and foremost, I’ve never been an ops guy. I’ve spent time on build systems like msbuild and fake, but never on a full pipeline solution that also had to manage infrastructure; still, there’s a first time for everything. Most companies I’ve worked at have had all this stuff set up already and out of the minds of developers, but the place I’m at now doesn’t. I actually think it’s been a great opportunity to dive into the full flow of how you get your damn code out the door. I think if you are developing an application and don’t know how it gets from git … Read more

Testing puppet with docker and python

In all my past positions I’ve been lucky enough to have a dedicated ops team to handle service deployment, cluster health, and machine management. At my new company, however, there is much more of a “self-serve” mentality, where each team needs to handle those things itself. On the one hand this is a huge pain in my ass, since really the last thing I want to do is deal with clusters and machines. On the other hand, because we can spin up openstack boxes in our data centers at the click of a button, each team has the flexibility to host its own infrastructure and stack.

For the most part, my team and I deploy our java services as docker containers. Our container is a centos7 base image with a logstash forwarder and some other minor tooling baked in, and we … Read more
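
The excerpt cuts off before the details, but as a rough idea of the kind of harness the title is pointing at, here is a minimal python sketch (not necessarily our exact setup) that applies a puppet manifest inside a throwaway docker container and asserts it converged; the image tag, module path, and role class are illustrative placeholders:

    # test_puppet_apply.py -- minimal sketch: run `puppet apply` in a fresh
    # docker container and check the result. The image tag, module path, and
    # role class below are placeholders, not our real names.
    import os
    import subprocess

    IMAGE = "ourteam/centos7-puppet:latest"  # hypothetical base image with puppet installed

    def puppet_apply_in_container(manifest, module_dir):
        """Apply `manifest` in a throwaway container, mounting the local module tree."""
        return subprocess.run(
            [
                "docker", "run", "--rm",
                "-v", f"{os.path.abspath(module_dir)}:/tmp/modules:ro",
                IMAGE,
                "puppet", "apply", "--detailed-exitcodes",
                "--modulepath", "/tmp/modules",
                "-e", manifest,
            ],
            capture_output=True,
            text=True,
        )

    def test_role_manifest_applies_cleanly():
        result = puppet_apply_in_container("include role::java_service", "./modules")
        # with --detailed-exitcodes: 0 = no changes, 2 = changes applied,
        # 4 or 6 = failed resources, so anything other than 0/2 fails the test
        assert result.returncode in (0, 2), result.stderr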