Category: Snippets

Dealing with a bad symbolic reference in Scala

Every time this hits me I have to think about it. The compiler barfs at you with something ambiguous like

[error] error: bad symbolic reference. the classpath might be incompatible with the version used when compiling Foo.class.

What this is really saying is that Foo.class references an import or class whose namespace isn't on the classpath, or whose members have changed. I usually get this when I have a project-level circular dependency via transitive includes, i.e.

Repo 1/
  /project A
  /project B -> depends on C and A
Repo 2
  /project C -> depends on A

So here the dependency C pulls in a version of A, but that version may not be the same one that project B pulls in. If I refactor namespaces in project A, then project B won't compile, because project C still references the old namespaces. It's a mess.

Thankfully Scala, unlike Java, lets your folder structure differ from your package namespace. So I can fake it till I make it: refactor while keeping the old namespace until I get a working build, and then update the secondary repo. It's like a two-phase commit.
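As a sketch of what that looks like (all names here are hypothetical), the source file can move to a new folder while its package declaration stays on the old namespace, so downstream consumers keep compiling:

```scala
// Physically lives at src/main/scala/com/example/newlayout/Widget.scala,
// but declares the old package so project C's imports keep resolving.
// All names are hypothetical.
package com.example.oldns

case class Widget(name: String)
```

Once every repo builds against this, a second pass renames the package to match the new folder layout — the second phase of the "commit".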

Scripting deployment of clusters in Asgard

We use Asgard at work to do deployments in both QA and production. Our general flow is: we check in, Jenkins builds, an AMI is created, and then … we have to manually go to Asgard and deploy it. That sucks.

However, it's actually not super hard to write some scripts that find the latest AMI for a cluster and prepare an automated deployment pipeline from a template. Here you go:

function asgard(){
  local VERB=$1
  local path=$2

  # ASGARD_HOST is assumed to point at your Asgard instance
  local url="${ASGARD_HOST}/${path}"

  http ${VERB} --verify=no "$url" -b
}

function next-ami(){
  local cluster=$1

  prepare-ami $cluster true | \
    jq ".environment.images | reverse | .[0]"
}

function prepare-ami(){
  local cluster=$1
  local includeEnv=$2

  asgard GET "deployment/prepare/${cluster}?deploymentTemplateName=CreateAndCleanUpPreviousAsg&includeEnvironment=${includeEnv}"
}

function get-next-ami(){
  local cluster=$1

  local next=`next-ami ${cluster} | jq ".id"`

  prepare-ami ${cluster} "false" | jq ".lcOptions.imageId |= ${next}"
}

function start-deployment(){
  local cluster=$1
  local payload=$2

  echo $payload | asgard POST "deployment/start/${cluster}"
}

The gist here is to

  • Find the next AMI image for a cluster
  • Get the prepared JSON for the next deployment
  • Update the prepared JSON with the new AMI image
  • Start the deployment with the updated payload
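The jq update step in get-next-ami can be sanity-checked on its own with a toy payload; the lcOptions.imageId field mirrors the snippet above, while the values are made up:

```shell
# Stand-in for Asgard's prepared deployment JSON (structure assumed)
payload='{"lcOptions":{"imageId":"ami-old"}}'

# next-ami hands back a quoted id, something like:
next='"ami-1234"'

# |= assigns a new value to .lcOptions.imageId in place
updated=$(echo "$payload" | jq ".lcOptions.imageId |= ${next}")

echo "$updated" | jq -r ".lcOptions.imageId"   # ami-1234
```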

To use it you’d do

> clusterName="foo"
> next=`get-next-ami $clusterName`
> start-deployment $clusterName "$next"
    "deploymentId": "1773"

And that's it!

RMQ failures from suspended VMs

My team recently ran into a bizarre RMQ partition failure in a production cluster. RMQ doesn't handle partition failures well, and while you can configure automatic handling (such as suspending the minority group), you still need to manually recover from it. The one time I'd encountered this before, I got a very useful message in the admin management page indicating that parts of the cluster were in partition failure, but this time things went weird.


  • Could not gracefully restart RMQ using rabbitmqctl stop_app/start_app; the commands would stall
  • Could not list queues for any vhost; rabbitmqctl list_queues -p [vhost] would stall
  • Logs showed partition failure
  • People could not consistently log into the admin API without stalls or other strange issues, even after clearing browsing data/local storage, going incognito, or trying different browsers
  • Rebooting the master did not help

In the end the solution was to do an NTP time sync and shut down all clustered slaves (fully powering off their VMs, not suspending them). Once that was done, the master was rebooted and it stabilized. After that we brought the slaves up one by one until the cluster went green.

Anyways, figured I’d share the symptoms and the solution in case anyone else runs into it.

Installing Leiningen on Windows

Figured I’d spend part of the afternoon playing with Clojure, but I was immediately thwarted trying to install Leiningen on Windows via PowerShell. I tried the MSI installer but it didn’t seem to do anything, so I went to my ~/.lein/bin folder and ran

.lein\bin> .\lein.bat self-install
Downloading Leiningen now...
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\Gow/etc/wgetrc
--2015-03-14 15:08:48--
Connecting to||:443... connected.
ERROR: cannot verify's certificate, issued by `/C=US/O=DigiCert Inc/ SHA2 Extended Validation Server CA':
  Unable to locally verify the issuer's authority.
To connect to insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.

Failed to download

Hmm, that's weird. For some reason the cert isn't validating with wget (which I have installed via Gow).

A quick google showed that this is a common problem with the Gow wget, and I wasn't about to skip the certificate check. I opened up the Leiningen installer bat file and saw that it checks what kind of download client your shell has: wget, curl, or PowerShell (in which case it creates a .NET WebClient and downloads the target file).

Since I have Gow in my path, wget comes up first, so I just switched the order around and now things work happily!

The relevant section in the lein.bat file is

rem parameters: TargetFileName Address
if NOT "x%HTTP_CLIENT%" == "x" (
    %HTTP_CLIENT% %1 %2
    goto EOF
)
call powershell -? >nul 2>&1
if NOT ERRORLEVEL 1 (
    powershell -Command "& {param($a,$f) (new-object System.Net.WebClient).DownloadFile($a, $f)}" ""%2"" ""%1""
    goto EOF
)
call curl --help >nul 2>&1
if NOT ERRORLEVEL 1 (
    rem We set CURL_PROXY to a space character below to pose as a no-op argument
    set CURL_PROXY= 
    if NOT "x%HTTPS_PROXY%" == "x" set CURL_PROXY="-x %HTTPS_PROXY%"
    call curl %CURL_PROXY% -f -L -o  %1 %2
    goto EOF
)
call wget --help >nul 2>&1
if NOT ERRORLEVEL 1 (
    call wget -O %1 %2
    goto EOF
)

Once the self-install completes, lein is available.

On a side note, I think you probably could have just downloaded the release file and plopped it into the ~/.lein/self-installs folder and it would have worked too.

Tiny types: Scala edition

Previously I wrote about generating value type wrappers on top of C# primitives for better handling of domain level knowledge. This time I decided to try it out in scala as I’m jumping into the JVM world.

With Scala we don't have the value-type capability that C# has, but we can sort of get there with implicits and case classes.

The simple gist is to generate stuff like


case class foo(data : String)
case class bar(data : String)
case class bizBaz(data : Int)
case class Data(data : java.util.UUID)

And the implicit conversions


object Conversions{
    implicit def convertfoo(i : foo) : String =
    implicit def convertbar(i : bar) : String =
    implicit def convertbizBaz(i : bizBaz) : Int =
    implicit def convertData(i : Data) : java.util.UUID =
}

Now you get a similar feel to primitive wrapping, with function-level unboxing, and you can pass your primitive case class wrappers to more generic functions.
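For example (a hypothetical caller), a function that takes the raw primitive will accept the wrapper, since the implicit conversion unboxes it at the call site:

```scala
import Conversions._

// Takes a plain String, but a foo can be passed directly;
// convertfoo unwraps it implicitly.
def shout(s: String): String = s.toUpperCase

val yelled = shout(foo("hello"))  // "HELLO"
```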

For this case I wrote a simple console generator and played around with zsh autocompletion for it too. Full source is located on my GitHub.

Quickly associate file types with a default program

I use JuJuEdit to open all my log files since it starts up fast and is pretty bare bones, but better than Notepad. The way my log4net appender is set up, log files are kept for 10 days and get a .N appended to them for each backup (log.1, log.2, and so on).

I hate having to go through each one and set the default program to open it with, since it's slow and annoying. A faster way is to use cmd (not PowerShell!) and the assoc and ftype commands.

You can associate an extension (like .2) with a “file type” (which doesn’t really mean anything) and then map that file type to a program to open it with.

For example:

>ftype logfile="C:\Program Files (x86)\Jujusoft\JujuEdit\JujuEdit.exe" %1
>assoc .3=logfile
>assoc .4=logfile
>assoc .5=logfile
>assoc .6=logfile

And now they all open with JujuEdit. If I ever want to change it, I just re-run ftype and all my log files will open with another program.

WCF Request Entity Too Large

I ran into a stupid issue today with WCF "request entity too large" errors. If you're sure your bindings are set properly on both the server and client, make sure to double-check that the service name and contract are set properly on the server.

My issue was that I had at some point refactored the namespaces where my service implementations were, and didn't update the web.config. For the longest time things continued to work, but once a request exceeded the default size limit (even though I had a binding that set the limits much higher), I got 413 errors.

So where I had this:

<service name="Foo.Bar.Service">
	<endpoint address="" binding="basicHttpBinding" bindingConfiguration="LargeHttpBinding" contract="Foo.Bar.v1.Service.IService"/>
</service>

I needed

<service name="Foo.Bar.Implementation.Service">
	<endpoint address="" binding="basicHttpBinding" bindingConfiguration="LargeHttpBinding" contract="Foo.Bar.v1.Service.IService"/>
</service>

How WCF managed to work when the service name was pointing to a non-existent class, I have no idea. But it did.

Short and sweet powershell prompt with posh-git

My company has fully switched to git and it’s been great. Most people at work use SourceTree as a GUI to manage their git workflow, some use only the command line, and I use a mixture of posh-git in PowerShell with TortoiseGit when I need to visualize things.

Posh-git, if you load the example from your profile, will set the default prompt to be the current path. If you go into a git directory it’ll also add the git status. Awesome. But if you are frequently in directories that are 10+ levels deep, suddenly your prompt is just obscenely long.

For example, this is pretty useless right?

[screenshot: a prompt consumed by a very long path]

Obviously it’s a fictitious path, but sometimes you run into them, and it’d be nice to optionally shorten that up.

It’s easy to define a shortPwd function and expose a global “MAX_PATH” variable that can be reset.


$global:MAX_PATH = 5

function ShortPwd {
    $finalPath = $pwd
    $paths = $finalPath.Path.Split('\')

    if($paths.Length -gt $global:MAX_PATH){
        $start = $paths.Length - $global:MAX_PATH
        $finalPath = ".."
        for($i = $start; $i -lt $paths.Length; $i++){
            $finalPath = $finalPath + "\" + $paths[$i]
        }
    }

    return $finalPath
}

In the posh-git example, make sure to load your custom function first, then change

Write-Host($pwd.ProviderPath) -nonewline

to

Write-Host (ShortPwd) -nonewline -foregroundcolor green

(I like my prompt green)

Now you can dynamically toggle the max length. I’ve set it to 5, but if you change it the prompt will immediately update:

[screenshot: posh-git prompt shortened to ..\powershell_scripts [master]]

For this and other PowerShell scripts, check out my GitHub.

Logitech MX mouse zoom button middle click on Ubuntu

Any good engineer has their own tools of the trade: keyboard, mouse, and licenses to their favorite editors (oh, and a badass chair).

I work now on an Ubuntu box and I wanted to get my Logitech MX mouse's zoom button to act as middle click. I really like this functionality since it's easy to copy, paste, close windows, and open new links with this button.

However, the button mapping in Ubuntu isn't trivial. On Windows you use the SetPoint program and call it a day, but in Linux land you need to put more work into it.

Here is a great tutorial describing how to do it, but for the lazy, here is the mapping you need.

"xte 'mouseclick 2'"
  b:13 + Release

What this says is “when button 13 is clicked, then released, issue a mouseclick 2 command”. xte is a program that simulates mouse and keyboard events, and xbindkeys (whose config you edit to set the xte mapping) is a program that lets you bind one key or mouse event to another key or mouse event.

Once I did this and started up xbindkeys, my zoom button (button 13) worked as middle click (mouseclick 2).

Pulling back all repos of a GitHub user

I recently had to relinquish my trusty dev machine (my work laptop) since I got a new job, and as such am relegated to using my old Mac laptop at home for development until I either find a new personal dev machine or get a new work laptop. For those who don’t know, I’m leaving the DC area and moving to Seattle to work for Amazon, so that’s pretty cool! Downside is that it’s Java and Java kind of sucks, but I can still do F#, Haskell, and all the other fun stuff on the side.

Anyways, since I’m setting up my home dev environment, I wanted to pull back all my GitHub repos in one go. If I only had a few I would’ve just cloned them by hand, but I have almost 30 repos, which puts me in the realm of wanting to automate it.

As any good engineer does, I did a quick google and found that someone had written a Ruby script to clone all of a user's repos using the GitHub API. However, the script is outdated: the GitHub API has changed, it returns JSON instead of YAML, and the URLs are all different.

So, here is the updated script:

#!/usr/bin/env ruby

require "json"
require "open-uri"

username = "devshorts"

url = "https://api.github.com/users/#{username}/repos"

JSON.parse(open(url).read).each do |repo|
  repo_url = repo["ssh_url"]
  puts "discovered repository: #{repo_url} ... backing up ..."
  system "git clone #{repo_url}"
end

Unlike the original script, this will clone to the current directory you are in using the same name of each repo (so no renaming it during the clone).
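To sanity-check the parsing step without hitting the network, the same logic works against a canned response (the sample below is made up; only the ssh_url field matters to the script):

```ruby
require "json"

# Made-up sample mirroring the shape of the GitHub API response
sample = '[{"name":"repo1","ssh_url":"git@github.com:devshorts/repo1.git"},
           {"name":"repo2","ssh_url":"git@github.com:devshorts/repo2.git"}]'

# Same extraction the clone loop performs
urls = JSON.parse(sample).map { |repo| repo["ssh_url"] }
puts urls.first   # => git@github.com:devshorts/repo1.git
```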

There are a ton of other backup options, but this was fun and simple (and a good way to get me back into using vim).