The Lean Startup

I read The Lean Startup a few weekends ago and it got me excited about developing in a lean way. What is the lean way? As I understood it, it means trying to fail as fast as you can so you can learn from those failures, and proving that what you're doing matters with actionable metrics.

The book is generally light on concrete processes, but the theory has changed the way I approach my personal projects. Before, I strived for technical perfection before releasing anything; I've realised this was a mistake, as it meant I was developing in a silo for months, sometimes years, often building features no one wants. Now I aim to build the minimum needed to validate the hypothesis that users want a feature, and back it up with data.

The data side of things is new to me, as I don't specialise in analytics, but it's interesting to learn how you can prove something analytically with control groups. So I've started reading Lean Analytics, another book in the Lean Series, and I've also ordered Running Lean, which I may end up reviewing at some point.

Anyhow, to sign off: if you haven't read The Lean Startup, definitely read it; if you've read it but can't remember it, I recommend you read it again.

Development environment

I've been spending a lot of time recently getting my development environment set up. I've found that once I learn a better way to do things, I don't want to go back to the old way; when I learnt about Puppet, I didn't want to go back to a shell script or manual installation. But these tools take a lot of investment to set up and get right, and then you may discover a complementary technology such as Hiera or Facter, at which point you have to go back to the drawing board and iterate again.

I'm finally satisfied with my development environment: it makes it easy to work on all the different technologies my projects use, from static sites to Node, Ruby and PHP, so I thought I'd share it on my blog.

My development environment, built with Vagrant, Packer, Puppet and Hiera.

Multi-factor authentication

I recently enabled multi-factor authentication for both my Google and LastPass accounts. It's amazing that I held out for so long given the risk associated with losing either account. After investigating multi-factor authentication solutions again, though, the backup options really sealed the deal; I don't know if support for them was added recently or I simply didn't invest enough time previously.

Google provides the ability to generate a collection of single-use backup codes, as well as the ability to register multiple phone numbers in case your primary number is inaccessible. For applications which don't yet support multi-factor authentication but use a Google account, you can generate an application-specific password.
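As an aside on how the six-digit codes from an authenticator app are produced: apps like Google Authenticator implement TOTP (RFC 6238), which is just HOTP (RFC 4226) driven by the current time step. This is a minimal sketch of the published algorithm, not Google's actual code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    mac = hmac.new(secret, struct.pack('>Q', counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack('>I', mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # TOTP is HOTP with counter = floor(unix_time / step) (RFC 6238)
    return hotp(secret, int(time.time()) // step, digits)

print(hotp(b'12345678901234567890', 0))  # RFC 4226 test vector: 755224
```

The printed value matches the first test vector from RFC 4226 Appendix D, which is a handy way to check an implementation like this.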

LastPass uses Google Authenticator for multi-factor authentication but also supports many other providers, including YubiKey, Toopher, Duo Security and Transakt, most of which I'd never heard of before. It also allows you to generate a one-time password which will grant access to your account if you forget your master password. Finally, you can export your passwords to CSV and print them off if you're using randomly generated passwords which you don't remember.

If you don't use multi-factor authentication, I highly recommend it. I also recommend using LastPass: maintaining unique passwords across many sites by hand becomes cumbersome, so I used to end up defaulting to five passwords with varying levels of entropy. If you don't like the idea of storing your passwords in the "cloud", I hear 1Password is also very good.
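The whole point of a password manager is that the passwords are random enough not to be memorable. As a rough sketch of what generation looks like, using Python's secrets module (the alphabet and length here are my arbitrary choices, not what LastPass actually uses):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Draw each character independently from a cryptographically secure RNG
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

With 94 possible characters per position, a 20-character password like this has far more entropy than anything you'd choose by hand.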

2013 In Review

This is the first time I've written a review of the previous year, possibly because I don't write often, as you can see from the lack of blog posts. I decided to do it so I can evaluate last year and later see whether I've achieved the goals I set out to, so here is 2013 in review.


For the first time in my career I found myself without a mentor: someone who wasn't just senior by rank but also by knowledge. This came about because the development manager left and wasn't replaced; instead, two specialised lead developers were chosen to fill his role. Unfortunately this didn't work out perfectly, as certain areas he had managed were not picked up and became unmanaged, with no one wanting to take ownership of them.

Although I discuss ideas and problems with the lead developers, as well as the other developers, I don't often learn something new from them the way I did from the development manager. This is disappointing: when you have someone you learn from frequently, your development accelerates, so this was hard for me. Instead I've had to search for knowledge elsewhere, be it blogs, articles or even podcasts.

Side projects

I started a few side projects in 2013: a 2D top-down RPG, a mobile application to track 1001 beers to try before you die, a Macaw-esque tool, and finally a collaboration with some of the developers at work to create an idea-crowdsourcing platform.

I'm disappointed with how little I achieved on side projects this year; although I gained valuable knowledge developing them, I would have liked to release something. I did manage to complete the landing page of the idea-crowdsourcing platform, but that's hardly an achievement.

Development Environment

Towards the end of the year I realised that not all the developers had the same development environment, and because of this we occasionally hit the "it works on my machine" problem. We normally follow a guide to set up our development environment with MacPorts, but it's long, tedious and error-prone, so some developers opted for MAMP instead. I wanted an easier solution, so I started looking for one, found Vagrant, and decided to start playing around with it.

After writing a thousand-plus-line bash script to provision my virtual machine, I realised this wasn't portable, as it required the developers to have intricate knowledge of all the commands; what I needed was a domain-specific language. After some investigation and a discussion with the IT team, we settled on Puppet because of its support and maturity.

Again, I invested many hours learning Puppet and writing manifests and classes to build our development environment, and it's finally in a good enough state to deploy across the development team. A few developers are using it, although I'm unsure whether I'll keep investing time in it while the question of who provides our hosting remains undecided.


Goals for 2014

Side Projects

I want to see at least one side project launched this year, and a landing page does not count. I know it's not hard, but I'm a perfectionist and want to ensure everything is perfect; hopefully my reading will help me get over this.

Development Environment

I want to further improve our development environment by using Packer to build a pre-configured base image and to ensure all our base boxes are the same. I'm halfway through this, with support for building Vagrant boxes done; I just need to add Amazon support and we should be set.

I also want to investigate Docker, as there is some work being done to make it run nicely on OS X, so we'll see how that goes; if it proves to be a viable alternative to provisioning an entire VM, it'll mean less overhead.


Reading

One of my personal goals for this year is to read twelve books. I know that's not a lot to some people, but keep in mind I read no books last year; I sourced all my information online. I'll be focusing on non-fiction, although I may fit in some fiction along the way. The first book I'm reading is Rework by 37signals, so maybe I'll review that too.

Generating schema graphs with SQLAlchemy and MacPorts

For a while now I've been looking for a tool I could point at a database to generate a schema graph. I usually design schemas in Navicat Data Modeler, which has this functionality, but it costs a premium I couldn't justify. So I went looking for an open source alternative; unfortunately, most of the dependencies these open source tools required either weren't available in the MacPorts repository or were difficult to install.

Then someone recommended SQLAlchemy with sqlalchemy_schemadisplay, which I'd heard of before during my brief encounter with the Python world and knew was a solid choice. I was sold, so these are the steps I took to get it installed on my setup.

Please note this guide assumes you're using MacPorts on OS X with Python 2.7 and pip.


sqlalchemy_schemadisplay depends on Graphviz, which is available in MacPorts, but for some reason installing it requires you to deactivate the "nawk" port first; why MacPorts can't do this automatically and then re-activate it is beyond me, perhaps it's a bug. If you don't have nawk installed you can simply run sudo port install graphviz.

sudo port install nawk
sudo port deactivate nawk
sudo port install graphviz
sudo port activate nawk

Database support

Depending on which database you want to connect to, you'll need to install the required driver. I use MySQL at my place of employment but PostgreSQL personally, so I needed both.


To enable MySQL support you need mysql_config in your $PATH. In my case MacPorts installs it as mysql_config5, presumably to prevent conflicts. An alias might work, but I decided to create a symbolic link; if anyone knows of a better solution, let me know.

sudo ln -s /opt/local/bin/mysql_config5 /opt/local/bin/mysql_config
sudo pip install mysql-python


To enable PostgreSQL support you just need to install the psycopg2 module; no issues here.

sudo pip install psycopg2


Another dependency is pyparsing; however, by default pip installs the latest version, which is 2.x and doesn't seem to work with Python 2.7, so downgrade it to 1.5.7.

sudo pip uninstall pyparsing
sudo pip install pyparsing==1.5.7


sqlalchemy_schemadisplay depends directly on pydot, and pydot in turn depends on pyparsing, so we need both.

sudo pip install pydot


Finally, install the tools that tie all these components together.

sudo pip install sqlalchemy
sudo pip install sqlalchemy_schemadisplay


Now, to generate an image of your schema, you need to provide a host, engine, database name, username and password. I've created a sample script below; populate your details and run “python /path/to/file.py”.

from sqlalchemy import MetaData
from sqlalchemy_schemadisplay import create_schema_graph

# Database
host     = 'localhost'
engine   = 'postgresql'
database = 'database'
username = 'username'
password = 'password'

# General
data_types = False
indexes    = False

# Generation
dsn = engine + '://' + username + ':' + password + '@' + host + '/' + database

graph = create_schema_graph(
    metadata       = MetaData(dsn),
    show_datatypes = data_types,
    show_indexes   = indexes
)

graph.write_png('schema.png')


That's it, you've now got an image of your database schema with relationships and, depending on your configuration, indexes or data types. Hopefully I didn't miss any dependencies; installing this involved a lot of trial and error. If you do spot an error, contact me on Twitter and I'll happily fix it with attribution.
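One caveat with the string-concatenated DSN in the script: if the password contains characters like @ or :, the URL breaks. A small sketch of a safer builder (the helper name is mine, not part of SQLAlchemy; I'm using Python 3's urllib.parse here, the Python 2.7 equivalent is urllib.quote_plus):

```python
from urllib.parse import quote_plus

def build_dsn(engine: str, username: str, password: str,
              host: str, database: str) -> str:
    # Percent-encode the credentials so '@' or ':' inside them
    # can't be mistaken for URL delimiters
    return '{}://{}:{}@{}/{}'.format(
        engine, quote_plus(username), quote_plus(password), host, database)

print(build_dsn('postgresql', 'user', 'p@ss:word', 'localhost', 'mydb'))
# -> postgresql://user:p%40ss%3Aword@localhost/mydb
```

The resulting string can be dropped straight into MetaData in place of the concatenated dsn variable.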