Monthly Archives: December 2010

Four Principles of Effective Software Teams

Back in May, I was at a talk presented by Kiril Savino of Gamechanger. Mr. Savino discussed the product they are building, the stack they use to build it, and the process they use in building it. He noted that there are several processes out there, but that all effective software processes include the following four principles:

  1. Iteration
  2. Communication
  3. Backlog
  4. Automation

Having now had sufficient time to digest these points, here is my take on them:

Iteration
The principle of iteration is not unique to software. Most of us were taught in grade school to write a rough draft and improve it rather than attempt an immaculate first draft. The first draft of something will always have problems, no matter how well you plan beforehand. To make matters worse, until you see the product, you will not know in what ways it is important to improve the work. Every first draft has some good things in it too, and it is hard to see what works well and what does not until you have something tangible in front of you. Working in quick iterations helps to maximize the good while minimizing the bad over the course of a software project.

Communication
Communication comes in two flavors: written and verbal. The obvious benefit of written communication is that no one has a perfect memory: putting down what two team members agreed to leaves a record and decreases the chance of misunderstanding between them. Less obvious is that the act of writing the communication down forces the writer to be more precise and to think through what was said in more detail. When I write something, whether it is an email, a design document or a blog post, I find inconsistencies in the way I think about a problem that I don’t see until I’m staring at them on a computer screen, and writing them down gives me an opportunity to correct my misunderstandings.

Verbal communication’s strength is that it is interactive. Whereas a written document can potentially cover the “wrong” set of details, in a conversation one person can tell the other what he does and does not understand. No document, except for the software itself, contains every detail about how a piece of software works, and team members need to talk when they have different views about a requirement or design constraint.

Backlog
Giving the developers on a team a backlog helps them in many ways. From a technical standpoint, knowing what features are coming down the pipeline helps the team plan what technical infrastructure it will need and what it needs to do to train itself. A backlog also helps morale: without one describing the direction a product will take before its release, it is never clear whether the team is approaching the goal or not. Finally, a backlog means future requirements come from a well-understood set rather than being written the day a sprint starts.

Automation
Automation can fundamentally change how a team collaborates. For example, Mr. Savino mentioned in his talk doing automated deployments, which is becoming a common practice. Automated deployments shorten the feedback loop between developers and those giving feedback (for example, the product owner, QA, customers, or the CEO), which makes correcting the software less expensive. They also make deployments easier, since a developer or system administrator doesn’t have to manually work through an error-prone checklist that might be out of date.
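To make the checklist point concrete, here is a minimal sketch (in Python, with hypothetical step names not taken from the talk) of what an automated deployment looks like at its simplest: the steps live in code rather than in someone’s memory, run in a fixed order, and stop at the first failure.

```python
# Minimal sketch of an automated deployment pipeline. Each step is a
# plain function; run_deploy executes them in order and stops at the
# first failure instead of relying on a hand-followed checklist.
# All step names here are illustrative.

def run_deploy(steps):
    """Run (name, step) pairs in order; return a log of (name, ok) results."""
    log = []
    for name, step in steps:
        ok = step()
        log.append((name, ok))
        if not ok:
            break  # fail fast rather than continuing a broken deploy
    return log

# Illustrative steps; a real pipeline would run tests, build an
# artifact, push it to servers, and verify health checks.
steps = [
    ("run tests", lambda: True),
    ("build artifact", lambda: True),
    ("push to staging", lambda: True),
]

print(run_deploy(steps))
```

Because the checklist is now code, it cannot silently drift out of date the way a wiki page can: if a step breaks, the deploy stops and says which step failed.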

Automated tests are another example of automation that changes team collaboration. If a developer writes a piece of code and someone has problems with it, the developer has to interrupt the flow of what he is working on, take a look at it, understand the problem, and figure out whether the problem really is in his code or in how the code is being used. If, however, that same developer writes a piece of code and the tests to go along with it, then when a bug occurs the team can much more easily narrow down its cause.
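As a sketch of what “code shipped with its tests” buys a team, consider a hypothetical `parse_price` function (Python here for brevity; the names are mine, not from the talk). When a bug report arrives, anyone can rerun the tests and immediately see whether the fault is in the function or in the calling code:

```python
# A hypothetical function shipped together with its own tests.

def parse_price(text):
    """Parse a string like '$1,234.50' into a float number of dollars."""
    return float(text.strip().lstrip("$").replace(",", ""))

# Tests that travel with the code. If a teammate hits a problem, running
# these narrows down whether the bug is here or in how the code is used.
def test_parse_price():
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("99") == 99.0
    assert parse_price(" $0.25 ") == 0.25

test_parse_price()
```

If the tests pass but the caller still sees bad behavior, the problem is almost certainly in the usage, and the original developer never has to be interrupted.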

Organize 5000 Emails in Two Steps

This morning, I had over 3000 emails in my personal Gmail inbox. Some were read, some weren’t. Today, I got up the courage to do the previously unthinkable –

  1. I selected all
  2. I hit the “archive” button.

It was such a good feeling that I did the same thing with the approximately 2000 messages in my work email. If you have several thousand things in your inbox that you will never touch again, I recommend you do the same. It is very freeing.

The inspiration behind this bold move towards being more organized is a book I just finished reading, “Pragmatic Thinking & Learning” (PT&L). One of the gems in there was something from David Allen’s Getting Things Done:

  • Scan the input queue only once
  • Process each pile of work in order
  • Don’t keep lists in your head

Like many engineers, I was already on top of the third one – my desk at work has sticky notes attached to various surfaces, we use Jira for bug tracking, and I have to-do items jotted down in notepads. The act of cleaning my inbox put me in a position to do the first two – it is really hard to classify things when you are looking at thousands of them. When you only have to look at 10 or 15, categorizing them by what you need to do now, what you need to ask someone else about, what you just need to be aware of, and what you can ignore becomes possible.

Since I was conscious of what I was doing with each email I went through, I had another breakthrough – a lot of the email I get is “FYI”-type material. By adopting a practice Hunt mentions – creating a private wiki for use as an “exocortex” – I suddenly had a place to put the links sent to me by friends, colleagues, and the newsletters I subscribe to. I no longer felt like it was a big deal to archive something, since the links I care about are now on a wiki page that only I can access, categorized and labeled, and thus much easier to find than they would be if they were still in my inbox.

I highly recommend this book – it has many “Aha!” moments, it is a “technical” book you can actually discuss with non-technical people, and it makes the reader see learning, thinking and working in a whole different way.

Measuring JavaScript Code Quality

I do most of my programming in Java. Java benefits from tools like PMD, Checkstyle, and several test code coverage tools (Cobertura, Clover, Emma). Each of these tools is helpful individually, but aggregating their metrics with Sonar gives an additional level of transparency to a team. Sonar becomes even more powerful when you deploy the Technical Debt plugin, which gives an approximation of your technical debt. Sonar is a frequent topic of discussion on the Agile Executive blog.

For all of Sonar’s great qualities, it has one major drawback – it currently has no support for JavaScript. Almost every web application today, regardless of how the back-end is built, uses JavaScript. I have started an open source project called JS-Quality whose goal is to provide the needed Sonar support for JavaScript. To get to a 1.0 release, we will have three major phases of development:

  1. Write code to generate all the metrics needed for Sonar (or at least the important ones). Some metrics may not translate well, but we want to make sure to get the big ones – code complexity, rule compliance, and unit test code coverage.
  2. Provide non-Sonar reports for projects through a Maven plugin.
  3. Full integration with Sonar.
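Of the metrics in phase 1, code complexity is the most mechanical to define: cyclomatic complexity is one plus the number of decision points in the code. As a rough illustration (sketched in Python; a real implementation for JS-Quality would parse the JavaScript into an AST rather than scan tokens, since keyword scanning miscounts strings and comments):

```python
import re

# Crude approximation of cyclomatic complexity: one plus the number of
# branch keywords/operators in the source. Illustrative only - a real
# tool would walk an AST instead of pattern-matching raw text.
BRANCH_TOKENS = [r"\bif\b", r"\bfor\b", r"\bwhile\b", r"\bcase\b",
                 r"\bcatch\b", r"&&", r"\|\|", r"\?"]

def approx_complexity(source):
    """Return 1 + the count of branch tokens found in `source`."""
    return 1 + sum(len(re.findall(tok, source)) for tok in BRANCH_TOKENS)

js = """
function clamp(x, lo, hi) {
  if (x < lo) { return lo; }
  return x > hi ? hi : x;
}
"""
print(approx_complexity(js))  # one `if` plus one `?` -> 3
```

Even this crude count captures the intuition behind the metric: every extra branch is another path a test suite has to cover.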

By the time we have a 1.0 product, we will have an imperfect but very useful tool for identifying problems with JavaScript code and for computing the full technical debt of a project. The project is up on GitHub, and any contributions are welcome.