
Monday, August 17, 2009

The developer's blame hierarchy

Having worked on a reasonable number of software projects, I have come to realize that a sizable chunk of productivity loss is due to the inappropriate placement of blame. In a team setting, confrontations are natural, and so is the resulting defensive mentality, even when everyone is working towards the same common goal.

As a result, I developed a blame hierarchy that I normally lay down for my team before the start of a project, asking them to work through it before escalating an issue further. There are some exceptions where the hierarchy fails, but it works more often than not.

According to the hierarchy, when a developer finds an issue, blame should be placed on a given step only after all the previous steps have been cleared.
  1. Your module
  2. Your automated tests (especially if you didn't update the tests when the logic changed; see the sketch below)
  3. Your team mate's module
  4. Stable custom framework
  5. 3rd party framework
  6. 3rd party libraries
  7. Network connectivity/Firewall rules
  8. Database
  9. JVM / Virtual Machine
  10. Operating System
  11. Infrastructure / Environment
Failure to follow this hierarchy, mostly because of a "my code cannot be wrong" mentality, results in a significant loss of productivity, not to mention irritation, annoyance, and outright anger.

Also note that the hierarchy should be followed only after due diligence has been performed, such as checking logs and looking for error messages.
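
To make point 2 in the hierarchy concrete, here is a minimal sketch of a stale test, assuming a hypothetical Java/JUnit 4 example (the class names and numbers are invented for illustration): the discount rule changes from 10% to 15%, the module is updated, but the test's expected value is not, so perfectly correct code fails against an outdated expectation.

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  // Hypothetical module: the business rule recently changed from a 10% to a 15% discount.
  class PriceCalculator {
      static double discountedPrice(double price) {
          return price * 0.85; // current, correct logic
      }
  }

  public class PriceCalculatorTest {
      @Test
      public void appliesDiscount() {
          // Stale expectation left over from the old 10% rule: this test fails,
          // but step 1 (the module) is fine; step 2 (the test) is what needs the fix.
          assertEquals(90.0, PriceCalculator.discountedPrice(100.0), 0.001);
      }
  }

Only after the test's expectation has been brought in line with the new rule does a remaining failure justify moving further down the hierarchy.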

5 comments:

Anonymous said...

Hi Sathya - I know you from HFA and BE days.

In reading through your hierarchy, I am surprised that you did not list "Automated Developer Tests" in your blame hierarchy. I would list it as number 2 - after my module and before my colleague's module.

But of course everyone knows the real blame always falls on the project's M.D. ;)

Unknown said...

I guess I could try to squirm out by saying that it's covered under "your module", but you are right. If there is a good set of test cases, it is important to ensure that the tests are updated along with the logic; otherwise you might be breaking your head over correct code that fails against an outdated test.

I have updated the hierarchy. Thanks for the feedback.

Vasu Ramanujam said...

Very interesting perspective!

But I find this trivial compared to the political blame game that happens between:

1. Dev Lead vs. BA (requirements not clear, not on time, not prioritized, etc.)
2. Test vs. Dev (forget about automation, even simple functionality testing of an app can lead to lots of confusion: dev deployed it to the test environment without doing dev tests, Test not aware of new functionality that was added ad hoc, etc.)
3. Lead BA vs. Project Manager (not on schedule, wrong estimates, interlock apps not considered, etc.)
4. PMO vs. Delivery Manager (no metrics, no processes in place, schedule slippage, no compliance with CMM, audits, etc.)
5. IT org vs. IT org (generally happens at the Director level... why should we own this application / not our problem; let the OTHER team do the dirty job, etc.)
6. Business Unit vs. IT (happens at the VP level... delayed delivery, IT does not understand business priorities, no ROI, no roadmap planning)
7. Business Unit vs. Business Unit (well, sometimes, due to wrong structuring of the organization, different business units vie against each other and IT becomes the scapegoat! A very dreadful scenario)

I believe these factors have much more influence on the success (or failure) of a project and are embedded in the DNA of a company...

Good collaboration across this chain of command will determine the success of an organization.

Unknown said...

@Vasu: I agree that there are more complex political dynamics at play in a project.

This hierarchy is aimed more at the development team in a project and the typical reactions I've seen from them over time (including ones I made myself when I was a developer!).

Often, a lot of time and resources have been wasted during the development/testing cycle because of a wrong diagnosis, typically stemming from the developer's inherent ego and the conviction that his/her product is perfect.

Examples are

"The program is slow. (It's not because I may have a stupid loop but because) The database must be messed up. We need to recreate indexes."

"It works fine on my machine. (I don't think it's a problem due to the way I implemented multiple threads in my code, but) I think the other environment is messed up.

and so on.
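
As a minimal sketch of that second case (hypothetical Java; the class name and counts are invented for illustration), an unsynchronized counter can happen to come out right on a lightly loaded developer machine and lose updates elsewhere, so the environment ends up taking the blame instead of the missing synchronization:

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;

  public class CounterRace {
      // Shared mutable state with no synchronization: the real culprit.
      static int count = 0;

      public static void main(String[] args) throws InterruptedException {
          ExecutorService pool = Executors.newFixedThreadPool(8);
          for (int i = 0; i < 10_000; i++) {
              pool.execute(() -> count++); // unsynchronized read-modify-write
          }
          pool.shutdown();
          pool.awaitTermination(1, TimeUnit.MINUTES);
          // Expected 10000; the printed value varies with machine and load,
          // which is exactly why "it works on my machine" proves very little.
          System.out.println("count = " + count);
      }
  }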

Vasu said...

From my experience, it's always the data that is wrong :-)