Saturday, July 25, 2009

If only we learned from our past - Vishnu and Darwin

There is a famous quote by the Spanish philosopher George Santayana - "Those who do not learn from history are doomed to repeat it" - very true indeed. We humans seem to have a penchant for not learning from our past experiences, and more importantly from those of others - be it in our daily lives or in our civilization. In most cases, we tend to ignore some very interesting thoughts penned by our ancestors, discounting them as ancient, irrelevant, or useless.

A prime example is a concept ingrained in the Hindu religion, called the 'Dasavataram of Vishnu' or the '10 avatars of Vishnu'. Vishnu, in Hindu philosophy, is the god of protection and one of the three primary deities (trimurthis) - Brahma (the Creator), Vishnu (the Protector), and Siva (the Destroyer).

The thought is that Vishnu, being responsible for protecting beings on Earth, takes an appropriate form (avatar) once in a while (usually once every 'yuga', or epoch) to provide guidance by example. The 10 avatars of Vishnu have been documented heavily in Hindu scriptures - I don't know how far back the idea dates, but I have at least seen them in sculptures dating to 1000 AD and possibly earlier.

What is most interesting to note is that the 10 avatars of Vishnu coincide with Darwin's concept of evolution - ages before Darwin came up with the theory. I was first made aware of the connection when my brother wrote an article on it a few years back. Here are the 10 avatars, their symbolism, and my interpretation of that symbolism.

Source: http://commons.wikimedia.org/wiki/File:Dasavatar,_19th_century.jpg

1. Matsya - Fish - Aquatic (the first species to evolve on Earth)
2. Kurma - Tortoise - Amphibian (the movement from ocean to land)
3. Varaha - Boar - Land creature (a properly evolved land mammal)
4. Narasimha - Half man, half lion - A combination of man and beast (symbolises the transition from animal to human)
5. Vamana - Child - Child (the first signs of intelligence and thought)
6. Parasurama - Plough-wielding man - Farmer (the beginning of agriculture and the use of iron)
7. Rama - King (with bow and arrow) - Fully evolved human (use of the bow and arrow, evolution of governance)
8. Krishna - King (and peacemaker) - War and peace (evolution of politics)
9. Buddha - Saint - Peace (establishment of a peaceful system; emphasis on the current life instead of moksha)
10. Kalki - Man on horse - ??? (expected to take form by the end of the current epoch, in some sense signaling the apocalypse and guiding humans beyond it)

As with any interpretation, you might say that I am fitting a theory to the facts rather than the other way around, but I feel the coincidence is too strong to dismiss as mere mythology. I wonder how many other such concepts are hidden in earlier scriptures that we have discounted as mythology or paganism.

Also of note is that, of the 10 avatars, only three are worshipped in their own right (Rama, Krishna, and Buddha - all from the most recent epochs), two more in some regions of India (Narasimha and Varaha), while the remaining five are of largely historic interest.

A head nod to Indexed

A few months back, I was looking at books on visual thinking, inspired by Garr Reynolds' Presentation Zen and Dan Roam's Back of the Napkin, and came across Jessica Hagy's Indexed.

At that time, while I found the book to be fairly intriguing, I didn't find it directly applicable to what I was looking for, as it did not have any instructions on how to be a visual thinker. Nevertheless, I put her blog in my Google Reader.

Now, with a good background in visual thinking thanks to the books mentioned above, I look back and see the subtlety and depth in Hagy's diagrams. I am amazed by her creativity and the seemingly effortless way she portrays complex concepts in a simple x-y chart or a Venn diagram.

I thought I might as well make an attempt at creating such an index myself, and here are a couple that I came up with.



I have been having interesting discussions with my Dad about religion and spirituality. He, being devoutly religious, has his strong views, and I have mine, trying to find a balance. While I won't call my diagram accurate, as the topic tends to be subjective in nature, I can see how creating a simple diagram makes thoughts a lot clearer and much easier to understand. No wonder they say a picture is worth a thousand words! It truly is.

Thursday, July 16, 2009

Performance Tuning in Java

Recently, a friend of mine asked me how he could improve the performance of an open-source Java application he was working on. I was reminded of the past (circa 2003), when Java was being bashed for poor performance and a number of books were written on improving the performance of almost every part of Java.

While it is true that an application running inside a virtual machine will naturally be less performant than a native application, the question one should ask is whether the performance they are getting is good enough for them, as opposed to whether it is the best possible. When it comes to performance, in most cases, 'works for the situation' is more useful than being the fastest horse on the racetrack.

Coming back to the issue at hand, I think there are some basic steps one can take to drastically improve the performance of a Java application with relatively minimal effort. In this case, the application was a data extraction tool that was writing thousands of small (1 - 5KB) XML files. Based on this, I made the following recommendations.

JVM Garbage Collection parameters
If the XML files are small, it is reasonable to assume that the application may be using DOM instead of SAX. As DOM is an in-memory XML model, it creates a lot of objects that live for only a short period of time.
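To make that assumption concrete, here is a minimal sketch (I never saw the tool's actual code) of what writing one such small file through DOM might look like - note how every element and text node becomes a short-lived heap object:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class SmallXmlWriter {
        // Hypothetical example - not the actual extraction tool.
        public static void main(String[] args) throws Exception {
            // Build a tiny in-memory DOM tree; each node is a short-lived object
            // that the garbage collector has to reclaim once the file is written.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("record");
            root.appendChild(doc.createElement("field"))
                .appendChild(doc.createTextNode("value"));
            doc.appendChild(root);

            // Serialize the tree to a small XML file on disk.
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(new File("record.xml")));
        }
    }

Multiply that by hundreds of thousands of files, and garbage collection and heap sizing start to matter.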

By default, a JVM takes two heap parameters, -Xms and -Xmx, which set the minimum (start-up) amount of memory the JVM will use and the maximum amount it can possibly use (beyond which an OutOfMemoryError is thrown). The default Xms is around 40MB to 64MB on Windows systems and the default Xmx is around 128MB - 256MB. The JVM starts with the Xms amount and, whenever it hits the current limit, keeps growing the heap in steps until it reaches the Xmx value. The problem is that this incremental growth comes at a performance cost, especially if you start really low compared to the needs of the application. If you have a reasonable idea of your application's minimum memory requirement, or if your system has enough memory to spare, it's a good idea to boost this to a much bigger number.

In this case, the numbers were boosted to 512MB for Xms and 1024MB for Xmx. Additionally, it is a good idea to increase the -XX:MaxPermSize value from the default 32MB to a more respectable 128MB or even 256MB (which was the value used in this case).
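If the tool is launched from the command line, the settings look something like this (the jar name is just a placeholder):

    java -Xms512m -Xmx1024m -XX:MaxPermSize=256m -jar your-application.jar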

Once these changes were made, the application's run time dropped from 1 hour 15 minutes to 28 minutes - and all without changing a single line of code!

Choose your OS wisely
The second improvement came not from within Java, but from outside it. This particular application was creating around 300,000 files under a single folder. Windows typically does not handle huge numbers of files gracefully within a single folder; the practical limit seems to be somewhere around 2,000 files. UNIX based systems, on the other hand, have no such issue.
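Had staying on Windows been a requirement, one common workaround (not what was done here) would be to shard the output across subfolders so that no single directory holds more than a couple of thousand files. A rough sketch, assuming the tool controls its own output paths:

    import java.io.File;

    public class ShardedOutput {
        // Hypothetical sketch - not part of the original extraction tool.
        // Spread output files across numbered subfolders so that no single
        // directory ends up holding hundreds of thousands of entries.
        static File shardedFile(File baseDir, String fileName) {
            int bucket = Math.abs(fileName.hashCode() % 1000); // ~300 files per bucket for 300,000 files
            File dir = new File(baseDir, String.format("%03d", bucket));
            dir.mkdirs();
            return new File(dir, fileName);
        }

        public static void main(String[] args) {
            File out = shardedFile(new File("output"), "record-123456.xml");
            System.out.println("Would write to: " + out.getPath());
        }
    }

In this case, though, there was an even simpler option.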

The application, thanks to Java's portability, was moved to a UNIX based system. The running time went down from 28 minutes to 13 minutes - again without changing a single line of code!

There are a number of other tweaks that can be made, most without changing the actual application itself - hopefully a topic for another post down the road...

So, the bottom line is: don't blame the language or library without spending some time fine-tuning the performance - more importantly, sometimes it takes just a few minutes of effort to make a big difference.

Friday, July 03, 2009

Using Critical Chain for software projects

Earlier, I posted about how the theory of constraints (ToC) can be used for effective offshore communication. Those familiar with ToC might wonder why not apply the Critical Chain methodology - a project-management offshoot of ToC - to the entire software project itself. I've seen this implemented fairly rigorously for software projects at some companies and think it's a great idea, but with a few caveats.

Critical Chain Methodology
First, a quick primer on CCM. The project methodology most of us are familiar with, thanks to Microsoft Project, is called critical path, where the time needed to complete a project is determined by the longest sequence of dependent tasks. The focus in this methodology is on tasks and their schedule. Any unknowns are typically factored into each task by means of a slight buffer. For example, if a task would take 2 days to complete, it is typically scheduled as 3 days, adding 1 day for potential delays or distractions.

Critical chain methodology, on the other hand, focuses on the resources involved in the project and on the longest chain of resource constraints needed to complete the project. The concept is quite good and has been reported to help projects finish 10% - 20% ahead of the estimated date.

The primary difference between the two is that with critical chain you pool the buffer time for the tasks rather than include it in each task itself. So, essentially, the schedule is created based on the 'pure' time needed to complete a task (called 'focus' time) and not on the 'buffered' time. All the buffers are then pooled into a 'project' buffer (at the end of the project) or a 'feeding' buffer (at the end of each chain of tasks). Thus, you don't say you'll complete the project on a certain date, but rather within a range, where the far end of the range equals the date you'd calculate using critical path.
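As a toy illustration (the task durations here are invented), the difference between the two calculations looks like this:

    public class BufferPoolingDemo {
        public static void main(String[] args) {
            // Three sequential tasks: 'focus' estimates and per-task safety buffers, in days.
            int[] focus  = {2, 4, 3};
            int[] buffer = {1, 2, 1};

            int criticalPath  = 0; // critical path: each task carries its own buffer
            int focusTotal    = 0; // critical chain: schedule on focus time only...
            int projectBuffer = 0; // ...and pool the buffers at the end of the project

            for (int i = 0; i < focus.length; i++) {
                criticalPath  += focus[i] + buffer[i];
                focusTotal    += focus[i];
                projectBuffer += buffer[i];
            }

            System.out.println("Critical path estimate : " + criticalPath + " days");
            System.out.println("Critical chain estimate: " + focusTotal + " days of focus time + a "
                    + projectBuffer + "-day project buffer, i.e. a range of "
                    + focusTotal + " to " + (focusTotal + projectBuffer) + " days");
        }
    }

The worst case is the same 13 days either way; the difference is that critical chain commits to the range and then tracks how much of the shared buffer gets consumed as the project runs.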

The catch - Consultants and Offshore teams
This all sounds good, so what's the catch? If your project is completely onshore and does not use any consultants, then there is almost no catch. The concept works very well once people are trained to understand the methodology - and that training is crucial to the success of the project. The methodology itself has its roots in manufacturing, where these assumptions mostly hold true.

The problem comes when you introduce either consultants or offshore teams, which is usually the case with software projects. As I mentioned earlier, CCM is a resource-based methodology and hence places a strong emphasis on resources and their linkages. When offshore teams are in the mix, the time difference becomes a problem: since the project plan is built on time and on dependencies between resources, it is difficult to capture those dependencies while also accounting for the time difference. It is not impossible - it just needs a bit more planning than usual. This is true to some extent even for critical path plans, but it becomes more apparent here because of the emphasis on resources.

The second issue arises when you include consultants. In CCM, resources constantly shift between tasks based on what is most important at a given point in time (as the critical chain keeps moving depending on the constraints on any given day). This is fine if all the resources are part of the same company.

However, let's say you have one consultant and one employee working on two dependent tasks. If the employee moves to another critical task for a day or two and sets aside the task the consultant depends on, what should the consultant do for those two days? Therein lies the issue. Ideally, you pay the consultant to sit idle because he is simply waiting on another resource, but that can be hard for the sponsor to digest, and the knee-jerk reaction might be to ask him to do something 'useful', which may in turn affect some other activity. This becomes more complex when multiple resources and projects are involved.

The other issue on the consulting side is invoicing for the project. Usually, project plans are created so that resources start low, ramp up, and finally ramp down close to launch, to minimize the overall cost of the project. This works as long as the project length is fixed and you know roughly when to on-board or roll off a resource. With critical chain, however, the project timeline is a range, not a fixed date. So you would have to build a range into your invoice, potentially with an 'early completion' bonus or something similar, because you won't know exactly when to on-board or roll off resources. This gets messy when some high-value resources are in demand on other projects that don't use critical chain.

So, the bottom line is that Critical Chain is a great methodology and one that works really well. However, when bringing it into a software project, three constraints - 1) consultants, 2) offshore teams, and 3) training (for both employees and consultants) - must be considered before implementing it. Otherwise, you'll have a lot of headaches.

Three Box Principle

As a software architect, my primary job is to define an architecture, framework, or a platform for my clients. The expectation is that the framework or platform will be generic, modular, and flexible as soon as it's created. It does not work that way.

For the sake of this article, I'll use the term 'framework' to cover frameworks, architectures, and platforms, and 'applications' for the implementations you build on top of them.

When I started my career as a software consultant, my first gig was at Bell Labs - the famed R&D wing of Lucent Technologies (now Alcatel-Lucent). As a young Java programmer steeped in GoF patterns and similar literature, I was ready to spit out architecture all around me! When I attempted one such 'framework' for the project I was working on, I soon realized that it was not as flexible or as modular as I had hoped it would be.

One day, while sitting at the library in Bell Labs, I came across a book (or article) that talked about the three box principle. The principle essentially is that it takes at least three attempts or revisions to make a framework generic.

I have attempted to reproduce the principle in the diagram below.

The principle has an important corollary - "You cannot build a framework without building sample applications that utilize or implement the system." Most IT teams tend to create an "architecture" team that designs the architecture around the overall goals of the company and then imposes it on the applications used within the company. Such an elitist approach is doomed to fail: it does not consider the ground realities of the applications, and the maintainers of those applications will eventually find ways to circumvent the architecture rather than use it, defeating the purpose of the architecture in the first place.

In contrast, the best way to build a framework is to work with the applications, incorporating them iteratively while developing the framework itself, which is what the principle suggests.

By the principle, in the first box or phase, you pick one or two candidate applications and build them. At this point, you are not worried about the framework. You are just developing the applications with a vision of the framework in the background, without consciously doing anything about it. These applications should then be deployed and run in the field for at least a few cycles (weeks or months).

In the second box/phase, you pick a few more applications and add them to the first set. At this point, some patterns will begin to emerge from the commonalities between the applications. You still don't consciously build the framework, but rather refactor the common libraries and features so that they are more modular. You might end up applying principles like inversion of control around this time. By now, you would also have seen feedback from real customers, noticed the pain points of the initial applications, and adjusted the features accordingly.

In the third phase, you add a few more applications to the system. At this stage, there will be enough common functions and features to modularize further, and a much stronger pattern will emerge that lets you separate out the framework and set the applications up on top of it.

I have personally seen both sides of this principle - the success stories when it was followed and the failures when it was not. So the next time someone asks you to build a framework, try to set the expectation that a framework is more like a pastry from a master chef than a frozen microwave dinner: it takes time and multiple tries to perfect the end product.

PS: I believe the book I read was Implementing Application Frameworks by Mohamed Fayad, although I couldn't find the reference in the book. Maybe it was an article by the same author - I am not sure. If anyone has read this piece, I would greatly appreciate it if you could drop a comment with the right source.