
Monday, November 07, 2011

Coupling and Cohesion in Software and Business

One of the important topics in Object Oriented Programming that is often overlooked, or not even considered, is Coupling and Cohesion. A sound understanding of these concepts can make a huge difference in how you package functionality within your program and how flexible it becomes for future enhancements.

When performing source code reviews, keep coupling and cohesion in mind: ensure that the classes within a package all perform the same or closely related activities (high cohesion) and that classes in different packages don't refer to each other too much, or at least do so only in a well-defined manner (low coupling).

Definitions


Coupling is how interdependent two functional components are within your program. High coupling is bad, because if you change one function, you might end up affecting all of its dependent functions as well.

Cohesion is how closely the parts within a function work together to make it a single, well-defined unit. High cohesion is good, since you can treat the whole function as a black box, thereby abstracting your system for better clarity.

Coupling Types

Listed from worst to best:
  • Content/Pathological coupling: one module uses or alters data inside another.
  • Control coupling: two modules communicate via a control flag (the first tells the second what to do via the flag).
  • Common/Global-data coupling: two modules communicate via global data.
  • Stamp/Data-structure coupling: modules communicate via a data structure passed as a parameter, and the structure holds more information than the recipient needs.
  • Data coupling: modules communicate via parameter passing, and the parameters passed are only those the recipient needs. No data coupling at all means the modules are independent.
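
As a rough Java sketch (the Order class and the method names are invented purely for illustration, not taken from any real codebase), the snippet below contrasts control coupling and stamp coupling with plain data coupling - the last method receives only the two values it actually needs:

    public class CouplingExamples {

        static class Order {
            double total;
            Order(double total) { this.total = total; }
        }

        // Control coupling: the caller steers this module's behavior with a flag.
        static double price(Order order, boolean applyDiscount) {
            return applyDiscount ? order.total * 0.9 : order.total;
        }

        // Stamp coupling: a whole Order is passed in, though only its total is used.
        static double discounted(Order order) {
            return order.total * 0.9;
        }

        // Data coupling: only the values this module actually needs are passed.
        static double discounted(double total, double discountRate) {
            return total * (1 - discountRate);
        }

        public static void main(String[] args) {
            Order order = new Order(100.0);
            System.out.println(price(order, true));      // 90.0
            System.out.println(discounted(order));       // 90.0
            System.out.println(discounted(100.0, 0.10)); // 90.0
        }
    }

All three variants compute the same result; the difference is only in how much the caller and the module need to know about each other.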

Cohesion Types

Listed from worst to best:
  • Coincidental cohesion: the module's elements are unrelated.
  • Logical cohesion: elements perform similar activities selected from outside the module, e.g. by a flag that selects which operation to perform; the body of the function is essentially one huge if-else or switch on the operation flag.
  • Temporal cohesion: operations are related only by the general time at which they are performed.
  • Procedural cohesion: elements are involved in different but sequential activities, each on different data (usually these could be trivially split into multiple modules along the linear sequence boundaries).
  • Communicational cohesion: otherwise unrelated operations that happen to need the same data or input.
  • Sequential cohesion: operations on the same data in a significant order; the output of one function is the input to the next (a pipeline).
  • Informational cohesion: the module performs a number of actions, each with its own entry point and independent code, all performed on the same data structure; essentially an implementation of an abstract data type.
  • Functional cohesion: all elements contribute to a single, well-defined task, i.e. a function that performs exactly one operation.
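
Again as a rough Java sketch (invented names, not from any real codebase), the first method below shows logical cohesion - one big switch on an operation flag - while the three small methods after it show functional cohesion, each doing exactly one thing:

    public class CohesionExamples {

        // Logical cohesion: unrelated work is bundled behind an operation flag,
        // so the body is essentially one big switch.
        static String handle(String operation, String payload) {
            switch (operation) {
                case "PRINT":    return "printing " + payload;
                case "SAVE":     return "saving " + payload;
                case "VALIDATE": return "validating " + payload;
                default:         throw new IllegalArgumentException(operation);
            }
        }

        // Functional cohesion: each module performs exactly one well-defined task.
        static String print(String payload)    { return "printing " + payload; }
        static String save(String payload)     { return "saving " + payload; }
        static String validate(String payload) { return "validating " + payload; }

        public static void main(String[] args) {
            System.out.println(handle("SAVE", "report.txt"));
            System.out.println(save("report.txt"));
        }
    }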

Saturday, August 15, 2009

Transformation complete - Windows on Ubuntu

As I had blogged a couple of months back, I installed Ubuntu 9.04 on my server and have been quite happy since. The system is very stable, I get all the applications I need for free, I don't have to worry about updating them (Ubuntu periodically shows updates for all applications installed through its package manager, much like Windows Update), and there is no dearth of functionality.
However, I still had my Windows installation on another partition - just in case - because, much as Ubuntu is cool, there are still a few applications that need Windows, and my wife hasn't warmed up completely to Ubuntu due to occasional hardware glitches (mouse stuttering for example).
Today I was generally browsing through the applications available via the "Add/Remove..." option in Ubuntu and came across a little piece of heaven called VirtualBox from Sun.
In the Windows world, the one application that I used fairly often was Microsoft Virtual PC. I believe it's one of the better products from M$ and more importantly, it was free. With Virtual PC, you can install any other OS to run 'virtually' on Windows, such as Ubuntu (although it had its issues).
VirtualBox is essentially Virtual PC for Ubuntu, only better. It installed in less than a minute. Then I created a simple virtual machine and a virtual hard disk using a fairly familiar user interface, pointed it to my Windows ISO image, and lo and behold, I had Windows XP running on my Ubuntu in less than 20 minutes. Ironically, XP felt like it installed faster on VirtualBox than on Virtual PC - and the start-up time is roughly 10 seconds!
With this, I think my transformation to Ubuntu is finally complete, with the last piece of the puzzle in place. Now I can have Ubuntu and Windows running harmoniously. Even better, with Ubuntu's multiple-desktop feature, I can simply run Windows XP full-screen on another desktop. With that, all I have to do is flick my mouse to go back and forth between the two OSes.
Now, that's what I call comfort!

Friday, July 03, 2009

Using Critical Chain for software projects

Earlier, I had posted about how the theory of constraints (ToC) can be used for effective offshore communication. Those familiar with ToC might wonder why not apply the Critical Chain methodology, a project-management offshoot of ToC, to the entire software project itself. I've seen this implemented fairly rigorously in some companies for software projects and think it's a great idea, but with a few caveats.

Critical Chain Methodology
First, a quick primer on CCM. The project methodology most of us are familiar with, thanks to Microsoft Project, is called critical path, where the time needed to complete a project is based on the longest sequence of dependent tasks. The focus in this methodology is on tasks and their schedule. Any unknowns are typically factored into each task by means of a slight buffer. For example, if a task would take 2 days to complete, it's typically estimated as 3 days, adding 1 day for potential delays or distractions.

Critical chain methodology, on the other hand, focuses on the resources involved in the project and on the longest chain of resource-constrained tasks needed to complete it. The concept is quite good and has been reported to help projects finish 10% - 20% ahead of the estimated date.

The primary difference between the two is that with critical chain you pool the buffer time for all tasks rather than include it within each task. So, essentially, the schedule is created based on the 'pure' time needed to complete a task (called 'focus' time) and not on the 'buffered' time. All the buffers are then pooled into a 'project' buffer (at the end of the project) or a 'feeding' buffer (at the end of each chain of tasks). Thus, you don't say you'll complete the project on a certain date, but rather within a range, where the far end of the range equals the date you'd calculate using critical path.
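
To make the buffer pooling concrete, here is a minimal sketch in Java with made-up numbers - five tasks, each with 2 days of focus time and 1 day of padding. It only illustrates the arithmetic, not any real scheduling tool:

    public class BufferPooling {
        public static void main(String[] args) {
            double[] focusDays  = {2, 2, 2, 2, 2}; // 'pure' focus time per task
            double[] bufferDays = {1, 1, 1, 1, 1}; // padding normally hidden inside each estimate

            double criticalPath = 0, focusTotal = 0, projectBuffer = 0;
            for (int i = 0; i < focusDays.length; i++) {
                criticalPath  += focusDays[i] + bufferDays[i]; // buffer baked into every task
                focusTotal    += focusDays[i];                 // schedule only the focus time
                projectBuffer += bufferDays[i];                // pool the padding at the end
            }

            System.out.printf("Critical path estimate : %.0f days%n", criticalPath);
            System.out.printf("Critical chain estimate: %.0f to %.0f days%n",
                    focusTotal, focusTotal + projectBuffer);
        }
    }

With the buffers pooled, the team commits to a 10 to 15 day window rather than a single 15-day date.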

The catch - Consultants and Offshore teams
This all sounds good. So, what's the catch? If your project is completely onshore and does not use any consultants, then there is almost no catch. The concept works very well once people are trained to understand the methodology - which is crucial to the success of the project. The methodology itself has its roots in manufacturing, where these assumptions are mostly true.

The problem comes when you introduce either consultants or offshore teams, which is usually the case with software projects. As I mentioned earlier, CCM is a resource-based methodology and hence has a strong emphasis on resources and their linkages. When offshore teams are included in the mix, the problem arises from the time difference. Since the project plan is time-based and built on dependencies between resources, it is difficult to capture those dependencies while also incorporating the time difference. This is not impossible, it just needs a bit more planning than usual. The same is true in some sense for critical path plans, but it becomes more apparent here because of the emphasis on resources.

The second issue is when you include consultants. In CCM, resources would constantly shift tasks based on what is more important at a given point in time (as the critical chain would keep moving depending on the constraints at any given day). This is fine if all the resources are part of the same company.

However, let's say you have one consultant and one employee working on two dependent tasks. If the employee decides to move to another critical task for a day or two and not worry about the task the consultant is dependent upon, what should the consultant do for those two days? Therein lies the issue. Ideally, you have to pay the consultant for sitting around because he's simply dependent on another resource, but that may be hard for the sponsor to digest, and the knee-jerk reaction might be to ask the consultant to do something 'useful', which may in turn affect some other activity. This becomes more complex when multiple resources and projects are involved.

The other issue on the consultant's side is creating an invoice for the project. Usually, project plans are created so that resources start low, ramp up, and finally ramp down close to launch, in order to minimize the overall cost of the project. This is fine as long as the project length is fixed and you know roughly when to on-board or roll off a resource. However, with critical chain, the project timeline is a range, not a fixed date. So you'd have to build a range into your invoice, potentially with an 'early completion' bonus or something similar, because you won't be sure exactly when to on-board or roll off resources. This becomes messy when some high-value resources are in demand on other projects that don't use Critical Chain.

So, the bottom line is that Critical Chain is a great methodology and one that works really well. However, when bringing it into a software project, the three constraints - 1) consultants, 2) offshore teams, and 3) training (for both employees and consultants) - must be considered before implementing it. Otherwise, you'll have a lot of headaches.

Tuesday, June 09, 2009

A week into using Ubuntu Desktop

As I had mentioned in my earlier blog, I switched to Ubuntu a few days back and thought I'd give a quick follow-up.
So far, the server seems to be working just fine. However, some 'not-so-great' user experiences are slowly showing up. Here's a quick list.
  1. Remote Desktop: Most times, I connect to my server from my laptop. With Windows, this was done using Terminal Services, which gave a fairly seamless experience. Ubuntu has a built-in remote desktop server that can be enabled just as easily as on Windows, but I find that the remote experience is far from optimal. I have tried three programs now - RealVNC, UltraVNC, and TightVNC - and all of them give a choppy experience, especially at full resolution. TightVNC is by far the best, and that's not saying much. Given that I am connecting within my local network, this is very surprising.
  2. Fonts: The desktop fonts are awesome, much crisper and easier to read than on Windows. However, the fonts within programs, especially Firefox, seem to be a little off. The anti-aliasing doesn't seem to be working properly.
  3. Mouse scrolling: Scrolling within Firefox also seems to be a bit choppy and not as smooth as on Windows.
  4. FTP Server: This was surprising to me. Being a UNIX system, I was expecting great FTP server support off the bat. True, Ubuntu comes with a number of FTP server options, but none of them were exactly user-friendly, especially for a newbie like me. Compared to this, the FileZilla server on Windows was a breeze to work with. I started with the basic FTPd, switched over to Pure FTP, then to Pro FTP, and finally landed on vsftpd. Pro FTP is recommended in many forums and even has a nice GUI, but for some reason it would not list directories once connected - I am assuming some permission issue. Anyway, I found vsftpd easier to configure and was finally up and running with it after 30 minutes of configuration.
  5. Mail Server: This was another surprise. I was hoping there'd be a cool mail server built right into the system, but there wasn't (at least not one a newbie could find). The most popular one seems to be Postfix. There's a great how-to on the net for Postfix (combined with 4 other packages for firewall, antivirus, and anti-spam). I tried to religiously follow the post, but gave up after the 3rd package. It was just a little too complicated to set up a mail server that was optional for me anyway. I wish there were a server with an easier-to-use GUI.
That said, I should also say that I am quite happy with the installation so far. Compared to my earlier attempts, where I was forced to uninstall Ubuntu (or Fedora) a couple of days later because some major function didn't work, I was able to get almost all my needs taken care of right away. I was even able to get my scanner (Brother MFC3360C) working with Ubuntu quickly. So, I think Ubuntu 9.04 is a great step in the right direction toward convincing users that UNIX can be user-friendly.

Friday, June 05, 2009

The great switcheroo - from Windows to Ubuntu

I've been meaning to do this for ages now, and finally made a successful switch from Windows Vista to Ubuntu Desktop 9.04 on my main server. The two main factors that contributed to the switch were my increasing frustration with Windows Vista (especially Windows Explorer) and the advancements made in Ubuntu over the last few years. So, here's a quick rundown of my experience.

Background

Before I start, a quick description of my server - it's a home-built Intel Quad Core system with 4 GB RAM and 1TB disk space. I host two public websites (http://www.cssathya.com and http://www.scmadbook.com), an FTP server, and a Subversion repository. Other than that, it has the standard set of applications.
I am in IT myself and know my way around (I built the system). I have a good knowledge of various flavors of Unix (enough to get my programming and deployment done and my logs monitored), but I never bothered to use it as a primary system. So, my experience can be considered that of a tech-savvy beginner to Ubuntu.

Installation
It's not exactly a committed switch because I decided to do a dual-boot, with Ubuntu on a completely different hard disk. To me, this is the least risky option, as you can set the boot order in the BIOS fairly easily and it keeps everything separate.
I set up three partitions: 80 GB for root, 40 GB for /home, and another 8 GB for swap space. I tried to create a FAT32 partition with the remaining space, but the partitioning tool in the installer had some issues, so I left it unallocated. After that, the installation went smoothly - around 30 minutes in all. Very impressive.
All my devices worked perfectly - sound, video, etc. - a great improvement over past versions. In fact, one good thing about Ubuntu is the LiveCD option, where you can pop in the CD and run Ubuntu directly from it, without making any changes to your system, to check whether all your drivers work.
The other great improvement is that the latest version recognizes all the Windows drives right away. No need to run commands to mount. This used to be a deal-breaker for me earlier. You can read and write to your Windows disks without any issues.

Applications

Ubuntu, like any other Linux system, has a software package manager (called Synaptic), and you can get a wide range of software with just a few clicks. In fact, it has so many packages that it'll probably take me another week to go through and pick the ones I need. But I'll be quick to point out that the most obvious software (browser, word processor, etc.) is available right away.
That is, with one exception - an MP3 player. Yes, Ubuntu does not come with an MP3 player out of the box, because the MP3 codec is proprietary. However, this is fairly easy to fix. I did a little bit of Googling and found that my favorite player on Windows - VLC - can be installed with a single command (apt-get install vlc).
I also did some more searching and installed a few more, such as VNC for remote desktop connections to my server, and Amarok, a supposedly better music player.

Server Software

Next is the most important part for me - migrating the servers. I have three applications powering the two sites - TikiWiki, Confluence, and MediaWiki - all running on Apache HTTP Server and Apache Tomcat.
Setting up Apache was a little tricky, mainly because Ubuntu splits the httpd.conf configuration into multiple files. However, with a little bit of Googling, I was able to set it up fairly quickly.
Setting up MySQL was easier. A few clicks on the Synaptic and I was done, including the GUI tools. After I restored the database for Tikiwiki and Mediawiki, both of them ran without any issues.
The one that took the most time was Confluence. This was mainly not because of Confluence itself but because of how Tomcat is packaged in Ubuntu. In Ubuntu, Tomcat is set to run under heightened security by default, which was preventing Confluence from doing most of its operations. After some more Googling, I was able to disable the heightened security (TOMCAT6_SECURITY = no in the policy file) and Confluence started up fine.
The final glitch I faced was with the mod_jk connector that connects the two servers. By default, Ubuntu ignores the httpd.conf file and uses the apache2.conf file along with the sites-enabled and mods-enabled directories. However, for mod_jk to work, I needed to place the directives in the httpd.conf file instead of the jk.conf or jk.load files. This will probably sound Greek if you are not familiar with these terms, but I wanted to mention it here since this step took me around 2 hours to figure out.
Finally, I was able to get all my servers up and running. The whole process took probably 8 - 10 hours to get everything working.

Final impressions

At this point, I have my servers back up and running and all the basic software I need. The only problem I have right now is that my printer, a Brother MFC, does not have a 64-bit Ubuntu driver (although a 32-bit one is available). I need to figure out how to get one, or at least a generic driver, to get my printer to work. This could potentially make me go back at least partially to Windows, which I really don't want to do at this point (alternatively, I guess I could get a supported printer - maybe after the ink runs out!).
Update: After a little more digging, I found an easy way to force 64-bit Ubuntu to use the 32-bit drivers - and the suggestions were from the Brother website. I tried it out, and it works perfectly. So, no real reason to switch back to Windows now!

So, at the end of the day, I think it's worth checking out Ubuntu, especially version 9.04, if you are curious. It's very stable and extremely friendly. There is tons of documentation available on the net, which is a great thing. I was able to get over most of my issues fairly quickly, thanks mainly to the community support.

PS: For those wondering about the title, you should read Roald Dahl's story of the same name (The Great Switcheroo) - a little adultish in nature, but a very interesting read.

Friday, May 29, 2009

Software (and life) lessons we can learn from Pixar

Watched Pixar's latest film "Up" yesterday. Over the last several years, Pixar has become the undisputed leader in making cartoon movies (or what we now more sophisticatedly call animated movies). Their claim to fame is that they haven't produced a single bad film so far, which is pretty amazing in the movie industry. When you look a little deeper into how they've managed to pull this off, some consistent ideals surface, which I think are very apt for the software industry, for presentations, and, to an extent, for our daily activities.

Pace yourself: Pixar releases a film only once every 1-2 years, a fairly long gap compared to other movie studios (though somewhat comparable to DreamWorks). Pixar seems to focus on long-term gains rather than short-term ones, something that almost no one does in any industry. Their conviction that a great product, even if released after a long period, can produce greater returns has paid off time and again.
In contrast, software giants seem to focus on short-term gains, releasing version after version at 6-month or sometimes even 3-month intervals, without realizing that people want stable products more than frequent releases.
I guess we can apply the same in life - no, not having babies every 2 years - but essentially to 'take time to smell the roses'. I remember when I was on a trek up Kilimanjaro, the guide constantly told us 'pole pole' ('e' pronounced as 'ey'), meaning 'slowly, slowly' - don't walk too fast or you'll exhaust yourself quickly. In most cases, we don't realize this until it's too late.
Having stable, timed releases has enabled Pixar to make more money with fewer releases and to increase its credibility with viewers.

Have a Story: Pixar doesn't produce cartoons - they tell stories. More than the animation wizardry, what makes a movie stick is the story it conveys. And in almost all cases, the story is very simple, is unexpected (or at least has opposing characters), and is emotional. Without the story, a movie falls flat (something most Indian filmmakers could learn from!). When you watch a Pixar movie, you are so engrossed in the story that you barely notice the animation - and I think that's essential to Pixar's success.
In software, this translates into business functionality. If the software is not functional, no amount of technical wizardry is going to help the application succeed.

Be detail-oriented, but don't show it: I heard that in Up, Pixar implemented a special algorithm that makes the 10,000 balloons that power the house move as they would in real life (bumping into each other, shifting, etc.). While the viewer may never even notice this nuance, having these details taken care of somehow completes the picture. The key here is that even though the effort that went into this was tremendous, they did not make the feature prominent, primarily because it's supposed to be in the background, as a prop to the story.
Similarly, in software, one needs to take care of all the nuances (error messages, logging, connection retries, etc.) so that they are essentially invisible to the user but are there doing their job.
Contextual user interfaces effectively achieve this - by giving you only the actions you need based on the context and not the entire set.

Don't succumb to too much technology: This is probably a negative lesson we can learn from Up. Pixar seems to have focused a little too much on the 3D rather than the story, keeping a lot of action sequences, which I feel diluted the film's message. It's always tempting to use the latest and greatest technology whether it's needed or not. In the end, users are going to be happy not because the application has AJAX, but because it meets all their needs in a friendly way.

I am sure if I dig deeper, there will be a few more prominent lessons, but these are the ones I consider to be the most important. In any case, this is a blog and not an essay - so I'll stop here!

Sunday, May 24, 2009

Plone ecosystem, terms, and definitions

Plone is one of the few fairly robust open source Web Content Management Systems out there, along with the likes of Drupal, Joomla, and Alfresco. While most WCMs are typically built on PHP or Java, Plone is based on Python. While this may be one of the reasons it is mentioned relatively less in the market (PHP is by far more dominant in web-based applications, even compared to Java, I'd say), its features are quite impressive and on par with the other systems.

Here's a quick concept map that can help you become familiar with the terms and definitions around the Plone ecosystem - and trust me, there are quite a few, because Plone has its own application server, database, and search engine! So far, its popularity seems to be mostly with educational institutions (although a few commercial case studies are mentioned on the Plone site).

Plone Ecosystem Terms and Definitions

I will be attending the Plone Symposium being held May 26 - 31, and hope to blog more once I get back.

Saturday, May 16, 2009

Hunt for the next tablet

I think the time is getting ripe for a consolidation, with rumors of an iPad from Apple. There are a lot of super-similar products on the market now, all built around the concept of the old Tablet PC.
  • Tablet PC - The age-old giant, which has still not caught on and, for some reason, hasn't come down in price either
  • Touch-sensitive phones - iPhone and Blackberry Storm are leading the pack with the way we interact with touch screens
  • Netbook - The recent entrant in the notebook market, which, for some reason I cannot imagine, does not have a foldable display OR a touch-sensitive screen
  • E-book Reader - An anomaly in evolution, Kindle and Sony are good for one thing - reading books, and that too in black and white.
It doesn't really take too much smartitude to see that all these products revolve around one uber-product - an A4-sized tablet with a multi-touch LED-based screen and nice, readable fonts.

The bigger question is: why are other companies waiting for Apple to innovate and take this market, instead of beating it to the punch themselves?

Thursday, May 14, 2009

Mapping your mind - or at least concepts

I was first introduced to the idea of mind maps when I saw the Java Concept Map. While it isn't a mind map in the strictest sense, the idea is the same, and it made me explore this new way of capturing information.
Since then, I have heard rants and raves about mind maps - how, on one hand, crazy-minded lunatics try to apply mind maps to everything, and how, on the other, procedure-oriented people write pages of documentation without a simpler visual representation. I think mind maps can be used and abused in much the same way as UML diagrams. While use case diagrams, class diagrams, and state charts can certainly be useful to represent the flow of a program nicely, over-use of the diagrams (such as the strictest implementations of Model-Driven Architecture, or using Rational Architect) almost always ends in disaster. I should know - I've lived through a few of them!
Back to the concept map, I love the use of typography there - big fonts for important concepts and decreasing font sizes for less important topics - kind of like tag clouds. Simple but effective.
Interestingly, I haven't seen any mind mapping software, including the most popular MindManager from MindJet, or the other free open source versions, such as XMind or FreeMind, provide this type of functionality out of the box. 
While these folks have no doubt done way more research in this area than I have, my feeling is that the following two features, which I have found missing so far, would be very useful (a tiny sketch of what I mean, in code, follows the list below).
  • Each topic should optionally have a definition that will be displayed in a smaller font right next to the topic. Having it as a tooltip is not good enough as it will not be visible in the diagram.
  • The connecting line should be 'describable'. In other words, I should be able to explain why two topics are related. This helps to read the relations in a meaningful sentence.
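
As a rough Java sketch of the data model I have in mind (all names are invented for illustration, not taken from any mind-mapping tool), a topic carries an optional inline definition and a link carries a description, so a relation can be read out as a sentence:

    public class ConceptMapSketch {

        // Feature 1: each topic carries an optional definition, meant to be
        // rendered in a smaller font right next to the topic (not a tooltip).
        static class Topic {
            final String name;
            final String definition;
            Topic(String name, String definition) { this.name = name; this.definition = definition; }
        }

        // Feature 2: the connecting line is 'describable', so the relation
        // between two topics reads as a meaningful sentence.
        static class Link {
            final Topic from;
            final String description;
            final Topic to;
            Link(Topic from, String description, Topic to) {
                this.from = from; this.description = description; this.to = to;
            }
            String asSentence() { return from.name + " " + description + " " + to.name; }
        }

        public static void main(String[] args) {
            Topic mindMap  = new Topic("Mind map", "a visual way of capturing information");
            Topic tagCloud = new Topic("Tag cloud", "bigger fonts for more important items");
            System.out.println(new Link(mindMap, "borrows its typography from", tagCloud).asSentence());
        }
    }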
The closest thing I have found to achieving this is a nice piece of free software called Cmap Tools. The only downside I have seen is that, by default, it is intended for a more collaborative environment rather than individual use.
Maybe it's time for me to reinvent the wheel! After all, that's how most open source projects start :)