In Defense of the Legacy Application

It was an exciting time. At the small web development firm where I worked, we got the go-ahead to rewrite one of our client’s websites from scratch. The task was to replace a content management system written in Cold Fusion, which we had inherited, with a new application of our own creation. Of course, with Cold Fusion being *so* 1990s, we aimed to use Java, Struts, Hibernate, and JSPs. It was one of those precious moments in a career when you’re running with an open field in front of you and can really create something special, soup to nuts. This new application would be both powerful and simple; elegant and flexible; able to toast bread and brew coffee.

Three months later, when I stood back to see what we had done, I saw a complicated jumble of new technologies, splattered with just as many ugly compromises and hidden pitfalls as the original.

It’s the easiest thing in the world to look at a legacy application, especially one you did not create, and argue for it to be replaced. Most often, developers do this with a bit of attitude for good measure. (“Who would want to run that piece of crap technology?”) The egotism in these situations is interesting. At any given point in time, most developers have strong preferences about how an application should be developed; sometimes they are outright ideological about the technology. So it is only natural that, when presented with a legacy application they didn’t write, they’ll demand a replacement written the “right” way.

Problem is, there is one thing the legacy application does that is often overlooked: it works. Granted, many legacy applications don’t work very well, and some don’t work at all in some areas. But if it is actually being used, it is working in some capacity and providing some benefit.

Part of what makes it work is the effort it took to develop features and fix bugs that are now taken for granted. Any application goes through a maturation process once users actually begin to use it. There is, most times, a huge gulf between the way an application works when first developed and the way the users actually need or want it to work. No amount of up-front analysis can anticipate it fully. Once in the real world, bugs happen and adjustments have to be made. The amazing thing is how easy it is to forget all this effort, or even to count it against the application under the assumption that bugs won’t exist in a replacement. The exact same bugs may not exist, but other ones certainly will. The new application will have to start the process of maturing and stabilizing all over again. What makes this worse is that users may have bad memories of the bugs, which contribute to giving the legacy application a bad reputation in their minds. Developers, too, will have bad memories of having to fix the bugs, especially if they inherited the application. Yet a bug that has been found and subsequently fixed should count in *favor* of the application, because it contributes to its maturity, not against it.

Another psychological factor at play is the siren call of new technologies, and the assumption that they will make life far easier than they really do. New technologies are often over-hyped, and even when they are not, many folks assume they will help dramatically. Improvements in technology are sometimes important, and can lead to some critical efficiencies. But in the world of software development, they are almost always only marginal improvements. Granted, over time they can add up. But replacing a reasonably modern legacy application with a brand new technology is often not worth it. Perl was one of the first languages used in Web applications. I can remember using it to do all sorts of crazy text parsing during one of my first professional projects, which was to convert a slew of data about books from my company’s proprietary format to HTML. Looking back at that code, it is, on the one hand, a bit of a mess, as many procedural programs become. On the other, strangely, I am impressed by how easy it is simply to follow what happens from start to finish.

For developers, there should be a golden rule about replacing legacy applications: you aren’t allowed to say it should be replaced until you understand it. For one thing, you’re going to have to understand all of its functions anyway if you hope to replace them. Second, in going through the process of reviewing the legacy code, you are probably going to be surprised by how much it does. Your impressions of the application are almost certainly simpler than the reality of it. And your estimates of how hard it will be to replace are almost certainly too low.

I’m making this argument for legacy applications with the full understanding that it’s fruitless. This is far too emotional an issue most of the time. Humans being what they are, the urge to throw something away and start from scratch is too strong. And the one thing you can say about replacing a legacy application is that the developers who write the replacement are going to own it and be invested in it. And that’s certainly important. At least until they leave and the next developers come along. Give the new folks a few months. Just wait. Sooner or later, they’ll start arguing for the replacement to be replaced.
