You are not faster with an 80% solution

When the schedule for a software development project is tight, one of the first things non-technical people contemplate is reducing quality and/or scope and going for an 80% solution. This can work, but only under particular circumstances.

A recent discussion I had with some folks was about such a situation. Senior management had issued a very tight deadline, and multiple departments had to collaborate on a solution that spanned several parts of the organization and their core IT systems. Those systems (think of something like ERP and CRM) have a multitude of connections, grown over many years without governance, but none for the data that was now required.

The people involved were all quite supportive, and a constructive discussion took off. Not surprisingly, finding a common language took some time, especially because only after a while did we realize that in several cases the same term meant different things to different departments. In the end there was one particular detail for which no obvious solution was available.

I was responsible for the technical implementation on one of the two systems involved. My counterpart on the business side and I discussed various options, and eventually he came up with the idea to save time by going for the 80% solution. We talked through a number of aspects, and it soon became clear to me that this would actually require more time than simply “doing it properly”.

This sounds, at the very least, counter-intuitive to most people. So let me illustrate my point with a non-technical equivalent. Many years ago I had to prepare a daily list of press clippings for my then-boss. It basically meant going through a long, automatically created list and removing everything he was not interested in. This was quite time-consuming, and he was genuinely surprised when he found out how long it took.

His reaction was: “Why do you need so much time? Only 20 of the 200 articles are relevant.” That was of course correct. But what he had not considered was that I had to go through all 200 articles and check some of them in more detail than just the headline. So it made no real difference whether I cut the list down to 100, 60, or 20 articles; the time required stayed more or less the same. He conceded the point and quite soon decided there were better uses for my time.

It is exactly the same with taking shortcuts during a technical implementation. The easy part is deciding that you want to cut 20% off the next release. Things get tricky when it comes to deciding which 20% to skip without undesired side-effects. To do that, you need an in-depth understanding of the consequences. So quite often you spend more time trying to understand the potential ramifications of various scenarios than it would take to simply do it properly right from the start.

And that is only the beginning of the problem. You need to document the results of your analysis really carefully if you want to be sure that nothing is missed when the first code change is required. Even a seemingly minor bug fix has an increased potential to break something when the overall solution is built on a set of assumptions. Adding enhancements or new features will be even more “fun”, because people have to read through all the documentation and actually understand it. Together this makes for a time-consuming task with considerable risk of still overlooking something (or hitting an edge case that was not documented).
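To make the hidden-assumption point concrete, here is a minimal, entirely hypothetical sketch in Python. The function names and the currency example are mine, not taken from any real system; the pattern, however, is exactly what an 80% solution tends to leave behind.

```python
# Hypothetical 80% solution: multi-currency support was the 20% that
# got cut, so this code silently assumes every order is in EUR.
def order_total(line_items):
    # Correct only because non-EUR orders were scoped out of the release.
    return sum(item["net"] * item["qty"] for item in line_items)

# Months later, a "minor" enhancement reuses the same helper for orders
# synced from the CRM. Nothing in the code states the EUR assumption,
# so mixed-currency input now produces a plausible-looking wrong number.
def report_revenue(orders):
    return sum(order_total(order["line_items"]) for order in orders)

if __name__ == "__main__":
    eur_order = {"line_items": [{"net": 10.0, "qty": 2}]}
    usd_order = {"line_items": [{"net": 10.0, "qty": 2}]}  # looks identical in the data
    print(report_revenue([eur_order, usd_order]))  # 40.0 -- but in which currency?
```

Nothing here crashes or logs an error, which is precisely the problem: the assumption lives only in the heads (or documents) of the people who made the original cut.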

But in reality this documentation does not exist anyway, because you wanted to save time in the first place, and writing detailed documentation hardly helps with that. So people throw in bits and pieces, and you end up with a document that is typically even worse than most project documentation out there. Whoever has the pleasure of working on the code without having all the details in mind is really screwed. That includes you, by the way, because you will have forgotten most of it after only a few weeks.

The solution is usually a re-implementation, because it is faster than fixing the broken code. Of course, that re-implementation often needs additional logic to handle data migration for transactions that were completed with the 80% solution and now ruin your data quality. But hang on, that might just be the 20% of effort you can save right now …
