
Starting with Continuous Integration

In this post I look at things to consider when an organization wants to introduce Continuous Integration (CI). As in so many other situations, the non-technical challenges are more difficult to solve than the technical nitty-gritty.

Start Small Right Now

If ever there was a place for the proverb “perfect is the enemy of good” it is here. Waiting days, weeks, or months because you have not sorted out all the details is the worst thing you can do. Instead you should start immediately by just installing a CI server (Jenkins is the de facto standard) and setting up a simple job that does nothing but check out the source code from the VCS and compile it.

More advanced things like test automation, delivery pipelines, and integration with binary repositories like Artifactory or Nexus are not needed in the beginning.

Agile Automatically?

Most development teams that have not used CI so far are probably operating in a more or less non-agile fashion. That is fine and can stay as it is! While CI is virtually a prerequisite for agile development, that absolutely does not mean that teams following a waterfall model will not benefit considerably from CI.

So establishing CI can, but does not have to, be the first step of moving towards agile development. In fact I would argue that introducing CI is a large enough step for an existing development organization. Only when this has been “digested” should you think about moving towards agile. Otherwise too many things would change in parallel, similar to combining a new release of your own software with an upgrade of the underlying platform, e.g. the database server.

Frequency of Builds

This is the only part where I strongly recommend that you start at full throttle. What I mean by that is that you resist the temptation to run your builds only once a day or even less frequently. Ideally, every commit into the VCS triggers a build via a post-commit hook (here is more information for Git and Subversion). But polling the VCS every, say, 10 minutes is a good-enough approximation in most cases, and it is also a little easier to set up when you are just starting on the whole topic.

Why am I so adamant about this particular point? I think that almost-instant feedback is at the very core of CI, and the only way to deliver it is by running the build. All the points below change the amount of detail that is provided or reduce the risk of introducing bugs into the code. But the hugely powerful feeling you get when your first commit triggers a build is, in my view, the important aspect for successful adoption.

Test Automation

Start with “compilation works” as the lowest common denominator. When you want to start adding the use of “proper” test frameworks, feel free to do so. But it is nothing you need on day one.

When you are ready to do more, you need to focus on those parts of your code that are most relevant for the business. Resist the temptation to strive for high test coverage for its own sake (having a KPI on this is a really bad idea). Otherwise people will start writing tests for trivial helper functions, which on their own are of low relevance.

Instead take the critical parts of the business logic and develop a way to test them end-to-end (if possible without the GUI yet). With this approach you will implicitly cover all the lower-level code underneath. Unless you have someone on your team with practical experience with integration testing frameworks (e.g. Citrus), I would not start with a full-blown approach but rather develop a few custom scripts. A minimal sketch of what such a test could look like follows below.
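
The sketch below is purely illustrative: the OrderService class, its discount rule, and the amounts are made up, and JUnit 5 is assumed as the test framework. The point is merely that the test exercises the business logic through its service entry point rather than testing trivial helpers in isolation.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Stand-in for the real service layer that the GUI would normally call.
    class OrderService {
        double totalWithDiscount(double netAmount) {
            // hypothetical business rule: 10% discount for orders of 1000 or more
            return netAmount >= 1000.0 ? netAmount * 0.9 : netAmount;
        }
    }

    class OrderServiceEndToEndTest {

        private final OrderService service = new OrderService();

        @Test
        void largeOrderGetsVolumeDiscount() {
            // exercises the same entry point the GUI would use, but without the GUI
            assertEquals(900.00, service.totalWithDiscount(1000.00), 0.001);
        }

        @Test
        void smallOrderIsNotDiscounted() {
            assertEquals(999.99, service.totalWithDiscount(999.99), 0.001);
        }
    }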

When to start with more advanced topics, especially automated performance tests, depends on your individual situation, and I will not make recommendations about that here. But what you should do as soon as possible is read up on the subject and get an understanding of the different types of test and what they are good for. You do not need to implement everything now, but this will allow you to make informed judgements about the path you choose.

In Closing

You should now have an idea how to get started with CI quickly and in a way that delivers positive results pretty much from day one. Gaining traction in the organization should be your first priority in the beginning. There is a widespread misconception that things like CI, while theoretically the right thing to do, slow developers down. Nothing could be further from the truth. But unless you fight this impression fiercely, sooner or later management will ask to bypass that “nice new thing” and get code out of the door faster using the old way.


How to Implement Test-Driven Development

Test-Driven Development (TDD) is something I have long had difficulties with. Not because I consider it a bad concept, but because I found it very difficult to start doing. In hindsight it appears that the advice given in the respective books and online articles was not suitable for me. So here is the approach that finally worked.

It boils down to deviating from the pure doctrine. Instead of writing a test before starting on a new piece of code, I start with the actual code right away. Yes, that violates the core principle, although only for a while. But I have found that in most cases my understanding of the problem is still somewhat vague when I start working on it. So for my brain it is better if I do not have to split its capacity between solving the actual problem and thinking about how to devise a proper test and what all that means for the structure of the future code.

Once the initial version of the working code is there and manually validated, I add the test. From then on I am in a position to refactor the code without the risk of breaking something. And of course this refactoring is needed, because the first version of any code is never really good. While you could write “better” initial code, this would require spending more time upfront than you otherwise need for refactoring later. And it also ignores the fact that you only really understand the problem when you have finished implementing the solution. A small illustration of this flow follows below.
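
To make the flow a bit more tangible, here is a deliberately trivial sketch. The ShippingCost class and its pricing rule are hypothetical, and JUnit 5 is assumed; the only point is the order of the steps: working code first, tests right after, refactoring once the tests are in place.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Step 1: a first, manually validated version, written without a test.
    class ShippingCost {
        // returns the price in cents so that the comparisons stay exact
        static int calculateCents(double weightKg) {
            if (weightKg <= 2.0) {
                return 490;                                      // base rate
            }
            return 490 + (int) Math.ceil(weightKg - 2.0) * 150;  // per started kg above 2
        }
    }

    // Step 2: tests added once the behavior is understood and manually validated.
    // From here on the implementation can be refactored without fear of breaking it.
    class ShippingCostTest {

        @Test
        void smallParcelPaysBaseRate() {
            assertEquals(490, ShippingCost.calculateCents(1.5));
        }

        @Test
        void heavierParcelPaysPerStartedKilo() {
            assertEquals(790, ShippingCost.calculateCents(3.2)); // 490 + 2 * 150
        }
    }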

What I later realized was that my approach also helped me to write more testable code. But instead of consciously having to work on it, this sneaked in as a by-product of my modified way of doing TDD. For me this is a more natural way of learning and the results are typically better than following some formal approach.

Start Working with a Version Control System

Every so often I get asked about what to consider when introducing Continuous Integration (CI) to an organization. Interestingly though, most of the details discussed are about working with a version control system (VCS) and not CI itself. That is understandable because the VCS is the “gateway” for all developers. So here are my recommendations.

Use of Branches

It is important to distinguish between the goal (Continuous Integration) and the means (trunk-based development). Yes, it is possible to implement a system that facilitates frequent integration of code from various branches. On the other hand it is a considerably more complex approach than simply working off trunk. So in most cases I would argue that simpler is better.

In any case I recommend also looking into working with branches, and this video on YouTube is a good starting point. Whatever path you choose, it will improve your understanding of the subject, and you do not have to take my word for it.

Number of Commits

Most people who do not use a VCS will typically work through the day and create a file copy (snapshot-like) of their project in the evening just before they leave for the day. So it is a natural conclusion to transfer this approach like-for-like to the VCS. In practical terms this would mean performing a single commit every day just before you go home, with a commit message similar to “Work for <DATE>” or “WIP”.

But instead of doing so, developers should commit as often as possible. In my experience 5 to 15 times for a full day of development work is a good rule of thumb. There will be exceptions, of course. But whenever you are far outside this ballpark figure, you should analyze why that is.

Time to Commit

Instead of looking at time intervals, people should commit whenever the code has reached a stable state. Or in other words: it does not make sense to have people commit every 30 to 45 minutes. They should rather do this after, for example, having fixed a small bug (e.g. the correction of a threshold). But for changes that require more than roughly 60 minutes of work, things need to be broken down. This is looked at in detail in the section “Split Up Larger Work Items” below.

Especially when starting with a VCS, people will quite often forget to commit when they have completed a somewhat discrete piece of work. That is normal and happens to everybody. Even today, with more than ten years of experience on the subject, I still sometimes miss the point. Adding the step of committing a set of changes to your work routine is something that really takes time. It is a bit like re-ordering your morning routine in the bathroom: most people do things in the exact same order every day, and changing something there is just as difficult as making the commit happen “automatically”.

What to do when you realize you have missed the point depends on the circumstances. If this is your personal pet project, you may just virtually slap yourself on the head and continue, or do the infamous “WIP” commit. But if this is a critical project for your organization and you collaborate with others, you need to undo the last couple of changes until you are back where you should have performed the commit in the first place. Yes, this is cumbersome and feels like a waste of time, especially if you are working under time pressure, i.e. always.

But there is no alternative, and anyone who says differently (typically project managers without a solid background in software development) is just completely wrong. You need to be able to understand exactly who performed what change to the code base and when, and with messy commits this will not work in practice. Or to rephrase it in management speak: it is much more time-consuming and error-prone to go through untidy changes every single time you try to find something in the VCS than to spend the effort only once and correct things.

Split Up Larger Work Items

In many cases the effort to implement a new feature or fix a really nasty bug will exceed, let’s say, 60 minutes. In those cases the developer should have a rough plan for how the overall work will be structured. For a new feature this could mean something like:

  1. Add test-cases that pass for the current implementation
  2. Re-factor in preparation without changing behavior
  3. Add test-cases for new feature
  4. Implement first half of new feature but ensure that it cannot be executed yet (think feature-toggle here; a minimal sketch follows after this list)
  5. Finish new feature and enable execution
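
The following sketch illustrates the feature-toggle idea from step 4. The CheckoutService class, its methods, and the system property name are all made up for illustration; the point is only that the half-finished code can be committed while the existing behavior stays deployable.

    // Hypothetical sketch of a simple feature toggle guarding unfinished work.
    public class CheckoutService {

        // disabled by default; would only be switched on in step 5,
        // e.g. via -Dfeature.expressCheckout=true
        private static final boolean EXPRESS_CHECKOUT_ENABLED =
                Boolean.getBoolean("feature.expressCheckout");

        public String checkout(String basketId) {
            if (EXPRESS_CHECKOUT_ENABLED) {
                return expressCheckout(basketId); // new, still incomplete code path
            }
            return standardCheckout(basketId);    // existing, fully working behavior
        }

        private String standardCheckout(String basketId) {
            return "order created for basket " + basketId;
        }

        private String expressCheckout(String basketId) {
            // work in progress: committed in small, consistent steps,
            // but unreachable until the toggle is switched on
            return standardCheckout(basketId);
        }
    }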

Working Code

The example above of how to structure the implementation of something larger has a critical aspect to it: at every point in time the code in the VCS must be in a consistent and operational (i.e. deployable) state. If things look different in your development environment (i.e. some parts are not working every now and then), as opposed to the VCS, that is ok, although it has proven to make life easier when your environment and the VCS do not stray too far apart from each other.

What I discovered for myself is that the approach has a really nice by-product: cleaner and more stable code. In hindsight I cannot say when this materialized for me, so there is a small chance that from a clean code perspective things got worse before they got better. But my gut feeling tells me that this was not the case, because always-working code also means better-structured code, which is by definition more stable due to reduced complexity (relative to a messy codebase).

Fix Immediately

This has been written about many times and I merely mention it for completeness here. Whenever a change breaks the code, and thus causes automated tests to fail, the highest priority is to get things back into a working state. No exceptions ever!

When NOT to Commit

A VCS is not a backup system for your code; it is a version control system. This also means that you should not simply commit at the end of the day before you go home, unless your code happens to be in a working state. If you feel the need or are obliged to keep such copies, use a separate backup location and/or a script that handles this. But please do not clutter the VCS with backups.

At least in the early days of CI (the early 2000s) it was a somewhat common phenomenon at the beginning of projects that at the end of the day people checked in whatever they had done so far and went home. In many cases this broke the code and tests failed on the CI server. Until the next morning it was not possible for others to work effectively because you cannot reasonably integrate further changes with an already broken codebase. That is bad enough if people are located in one timezone. But think about the effect it has on an organization that works with a follow-the-sun approach.

Commit Messages

The reason for commit messages, in addition to the technical details that the VCS records anyway, is to describe the intent of the change. It does not make sense to list technical details, because those can always be retrieved with much more precision from the VCS log. But why you performed the sum of those changes is usually hard to extract from the technical delta. So think about how you would describe the change in a way that allows you to understand it when you look at it in six months. A purely illustrative example follows below.
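
To make this a bit more concrete, here is a made-up example of the same change described in two ways (class and wording are entirely hypothetical):

    Technical, low value:  "Changed OrderValidator and two configuration files"
    Intent, more useful:   "Reject orders below the configured minimum value, so that
                            unprofitable shipments are no longer accepted"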

In Closing

These are just a few points I learned over the years and have been able to validate in various projects. They are practical and provide, in my view, a good balance between the ideal world and the reality you find in many larger organizations. Please let me know if you agree or (more importantly!) disagree.


Structuring a VCS Repository

My main programming hobby project will soon celebrate its tenth birthday, so I thought a few notes on how I structure my VCS repository might be of interest. The VCS I have been using since the beginning is Subversion. (When I started, Git had already been released, but was really not that popular yet.) So while some details of this article will be specific to Subversion, the general concepts should be applicable elsewhere as well.

When it comes to the structure of a repository, Subversion does not impose anything from a technical point of view. All it sees is a kind of file system, with all the pros and cons that come with the simplicity of this approach. It makes it easy for people to start using it, which is really good. But it also does not offer help for more advanced use-cases, so people need to find a way to map certain requirements onto that file system concept.

As a result, a convention has emerged and been in place for many years now. It says that at the top level of the project there should be only the following folders:

  • trunk: Home of the latest version (sometimes called the HEAD revision)
  • branches: Development sidelines where work happens in isolation from trunk
  • tags: Snapshots that give meaningful names to a certain revision

You will find plenty of additional information on the subject when searching the Internet. I can also recommend the book “Pragmatic Version Control: Using Subversion”, although it seems to be out of print now.

With these general points out of the way, let me describe how I work on my project. There are only a few core rules, and despite their simplicity they let me handle all situations.

  • The most important aspect of the structure of my SVN repository is that all active development on the coming version happens on trunk. See this article for all the important details on why you really want to follow that approach in almost all cases.
  • Once a new version is about to be released, I need a place where bug-fixes can be developed. So I create a release branch (e.g. /branches/releases/v1.3) with major and minor version number but not the patch version (I use semantic versioning). From this release branch I then cut the release (v1.3.0 in this case) by pointing the release job of my CI server to the release branch.
  • Once the release is done, I create a tag that also includes the patch version. In this example the tag is a copy from /branches/releases/v1.3 to /tags/releases/v1.3.0.
  • Now I return to working on the next release (v1.4) by switching back to trunk.
  • Bug fixing happens primarily on trunk with fixes being back-ported to released versions. There are cases when this is not practical, of course. The two main reasons are that significant structural changes were done on trunk (you do refactor, don’t you?) or another change has implicitly removed the bug there already. But that is the exception.
  • When a bug-fix is needed on a released version, I temporarily switch my working copy to the release branch and do the respective work there. Unless the bug is critical I do not release a new version immediately after that, though. So this may repeat a few times, before the maintenance release.
  • The maintenance release is then cut, again, from /branches/releases/v1.3 and after that a new tag is created at /tags/releases/v1.3.1 (the resulting layout is sketched below).
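
For illustration, after the v1.3.1 maintenance release the relevant part of the repository would look roughly like this (version numbers taken from the example above):

    /trunk                      <- active development on v1.4
    /branches
        /releases
            /v1.3               <- bug fixes for the v1.3 line
    /tags
        /releases
            /v1.3.0
            /v1.3.1             <- immutable snapshots of the releases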

Those rules have proven to work very well and I hope they will continue to do so for the next ten years. I have been quite lucky in that, although for me this is still a hobby project, the result is used by many global companies and organizations in a business-critical context. There are at least 11,000 production installations that I am aware of, so I cannot be casual about the reliability of the delivery process.

 

Open Services for Lifecycle Collaboration

I can honestly say that when I wrote my post about ALM and middleware, I hadn’t heard about the Open Services for Lifecycle Collaboration initiative. But it is exactly the kind of thing I had in mind. These guys are working on the definition of a minimum (but expandable) set of features and functions that allow easy integration between the various tools that can usually be found in an organization. To my knowledge no products exist yet, but I really like the idea and approach.

Version Control Systems and other Repositories

Recently, a few colleagues and I had a very interesting discussion about what should go into a Version Control System (VCS) and what should not. In particular we were arguing as to whether things like documents or project plans should go in. Here are a few things that I came up with in that context.

I guess the usage of VCS (and other repositories) somehow comes down to a few general desires (aka use-cases):

  • Single source of truth
  • History/time machine
  • Traceability
  • Collaboration
  • Automation of builds etc.

In today’s world with its many different repositories you can either go for a mix (best-of-breed) or the lowest common denominator, which is usually the VCS. So what’s stopping people from doing it properly (i.e. best-of-breed)?

  • Lack of conceptual understanding:
    • Most people involved in those kinds of discussions usually come from a (Java) development background. So there is a “natural” tendency to think VCS. What this leaves out is that other repositories, which are often DB-based, offer additional capabilities. In particular there are all sorts of cross-checks and other constraints that are enforced. Also, given their underlying architecture, they are usually easier to integrate with in terms of process-driven approaches.
    • Non-technical folks are mostly used to versioning-by-filename and require education to see the need for more.
  • Lack of repository integration: Interdependent artefacts spread over multiple repositories require interaction, esp. synchronisation. Unless some kind of standard has emerged, it is a tedious task to do custom development for these kinds of interfaces. Interestingly, this goes back to my post about ALM needing middleware.
  • Different repositories have clients working fundamentally differently, both in terms of UI and underlying workflow (the latter is less obvious but far-reaching in consequence). Trying to understand all this is really hard. BTW: This already starts with different VCS! As an example just compare SVN, TFS and Git (complexity increasing in that order, too) and have “fun”.
  • Lack of process: Multiple repositories asking for interaction between themselves also means that there is, at least implicitly, a process behind all this. Admittedly, there is also a process behind a VCS-only approach, but it is less obvious and its evolution is often ad hoc in nature. With multiple repositories a more coordinated approach to process development is required, also because this often means crossing organisational boundaries.

Overall, this means that there is considerable work to be done in this area. I will continue to post my ideas here and look forward to your comments!

ALM and ERP Software: Middleware needed for both

The idea to write this post was triggered when I read an article called “Choosing Agile-true application lifecycle management (ALM)”, and in particular by its observation that many ALM tools come as a package that covers multiple processes in the lifecycle of an application. Although strictly speaking this is not a “we-cover-everything” approach, it still strongly reminds me of the approach that ERP software initially took. Its promise, put simply, was that an entire organization with all its activities (to avoid the term process here) could be represented without the need to develop custom software. This was a huge step forward and some companies made, and still make, a lot of money with it.

Of course, the reality is a bit more complex, and so organizations that embrace ERP software have to choose between two options: either change the software to fit the organization, or change the organization to fit the software. This is not meant to be ERP bashing, it simply underlines the point that the one-size-fits-all approach has limits. And a direct consequence of this is that although the implementation effort can be reduced considerably, there is still a lot of work to be done. This is usually referred to as customizing. The more effort goes into that, the more the ERP software changes into something individual. So the distinction between COTS (commercial off-the-shelf) software, i.e. the ERP, and something developed individually gets blurred. This can reduce the advantages of ERP, and especially the cost advantage, to an extent.

And another aspect is crucial here, too. An ERP system, pretty much by definition, is a commodity in the sense that the activity it supports is nothing that gives the organization a competitive advantage. These days some of the key influencing factors for the latter are time-to-market and, related to that, agility and flexibility. ERP systems usually have multiple, tightly integrated components and a complex data model to support all this. So every change needs careful analysis so that it doesn’t break something else. No agility, no flexibility, no short time-to-market. And in addition, all organizations I have come across so far need things that their ERP does not provide. So there is always a strong requirement to integrate the ERP world with the rest, be it other systems (incl. mainframe) or trading partners. Middleware vendors have addressed this need for many years.

And now I am finally coming back to my initial point. In my view ALM tools usually cover one or several aspects of the whole thing, but never everything. And even if they did, nobody these days starts on a green field. So here, too, we need to embrace reality and accept that something like ALM middleware is required.

Tooling for Agile and Traditional Development Methodologies

A hot topic of the last few years has been the debate as to whether traditional (aka waterfall-like) methodologies or agile ones (XP, SCRUM, etc.) deliver better results. Much of the discussion that I am aware of has focused on things like

  • Which approach fits the organization?
  • How strategic or tactical (both terms usually go undefined) is the project and how does this affect the suitability of one approach over the other?
  • What legal and compliance requirements must be taken into account?
  • How large and distributed is the development team?

This is all very important stuff and thinking about it is vital. Interestingly, though, what has largely been ignored, at least in the articles I have come across, is the tooling aspect. A methodology without proper tool support has relatively little practical value. Well, of course the tools exist. But can they effectively be used in the project? In my experience this is mostly not the case when we speak about the “usual suspects” for requirements and test management. The reason for that is simply money. It comes in many incarnations:

  • Few organizations have enterprise licenses for the respective tools and normally no budget is available for buying extra licenses for the project. The reason for the latter is either that this part of the budget was rejected, or that it was forgotten altogether.
  • Even if people are willing to invest for the project, here comes the purchasing process, which in itself can be quite prohibitive.
  • If there are licenses, most of these comprehensive tools have a steep learning curve (no blame meant, this is a complicated subject).
  • No project manager, unless career-wise suicidal, is willing to have his budget pay for people getting to know this software.
  • Even if there was budget (in terms of cash-flow), it takes time and often more than one project to obtain proficiency with the tools.

Let’s be clear, this is not product or methodology bashing. It is simply my personal, 100% subjective experience from many projects.

Now let’s compare this with Version Control Systems (VCS), where the situation looks quite different. Products like Subversion (SVN) are well-established and widely used. Their value is not questioned and every non-trivial project uses them. Why are things so different here, and since when? (The second part of the question is very important.) VCSes have been around for many years (RCS, CVS and many commercial ones) but none of them really gained the acceptance that SVN has today. I cannot present a scientific study here, but my gut feeling is that the following points were crucial:

  • Freely available
  • Very simple to use, compared to other VCS. This causes issues for more advanced use-cases, especially merging, but allows for a fast start. And this is certainly better than avoiding a VCS in the first place.
  • Good tool support (e.g. TortoiseSVN for Windows)

Many people started using SVN under the covers for the aforementioned reasons, and from there it gradually made its way into the official corporate arena. It is now widely accepted as the standard. A similar pattern can be observed for unit testing (as opposed to full-blown integration and user acceptance testing): many people use JUnit or something comparable with huge success. Or look at Continuous Integration with Hudson. CruiseControl was around quite a bit longer, but its configuration was perceived to be cumbersome. And on top of its ease of use Hudson added something else: extensibility via plug-ins. The Hudson guys accepted upfront that people would want to do more than what the core product could deliver.

All these tools were designed bottom-up coming from people who knew exactly what they needed. And by “sheer coincidence” much of this stuff is what’s needed for an agile approach. My hypothesis is that more and more of these tools (narrow scope, free, extensible) will be coming and moving up the value chain. A good example is the Framework for Integrated Test that addresses user acceptance tests. As this happens and integration of the various tools at different levels progresses, the different methodologies will also converge.