
Configuration Management – Part 9: The Audit Trail

Keeping track of changes is a critical capability of every configuration management system, not least because legal requirements like SOX (the Sarbanes-Oxley Act) demand it. It can be accomplished in several ways: basically, you either use an existing tool like a VCS (version control system) or have something custom-built.

When possible, I tend to prefer a VCS because it is (hopefully) already part of your process and governance approach. A typical workflow is that the underlying assets (i.e. configuration files) are changed and the VCS client is then used to commit the change. The commit message allows you to record the intent, which is the critical information.

But there are cases when you need to be able to track things outside the VCS. In all cases I have seen so far, the reason was that some information should not be maintained within the VCS, mostly for security reasons. Organizations are often relaxed about data like host names in non-PROD environments, but this changes abruptly when PROD comes into play. While I always think “security by obscurity” when I have that discussion, it is also a fight not worth having.

The other reason is operational procedures. The operations team often has a well-established approach that maintains configuration files for many applications in a unified way, typically involving a dedicated location on network storage where the configuration data sit. Ideally, there should also be a generic mechanism to track changes there. A dedicated VCS is of course a good option, but operations staff without a development background often (rightly) shy away from that route.

So it comes down to what the configuration management system itself offers. What I have implemented in WxConfig is a system where every operation that changes configuration data results in an audit event that gets persisted to disk. It includes metadata (e.g. what user initiated the change from which IP address), the actual change (e.g. file save from UI or change of value via API), and the old and new version of the affected configuration file.
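
To make this more tangible, here is a minimal sketch in plain Java of what persisting such an audit event could look like. This is emphatically not WxConfig’s actual implementation; the element names, directory layout, and method signature are made up for illustration:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.Instant;

    public class AuditTrail {

        // Persists one audit event: XML metadata plus the old and new
        // version of the affected configuration file.
        public static void record(String user, String ipAddress, String operation,
                                  String fileName, String oldContent, String newContent)
                throws IOException {
            String timestamp = Instant.now().toString();
            // Real code would escape XML special characters in the values.
            String xml = "<auditEvent>\n"
                    + "  <timestamp>" + timestamp + "</timestamp>\n"
                    + "  <user>" + user + "</user>\n"
                    + "  <ipAddress>" + ipAddress + "</ipAddress>\n"
                    + "  <operation>" + operation + "</operation>\n"
                    + "  <file>" + fileName + "</file>\n"
                    + "</auditEvent>\n";
            Path dir = Paths.get("audit", timestamp.replace(':', '-'));
            Files.createDirectories(dir);
            Files.write(dir.resolve("event.xml"), xml.getBytes(StandardCharsets.UTF_8));
            Files.write(dir.resolve("before.txt"), oldContent.getBytes(StandardCharsets.UTF_8));
            Files.write(dir.resolve("after.txt"), newContent.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws IOException {
            record("jdoe", "10.0.0.17", "valueChangedViaApi",
                   "app.properties", "timeout=30\n", "timeout=60\n");
        }
    }

Storing the old and new versions of the affected file alongside the XML metadata is what makes the restore scenario described below possible.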

The downside compared to a well-chosen VCS commit message is that the system cannot record the intent. On the other hand, no change is ever lost, because no manual activity is needed. In practice this far outweighs the missing intent, at least for me. It has also proven helpful during development when I had accidentally removed data: restoring it from an audit record was far easier than looking it up in its original source.

All audit data are persisted to files, and the metadata is recorded as XML. That allows for automated processing, if required by e.g. a GRC (Governance, Risk Management, and Compliance) system or legal frameworks like the aforementioned Sarbanes-Oxley Act.

Configuration Management – Part 7: The Time

An often overlooked aspect of configuration management is time. There are many scenarios where the correct value is dependent on time (or date for that matter). Typical examples are organizational structures (think “re-org”) or anything that is somehow related to legislation (every change in law or regulation becomes active at a certain time).

There are several approaches to deal with this situation:

  • Deployment: This is the simplest way, and its ease comes with a number of drawbacks. What you basically do is make sure that a deployment happens exactly at the time when the changes should take effect. Obviously this collides with a typical CD (Continuous Deployment) setup. And while it is certainly possible to work around this, the remaining limitations usually make it not worthwhile. As to the latter: what if you need to process something with historical data? A good example is tax calculation, where you need to re-process an invoice that was issued before the new VAT rate became active. Next, how do you perform testing? Again, nothing impossible, but a case of having to find ways around it. And so on…
  • Feature toggle: Here the configuration management solution has some kind of scheduler to ensure that the cut-over to the new values is made at the right point in time. This is conceptually quite similar to the deployment approach, but has the advantage of not colliding with CD. All the other limitations mentioned above still apply, though. WxConfig supports this approach.
  • Configuration dimension: The key difference here is that time is explicitly modeled into the configuration itself. That means every query not only has the normal input, usually a key or XPath expression, but also a point in time (if omitted, the current time is used as the default). This gives full flexibility and eliminates the aforementioned limitations. Unfortunately, very few people take this into consideration when they start their project. Adding it later is always possible, but it comes with a disproportionate effort compared to doing it right from the start. WxConfig will add support for this soon (interestingly, no one has asked for it so far). A small sketch of the idea follows right after this list.
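
To illustrate the configuration dimension approach, here is a minimal sketch in plain Java. This is not WxConfig’s API (which does not support the feature yet anyway); class and method names, and the example VAT rates, are made up for illustration. Each key maps to a timeline of values, and a lookup without an explicit point in time defaults to “now”:

    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class TemporalConfig {

        // For each key, the values ordered by the instant they become effective.
        private final Map<String, NavigableMap<Instant, String>> entries = new HashMap<>();

        public void put(String key, Instant effectiveFrom, String value) {
            entries.computeIfAbsent(key, k -> new TreeMap<>()).put(effectiveFrom, value);
        }

        // Value effective at the given point in time; null if none.
        public String get(String key, Instant pointInTime) {
            NavigableMap<Instant, String> timeline = entries.get(key);
            if (timeline == null) return null;
            Map.Entry<Instant, String> e = timeline.floorEntry(pointInTime);
            return e == null ? null : e.getValue();
        }

        // Convenience overload: defaults to "now", as described above.
        public String get(String key) {
            return get(key, Instant.now());
        }

        public static void main(String[] args) {
            TemporalConfig cfg = new TemporalConfig();
            cfg.put("vatRate", Instant.parse("2000-01-01T00:00:00Z"), "16");
            cfg.put("vatRate", Instant.parse("2007-01-01T00:00:00Z"), "19");
            // Re-processing an old invoice picks up the rate valid back then:
            System.out.println(cfg.get("vatRate", Instant.parse("2006-06-15T00:00:00Z"))); // 16
            System.out.println(cfg.get("vatRate")); // 19 (current time)
        }
    }

Re-processing the old invoice from the tax example then simply means querying with the invoice date instead of the current time.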

That was just a quick overview, but I hope it provided some insights. Comments are very welcome (as always).


Configuration Management – Part 5: The External World

A standard requirement in configuration management is using values that already exist somewhere else. The most obvious places are environment variables (defined by the operating system) and Java system properties. Others are existing (!) databases, e.g. from an ERP system, or files.

The reason for referencing values directly from their original sources, instead of duplicating them in a copy-paste fashion, is the Don’t-Repeat-Yourself (DRY) principle. Although the latter is typically discussed in the context of code, it applies at least as much to configuration. We all know those “great” applications that require manual updates of the same value in different places. And if you miss only one, all hell breaks loose.

For re-use of values within a file, the standard approach is variable interpolation. A well-known syntax for this comes from the build tool Apache Ant, where ${variableName} can be placed anywhere in a value assignment. Apache Commons Configuration, and therefore also WxConfig, supports this syntax. In WxConfig you can even reference values from other files that belong to the same Integration Server package.
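
As a small illustration, here is how such interpolation behaves with plain Apache Commons Configuration 2 (a minimal sketch; the file name and values are made up, and WxConfig’s own API on top of this is not shown):

    import org.apache.commons.configuration2.PropertiesConfiguration;
    import org.apache.commons.configuration2.builder.fluent.Configurations;

    public class InterpolationDemo {
        public static void main(String[] args) throws Exception {
            // Contents of app.properties:
            //   tmpDir  = /var/tmp/myapp
            //   logFile = ${tmpDir}/app.log
            PropertiesConfiguration config =
                    new Configurations().properties("app.properties");
            // getString() resolves ${tmpDir} at invocation:
            System.out.println(config.getString("logFile")); // /var/tmp/myapp/app.log
        }
    }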

But what about other sources? Well, for the aforementioned environment variables and Java system properties, the respective interpolators from Apache Commons Configuration can also be used in WxConfig. In addition, WxConfig defines several interpolators of its own:

  • Cross package: Assuming you have an Integration Server package that holds some general values (often referred to as “global”), those can be referenced. (There are multiple ways to deal with truly global values and the specifics really determine which one is used best.)
  • Current date/time: Gets the current date and/or time, which is admittedly a bit of an edge case. One could argue that this is not really a configuration value, and that is correct. However, there are scenarios where the ability to easily obtain such a value comes in very handy. Think about files that get created while processing data. Instead of manually concatenating the path, the base filename, the date/time stamp, and the file suffix in code, you could just have something like this in your configuration file: workerFile=${tmpDir}/appFile_${date:yyyyMMdd-HHmmssSSS}.dat. Doesn’t that make things a lot cleaner to read? (A sketch of such a date lookup follows right after this list.)
  • File content: Similar to date/time, this was added primarily for convenience and clarity. The typical use case is templates, where a file (possibly maintained by another application) is just read directly into a configuration value.
  • Code invocation: When all else fails, you can have an arbitrary piece of logic executed and the result placed into the configuration value.
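
To show how such an interpolator can be built, here is a minimal sketch of a custom date/time lookup on top of Apache Commons Configuration 2. This is not WxConfig’s actual implementation (and newer Commons Configuration releases even ship a comparable date lookup out of the box); the file and class names are made up:

    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    import org.apache.commons.configuration2.PropertiesConfiguration;
    import org.apache.commons.configuration2.builder.fluent.Configurations;
    import org.apache.commons.configuration2.interpol.Lookup;

    public class DateLookupDemo {

        // Resolves ${date:<pattern>} to the current date/time, freshly at
        // each invocation.
        static class DateLookup implements Lookup {
            @Override
            public Object lookup(String pattern) {
                return LocalDateTime.now().format(DateTimeFormatter.ofPattern(pattern));
            }
        }

        public static void main(String[] args) throws Exception {
            // Contents of app.properties:
            //   tmpDir     = /var/tmp/myapp
            //   workerFile = ${tmpDir}/appFile_${date:yyyyMMdd-HHmmssSSS}.dat
            PropertiesConfiguration config =
                    new Configurations().properties("app.properties");
            config.getInterpolator().registerLookup("date", new DateLookup());
            System.out.println(config.getString("workerFile"));
        }
    }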

It is worth noting that all interpolators resolve at invocation, so you always get current results. In the case of the file content interpolator, this has the downside of repeated file I/O; but if that really becomes a bottleneck, you are still no worse off than if the read happened somewhere else in the code. And file system caching also still helps …

With this set of options, pretty much all requirements for redundancy-free configuration management can be met.

Configuration Management – Part 4: The Differences

The capability to deal with multiple environments is another strength of the WxConfig solution. It basically uses a late-binding approach and, upon loading, checks the environment type of the current system (Docker-compatible, by the way). That way, only values valid for the current environment type will be loaded.

Before diving into the details, I want to share a few more conceptual thoughts. It is worth noting that WxConfig primarily operates on files for storing configuration values, while many projects I have seen use a database for that. A database offers functionality that I would consider “convenience” in the context of configuration management, but it also has some drawbacks (for this use case).

Primarily, access to a file’s content is always easier than access to a database. Yes, there is probably no reasonably conceivable scenario that could not be solved with a database. However, it is easier to e.g. view a file in a terminal session than to start the database command line client and run a SELECT statement. This matters especially from a support perspective, where you don’t want additional complexity.

But even more important are things like scripted operations against such files, and version control. Why scripting, you may ask. Well, because I see it as a “last line of defense” to achieve a certain behavior. There will always be use cases out there that I have not anticipated, and for those it is possible to augment the files and achieve what is needed outside of my solution. That is also possible with a database – somehow. But do you really prefer that over an Ant script run from your Continuous Integration (CI) job?

And for version control the advantage of files is probably even more obvious. Yes, you can work with something like Liquibase and effectively have a file representation of your database’s structure and contents. But is that so desirable when you could just have plain files?

But, finally, back to different environments and associated aspects. The approach I have chosen is to dynamically load, at run-time, only those files that are suitable for the current system. So for every config file there is a check whether it is relevant or not. Those that are relevant get loaded, and the contents of all loaded files are added to memory for use by custom code. That is the whole secret.

To rephrase: once initialization is finished, all the relevant configuration values are available to the application. Their source (i.e. which files were loaded) is completely abstracted away. It is this separation that makes WxConfig so powerful. In the meantime, that basic approach has also been applied to other conditions, because at its core the environment type is just that: a condition.

So we have regular (i.e. unconditional) file loading and conditional loading. Other conditions currently supported are the type of operating system, the version of the run-time container, a timeframe, and the hostname. They can also be combined with a logical AND, so you can easily develop a solution on Windows that will later run on Linux; all the specifics can be handled by configuration. A minimal sketch of this conditional loading follows below.
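
The following sketch shows the idea in plain Java, assuming a made-up naming convention where condition tokens are embedded in the file name (WxConfig’s real mechanism is richer and not shown here; everything in this sketch is for illustration only). A file like app.PROD.linux.properties is loaded only if all of its tokens match the current system, i.e. the conditions are combined with a logical AND:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Locale;
    import java.util.Properties;
    import java.util.Set;

    public class ConditionalLoader {

        public static void main(String[] args) throws IOException {
            // Facts about the current system; the environment type could come
            // from an environment variable, the OS from a system property.
            Set<String> facts = new HashSet<>(Arrays.asList(
                    System.getenv().getOrDefault("ENV_TYPE", "DEV"),
                    System.getProperty("os.name").toLowerCase(Locale.ROOT)
                            .startsWith("windows") ? "windows" : "linux"));

            Properties merged = new Properties();
            try (DirectoryStream<Path> files =
                         Files.newDirectoryStream(Paths.get("config"), "*.properties")) {
                for (Path file : files) {
                    // Tokens between base name and suffix are the conditions,
                    // e.g. "app.PROD.linux.properties" -> {PROD, linux}.
                    String[] parts = file.getFileName().toString().split("\\.");
                    Set<String> conditions = new HashSet<>(
                            Arrays.asList(parts).subList(1, parts.length - 1));
                    if (facts.containsAll(conditions)) { // logical AND over all tokens
                        try (InputStream in = Files.newInputStream(file)) {
                            merged.load(in); // add contents to the in-memory view
                        }
                    }
                }
            }
            // Custom code only ever sees the merged values; the source files
            // are abstracted away.
            merged.list(System.out);
        }
    }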

When I came up with the idea more than nine years ago, it seemed like the obvious approach to take. Since then there have been numerous moments when this decision was reaffirmed by the flexibility it gave me.

Configuration Management – Part 3: The Journey Begins

Today I want to tell you about a configuration management solution I have developed. It is built for Software AG’s ESB (the webMethods Integration Server). The latter did not, for a very long time, have built-in capabilities for dealing with configuration values. So what many projects did was hack together a quick-and-dirty “solution” that was basically a wrapper around Java properties.

At the time (early 2008) I was working in a “SWAT” team in presales, where we dealt with large prospects that typically looked for a strategic solution to their integration requirements. Of course configuration management, although not as prevalent as today, came up more and more often. So instead of trying to dodge the bullet, I started working on a solution. And since webMethods Integration Server is built on an open architecture, you can easily add things.

So let’s dive into some of the details of my configuration management solution, called WxConfig. Its overall design goal was to have packages (roughly the platform’s equivalent of a web application on servlet containers) that could be moved around without any additional manual setup or configuration. That requirement stems from the fact that we are in an ESB/integration context: by definition, most packages have some “relationship” to the outside world and are not applications running more or less in isolation. A typical example would be a connection to a database or an ERP system.

Traditionally there were two approaches for dealing with these external environment dependencies. On development (DEV) environments, there was some kind of document telling the developers what setup/configuration they had to apply to their DEV environment. For all “higher” stages like test (TEST) or production (PROD), this was hopefully handled by some kind of automation; although back in 2008 that was rarely the case, and more often than not a manual checklist was used there as well.

From a presales perspective (that is where it all started), the DEV part was the relevant one. My idea was to come up with something that could transform the document with the setup instructions into some kind of codification. So I basically devised an XML format (or rather a convention) that did just that. It started out quite small, with only schedulers being supported. (In that specific case we needed periodic execution of logic for a demo.)

Since then a lot of other things have been added. The approach was so successful that many customers have moved their entire environment setup (well, at least those parts that are supported) to the WxConfig solution. And there are, interestingly enough, quite a few customers who use WxConfig solely for environment configuration and not for configuring their actual applications.

You may now ask: Hang on, what about the other environments, like TEST and PROD? Very good question! This is one of the core strengths of the WxConfig solution and therefore warrants a separate post. Stay tuned …

Configuration Management – Part 2: Splendid Isolation

After the first post on configuration management, today I want to write about what I consider one of the biggest flaws in all the systems I am aware of so far: they all live pretty much in “splendid isolation” and ignore whatever surrounds them.

My professional background is integration software, and I attribute to this fact that, for me, it is a rather obvious real-world requirement not to ignore what is around you but to connect with it. In the past this was typically applied to applications on the business level, so having the ERP system talk with the CRM and SCM applications is well established. But of course only your own imagination stops you from doing the same for other software. For an example, please check my post on the integration of ALM software.

So what specifically makes integration an interesting aspect of configuration management? There are many different “layers” of configuration management out there. It starts with the infrastructure, where you typically find a CMDB (Configuration Management Database) that contains information about computers and other components (especially network equipment). Then there are tools like Chef, Puppet, Ansible, etc. that manage the operating system. (I know that is quite a simplification, because you can obviously handle other aspects that way as well; and I have done so extensively.)

A whole “new” level comes into play with containers like Docker. They come, again, with their own tools to manage configuration and operations. Then there is infrastructure-level software like databases, middleware, application servers, etc.

And last but not least, you finally hit the actual application. This is, by the way, the only level, if any, that the business will be interested in. They do not care about the vast complexity underneath, the interdependencies, etc. But of course they expect us to deliver value fast, without interruptions or failures, and for as little money as possible.

If there is no well-managed link between all those layers (and of course the number of layers needs to be multiplied by the number of environment types), how could you handle things at all, let alone handle them well?

This is why I firmly believe you need integration, governance, workflow capabilities, etc. around configuration management.

Configuration Management – Part 1: The Horrors

Why is Configuration Management (CM) so important? The simple answer is: Because the correct configuration is one of the three critical parts for running a non-trivial piece of software successfully. (The other two are the actual code and the data.)

I still remember my first project when I was fresh out of university. While the team was great, everything else was a bloody nightmare. In particular, the software we developed for did not support configuration management in any way. Staging was basically done by exporting from the development system (DEV), manually finding and changing values in the export (at least it was plain ASCII), importing it on the test system (TEST), and so on.

As if this wasn’t bad enough, there actually was no TEST environment. The customer had decided to save money, so we had a combined DEV+TEST system. Thus, more than once we ended up with the production system (PROD) accessing the DEV+TEST database. You may ask why we did not simply write a script to change the various settings automatically. Well, we were not allowed to, because it was not budgeted for.

And it gets even worse, because I actually lied to you when describing the staging process before. The problem was that while the export from one environment worked fine, the import into the other stopped working after some time. It was basically a limitation of the maximum size the import file could have. Our solution (after consultation with the vendor!) was to shut down both environments (DEV+TEST and PROD), copy the binary storage files that held the development contents from one to the other, adjust all the relevant values manually on PROD, and then perform a test.

Of course that test failed a couple of times until all the changes had been made and recognized by the system (caching can be a dangerous thing). And because we were interfacing with at least five other systems (all PROD!), such a test run was not so easy. To make things more complicated, every set of test data (to be manually prepared on all the surrounding systems) could only be used once. So we made a lot of “friends” among the folks operating the other systems.

We pulled a few all-nighters and somehow got things up and running (there were many other “nice” things with the software, of course). But it was frustrating and very inefficient. Ever since, configuration has been something I deeply care about. And now you probably understand why.