Category Archives: Technical Stuff

Chef Infra Server Moving to Cloud

As part of a blog post about the new v14 of Chef Infra Server, it was announced that existing functionality will from now on be deprecated in favor of the cloud version. It will be interesting to see how this works out. Personally, I have never been a fan of forcing customers off an existing product. It is a dangerous move that bears the risk of customers switching vendors entirely. Especially so if it comes with a major architectural shift like the one from on-premises to cloud.

I have been a happy user of Chef Server for about five years now, although only for a very small number of machines (single digit). The decision for Chef had been made at a time when Ansible was still in its early stages. But with this latest development I will need to move away from Chef. It is a pity, because I really like the tool and have built various custom extensions for it.

Installing ecoDMS 18.09 on Debian 10.5

I recently installed ecoDMS 18.09 on a Debian 10.5 VM, and it was a pleasant experience overall. However, the following things had to be done differently compared to the installation manual:

  • Install gnupg via sudo apt-get install gnupg (it seems to be installed out of the box on Ubuntu)
  • Do not install a Java environment manually; let the normal package dependency management handle it

The system is currently in light use (still in testing) for my newly founded company and runs quite well. The VM is hosted on ESXi 6 running on a Celeron 3900 (yes, two cores), and for a single user with just a few documents stored, the performance is really nice.

So far I intend to stay with the system and will keep you updated.


Windows 10: Restore Bootloader after Linux Test

I recently installed Linux in a dual-boot setup on a test machine (an old Lenovo Thinkpad T430). What proved more difficult than in former times was restoring the original state. Most of the recommendations I found online were less than helpful. In particular, many of them ignored the fact that there are two entirely different approaches out there to handle the boot process: UEFI and legacy BIOS, or GPT and MBR respectively. My machine was using MBR (Master Boot Record), given its age.

What finally solved the issue was the following command:

C:\> bootsect /nt60 c: /mbr

I used a USB stick with the Windows 10 installer, but have since learned that you can reach the “repair” console more easily if your Windows 10 still starts. All you need to do is perform the following steps:

  • Log off.
  • When the login screen appears, press a key so that the password field shows up. This will also enable the “power” button in the lower right corner of the screen.
  • Press and hold Shift.
  • Left-click the power button and choose “Restart”.
  • Let go of the Shift key and the repair menu appears.
  • Go to Troubleshoot > Advanced options > Command Prompt.

“You are not done, when it works”

The title is a quote from Robert C. Martin during a conference talk on clean code. For a long time now (I am getting old) I have followed this approach, and it has produced remarkable results. For a bit more than four years I was responsible for a corporate integration platform. I had built and run it following DevOps principles, so that everything was fully automated. No manual maintenance work was necessary at all: log files were archived, then deleted after their retention period, and so on. This freed up a lot of time. Time which I used to keep my codebase clean.

Particular focus was put on the structure. And that was a really tough job. Much tougher than I anticipated, to be quite frank. But it paid off. Instead of writing a lot of conventional documentation, the well-thought-out structure allowed me to find stuff in a completely intuitive way. Because, believe me, six months after you have written something, you do not remember much about it. But if you need a certain kind of functionality and not only find the corresponding module immediately, but also make a correct guess from a first look at how the parameters are meant to be used, that is truly rewarding.

Having experienced this first hand has greatly influenced my work since then. And I am more convinced than ever that this is not beautifying for the sake of it. Instead, it is a mandatory requirement and a prerequisite for business agility.

Update on LaTeX Setup

This is a quick follow-up to my recent post on my LaTeX setup for 2020. I wanted to let you know that I have recently switched from Emacs to VS Code with the LaTeX Workshop extension as my primary LaTeX editor. I truly cannot remember what made me look in this direction, but I am happy that I did.

The main reason for switching was that file management is so much easier with VS Code. My current project has a number of files spread over many sub-directories, and the way LaTeX Workshop handles things makes me much more productive. I somewhat miss AUCTeX, but overall I will certainly not go back.

Unique IDs in Programming

Most people have probably come across what is usually called a UUID (universally unique identifier) while using software. UUIDs are typically a cryptic combination of alphanumeric characters and do not make any sense to the human brain. But why are they such a critical aspect of most computer programs?

Their purpose is pretty obvious: to be able to identify a set of data (money transfer, customer, product, order, etc.) on a low technical level. The human brain, for most scenarios, does not need such an artificial construct but works nicely with the underlying “real” data. We identify a customer by looking at first name and surname. And if we have multiple customers named “Mike Smith”, we add the date of birth. If that is still not enough, there is the current address. And so on.

For the purpose of this discussion, a customer’s UUID is not to be confused with the customer number but exists in addition to it. This may seem like overhead, but think about what happens when an organization buys a competitor. With a bit of “luck” there will be overlap between the customer numbers. Without a UUID already in place, all sorts of ugly workarounds need to be implemented under great time pressure to merge the customer lists. If that happens, there is considerable risk of something going wrong, resulting in the loss of customers.
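
To make this concrete, here is a tiny Python sketch; the customer numbers and names are made up for illustration. Two acquired customer lists keyed by customer number clash on merge, while UUID keys do not:

import uuid

# Hypothetical example data: the same customer number was issued
# independently by both organizations for different people.
ours   = {"C-1001": "Mike Smith"}
theirs = {"C-1001": "Jane Doe"}
print(set(ours) & set(theirs))      # {'C-1001'} -- clash on merge

# With a UUID assigned at creation time, the merged keys stay distinct.
ours_u   = {str(uuid.uuid4()): "Mike Smith"}
theirs_u = {str(uuid.uuid4()): "Jane Doe"}
print(set(ours_u) & set(theirs_u))  # set() -- no clash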

It would of course be possible to replicate the human brain’s approach of looking at data in their individual context. But that would make things unnecessarily complex, plus require a different approach for each type of data. So we help ourselves with a technical ID that is guaranteed to be unique (see the sketch after the list below). Generating such an ID is surprisingly complex, once you realize what the algorithm needs to accomplish:

  • Be fast: There are many scenarios where you need to create tens of thousands of UUIDs per second (e.g. high-frequency trading, payments processing, telco billing, etc.). But “randomness” usually requires the use of cryptographic functions, which are notoriously expensive operations. In recent years this has become less of a concern, though, since many CPUs now offer dedicated support here.
  • Be unique across all computers that are involved with the application: While it is probably rarely a problem if two identical IDs are issued for two completely disparate organizations (ignoring scenarios like EDI), there are many cases where it is still highly relevant. Most critical applications run on more than one computer for high-availability and load-balancing purposes. So obviously there must never be a case where IDs clash. Also, it would likely cause problems if the same ID existed not only on the production system but also on a development or test system.
  • Be relatively short: Many UUIDs are between 30 and 40 characters long, which is really not long, given that it is guaranteed there will never be a clash.
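
As promised above, here is a minimal sketch using Python’s standard uuid module (just one of many possible implementations): uuid4() produces a random 128-bit ID whose canonical string form is 36 characters long, and clashes are so improbable that they can be ignored in practice.

import uuid

# Generate a batch of random (version 4) UUIDs and check for clashes.
ids = {uuid.uuid4() for _ in range(100_000)}

print(next(iter(ids)))  # e.g. 1b4e28ba-2fa1-4d88-9f5c-08d0a3c5e9a7
print(len(ids))         # 100000 -- no duplicates in practice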

Let’s now look into the use of UUIDs. Apart from pretty obvious things like the aforementioned customer, they are used in very many systems for internal purposes. A good example are relational database management systems, where each record (aka row) has its own ID. The same is true for messaging systems (think JMS or MQTT).

The two core use-cases I see for those internal IDs are fault diagnosis and linking data. In today’s world most systems are highly distributed, even without the use of a microservices architecture (which increases the level of distribution by orders of magnitude). To track a business transaction across multiple systems, you need to be able to identify its sub-transactions, and the means for this are UUIDs. Ideally you have an operations console that automatically connects things between systems. In reality, though, there is often a lot of manual work to be done.
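
A hedged sketch of the underlying idea (the service names and payload fields are invented for illustration): mint one correlation ID at the system boundary and pass it along with every sub-transaction, so log entries from different systems can later be joined on it.

import logging
import uuid

logging.basicConfig(format="%(message)s", level=logging.INFO)

def charge_payment(payload: dict) -> None:
    # A downstream system logs the same correlation ID it received.
    logging.info("payment-service %s charged %s",
                 payload["correlation_id"], payload["amount"])

def handle_order(payload: dict) -> None:
    # Reuse an incoming correlation ID or mint one at the boundary.
    correlation_id = payload.get("correlation_id") or str(uuid.uuid4())
    logging.info("order-service   %s accepted", correlation_id)
    charge_payment({"correlation_id": correlation_id, "amount": 42})

handle_order({"amount": 42})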

Another example of linking data together is master data management (MDM). Many organizations have done something in that area and most have failed. The core reason in my view is the approach. It is a business problem that is very closely linked with many technical challenges. And most organizations are bad at dealing with such a combination. There are more aspects, but I will cover those in a separate article.

Back to UUIDs. It might be tempting to leverage internal IDs (e.g. from a database system) for your application. But be warned, this is a very dangerous road. Those IDs are guaranteed to be unique only in the context for which they are created, but not outside it. Even more critical is using just a part of such an ID because the rest seems to be a fixed value. I have seen a business-critical end-user application where part of the database’s row ID (Oracle Database v7) was used. Later the database was migrated to a higher version (Oracle Database v8) where the ID algorithm had been changed. So the sub-string of the row ID was suddenly not unique anymore. The end-user application did not expect duplicates and crashed immediately after starting.
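
The effect is easy to reproduce without any Oracle involved. This small Python experiment (the prefix length of 8 is arbitrary) keeps only the first 8 hex characters of each UUID, which shrinks the ID to 32 bits and lets the birthday paradox bite:

import uuid

# 200,000 truncated IDs drawn from a 32-bit space: expect duplicates.
prefixes = [uuid.uuid4().hex[:8] for _ in range(200_000)]
print(len(prefixes) - len(set(prefixes)))  # almost always greater than 0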

While we are on the subject of databases, there are people who like to use sequences as UUIDs. Sequences are numbers which the database auto-increments, and they seem a convenient and efficient way to obtain a unique ID. But there are various problems with that approach. Firstly, the ID is only unique within a single database instance. This typically creates all sorts of problems for testing the code, and also when moving it to production. Secondly, this kind of feature, while available in many database systems, is a proprietary extension of SQL. So you create unnecessary problems for yourself when using different systems. Many organizations have standardized on one database system for production use. Having to use this also for DEV, CI, SIT, UAT, etc. may make things more difficult than necessary. More importantly, though, it increases the vendor lock-in with all the associated issues.
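
To illustrate the first point, here is a minimal sketch with SQLite standing in for the database (the table and column names are made up): two independent instances happily hand out the same auto-incremented values, so the IDs clash as soon as the data sets are merged.

import sqlite3

def new_customer(db: sqlite3.Connection, name: str) -> int:
    cur = db.execute("INSERT INTO customer(name) VALUES (?)", (name,))
    return cur.lastrowid

# Two separate database instances, e.g. production and test.
dbs = [sqlite3.connect(":memory:") for _ in range(2)]
for db in dbs:
    db.execute(
        "CREATE TABLE customer(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

print(new_customer(dbs[0], "Mike Smith"))  # 1
print(new_customer(dbs[1], "Jane Doe"))    # 1 -- the same "unique" ID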

Let me finish with timestamps. They are the original sin of UUIDs. Really. People like them because they are human-readable, allow easy sorting of transactions into the order of processing, and just seem to be THE obvious way to go. But they are not unique! If your development machine is slow enough relative to the transaction’s processing time, you may indeed not see issues. But that is only because at least one millisecond (you don’t use a resolution of seconds, do you?) goes by between transactions. A production machine, however, will likely be much faster. And what if multiple machines are working in parallel?
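
You can see the problem on any modern machine with a few lines of Python (the iteration count is arbitrary): millisecond timestamps used as IDs collide as soon as two transactions fall into the same millisecond.

import time

# Simulate a burst of transactions, each "identified" by a timestamp.
ids = [int(time.time() * 1000) for _ in range(10_000)]
print(len(ids) - len(set(ids)))  # thousands of duplicates on fast hardware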

In one case I saw, there was considerable data loss because someone had been clever enough to use a timestamp with a resolution of only seconds as the filename for writing PDFs into a directory. From there, an archiving solution picked them up for storage to fulfill a legal requirement. This guy’s notebook had been slow enough (it was in the early 2000s) that all files were several seconds “apart”. But the production machine was a beefy server, and it took several weeks until someone realized what had happened. Tens of thousands of documents were lost forever.

I hope this quick overview provided some value to you and will help you in the next discussion on why you really need a proper UUID.

My 2020 Setup for LaTeX

Here is a short write-up of my current LaTeX setup. Since I sometimes need to process documents on Linux systems as well (usually in a CI/CD context), the natural choice for me these days is TeX Live on Windows: it is the same distribution on both platforms.

My preferred editor is probably less common, especially on Windows: Emacs. I have been using it for more than 20 years, and with the right add-ons (AUCTeX and RefTeX) it is still the best LaTeX editor for me. Would I recommend it to someone today who does not already know how to use Emacs? Probably not, given the learning curve. But in the late 1990s there was no real alternative on Linux. And LaTeX on Linux it had to be, for creating high-quality graphics with Xfig and replacing text in the EPS files with full-blown LaTeX code for amazing formulas etc.

But let’s go back to the present time. Here is what I did:

  • Download Windows installer for TeX Live
  • Start installer with administrator rights (right-click) and accept all default settings, then wait a really long time (more than three hours on an old Lenovo Thinkpad T520)
  • Install Emacs. I still have EmacsW32 lying around (you need to fix some security settings), but it is no longer available for download. If you are looking for an alternative, perhaps you will find something here.
  • Install Sumatra PDF. The critical feature for me is that it does not hold a write-lock on the file. So when the output PDF is updated in the background by latexmk, it does not cause any problems. I did the installation as administrator and changed the location to C:\Program Files\SumatraPDF because I personally prefer it that way.

That’s all. Enjoy writing 🙂