Category Archives: Technical Stuff

Installing Pi-hole on top of dnsmasq

Pi-hole is a great and easy addition to security. The automated installation is a nice thing, but it did not work on my system. The core of the problem was that I wanted to install it on a Raspberry Pi that already had dnsmasq running on it. And of course that dnsmasq was configured to provide DNS services to the Pi itself. What then happened was that, as an early step of the installation, the dnsmasq daemon was stopped. The Pi then had no working DNS resolver, so the download that the Pi-hole installer was supposed to perform failed.

After a few tests, the following approach proved to work:

  • Stop dnsmasq via sudo systemctl stop dnsmasq.service
  • Update /etc/resolv.conf to point to the DNS server in my router or something like 1.1.1.1 (dnsmasq sets it to 127.0.0.1 on every start or stop)
  • Start the Pi-hole installation normally (see the sketch below)
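
For reference, here is the whole sequence as shell commands. This is a minimal sketch; 1.1.1.1 stands in for whatever temporary upstream resolver you prefer, and the last line is the official Pi-hole one-step installer:

# Stop the local dnsmasq daemon
sudo systemctl stop dnsmasq.service
# Point the Pi at a temporary resolver, since dnsmasq no longer answers locally
echo "nameserver 1.1.1.1" | sudo tee /etc/resolv.conf
# Run the automated Pi-hole installation
curl -sSL https://install.pi-hole.net | bash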

Enjoy!

Displaying Your Terminal Sessions

We are used to terminal sessions being displayed like any other content, i.e. as a video of some kind. That approach comes with two caveats, though: it uses a lot of bandwidth (compared to the actual terminal session) and is often difficult to read, because many people do not bother to magnify the terminal, either during the actual presentation or as part of the post-processing.

A very interesting alternative, especially for blog posts, is asciinema. It works on Linux, macOS and various BSD flavors. What it basically does is open a special shell (Bash-like) that simply records all keystrokes (plus the responses) in a VT-100 compatible format. So you end up with a small text file that can be replayed using a JavaScript snippet.

In addition to using the hosted instance, which supports sharing in multiple ways, you can also run your own server via Docker. For all the details, please go to the website.
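
In day-to-day use it boils down to a few commands. A quick sketch; demo.cast is just an example file name:

# Record a new session into a local file; exit the shell to stop recording
asciinema rec demo.cast
# Replay the recording right in the terminal
asciinema play demo.cast
# Or upload it to the hosted instance for sharing
asciinema upload demo.cast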

Visual Studio Code and Bash on Windows

A while ago I started using Visual Studio Code and it is a great tool for many things. Primary use-cases for me are Chef recipes and SFDX at the moment, for both of which there are extensions in the VS Code marketplace. Also, I like to use Bash (the one that came with Git for Windows) in the terminal.

But at least on Windows (this seems to be different on macOS) some of the standard keyboard shortcuts for Bash are used by VS Code itself. Relevant for me personally were Ctrl-A, Ctrl-E, and Ctrl-K. To make Bash usable for me, I had to “undefine” those shortcuts in VS Code, but only for the terminal.

Here are the necessary additions to keybindings.json:

// Place your key bindings in this file to overwrite the defaults
[
  { "key": "ctrl+a",
    "command": "",
    "when": "terminalFocus" },
  { "key": "ctrl+e",
    "command": "",
    "when": "terminalFocus" },
  { "key": "ctrl+k",
    "command": "",
    "when": "terminalFocus" }
]
Hope that helps!

Linux and Me

Although it is not a focal point of my professional life any more, Linux and I share a long common history. My first encounter was with “S.u.S.E. Linux August 1995” (kernel 1.1.12) in October or November of 1995. Until then I had only played around a bit with MINIX 1.5 on an 80286-based PC. But that had been a bit of a disappointment for me and I never really got into things.

This changed dramatically with Linux. I spent many hours trying to get a setup with X11 (only FVWM2 was available as window manager then) to work. This was not as easy as it is today. There was no, or very little, support from YaST, the setup tool of SuSE. And one had to be careful with the monitor (CRT, of course) configuration, because frequencies that were too high could physically damage the monitor. I never used the system to actually perform any work; it was solely for learning.

This changed with SuSE 4.4 and even more so with SuSE 6.1. I do not remember the exact dates, but SuSE 6.1 was installed in late 1997 when I ran my own small company as a “side-project” to my university studies. But since the company was highly successful, I soon used things like HylaFax on Linux and of course also ran my local mail server (sendmail can be a challenge, especially when Google did not exist yet). There even was a dial-in modem for terminal connections.

What I always disliked about SuSE was that the updates from one version to another never really worked for me. So in about 2002 I finally decided to switch to Debian Woody (3.0). That was again a learning curve, but the APT packaging system was vastly superior to what I had seen on SuSE before and I never had problems with upgrades. The experience gained then is also quite helpful these days for the Raspberry Pi.

Things continued with CentOS, Fedora, and Ubuntu – with the last two powering my company notebook for a short while back in 2008. But the experience was mixed and I never liked Linux as a general-purpose desktop operating system. I had used it extensively for writing LaTeX documents (using Xfig a lot for diagrams) during my university time. But other than that, I never found a proper replacement for too many Windows (or later Mac) applications.

But for servers Linux is definitely my preferred OS, especially because most of the innovations of recent years, like configuration management systems (e.g. Chef, Puppet, Ansible) and containers (e.g. Docker), came to life on Linux.

Ignoring Warnings in Log Files

I know that many people think warnings in log files can simply be ignored. But I am responsible for an environment where we run applications that are sort-of business critical. So I need to take warnings seriously, especially if their wording is not really clear. The consequence is that every unnecessary warning causes operational problems. Because who can tell me, kind-of “written in blood”, that I can absolutely always and forever ignore this warning? Only in that case could I consider adding an exception to the log monitoring system, and of course to the system documentation, the operations manual, etc. So for me, and from my consulting past I know that many customers think the same, this is not just a small nuisance but a real issue.

On the other hand I have had many discussions with people who told me that I could just ignore this or that entry. In many cases it turned out after some discussion that the log level was actually chosen badly and INFO would have been more appropriate. In that respect the semantics of the commonly used log levels deserve a closer look. Here are two good links (link 1, link 2) for definitions. When I first read them, my initial thought was that I might have overreached with the first paragraph of this post. But looking at the example from link 1 about WARN a bit closer, I think my concerns are still valid.

So what can be done? Reality is that you rarely have the ability to get a log statement changed. So you do need a scalable approach for dealing with log messages that you consciously choose to ignore. It primarily involves two things: Firstly, you need to document why the decision was made that a given log message is not critical. Secondly, there should be an automated link with your log file monitoring system that configures a corresponding exception in it (see the sketch below). Depending on your business this whole area might also be regulated, so the legal side may very well play a role as well. But that is outside the scope of this post.
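
To make this a bit more concrete, here is a minimal sketch of such an exception mechanism, assuming a simple grep-based filter; application.log and ignored-warnings.txt are hypothetical names:

#!/bin/bash
# Keep all WARN lines except those matched by a documented white-list.
# Each pattern in ignored-warnings.txt should carry a comment right next to
# it, explaining why the warning is safe to ignore and who decided that.
# Comment and blank lines are stripped before the file is used as patterns.
grep 'WARN' application.log |
  grep -v -E -f <(grep -v -e '^#' -e '^$' ignored-warnings.txt)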

I know this post is not really actionable, but I still wanted to share my thoughts.

How to Implement Test-Driven Development

Test-Driven Development (TDD) is something I have long had difficulties with. Not because I consider it a bad concept, but because I found it very difficult to start doing. In hindsight it appears that the advice given in the respective books and online articles was just not suitable for me. So here is the approach that finally worked.

It boils down to deviating from the pure doctrine. Instead of writing a test before starting on a new piece of code, I start with the actual code right away. Yes, that violates the core principle, although only for a while. But I have found that in most cases my understanding of the problem is still somewhat vague when I start working on it. So for my brain it is better if I do not have to split its capacity between solving the actual problem and thinking about how to devise a proper test and what all that means for the structure of the future code.

Once the initial version of the working code is there and manually validated, I do add the test. From then on I am in a position to refactor the code without the risk of breaking something (see the sketch below). And of course this refactoring is needed, because the first version of any code is never really good. While you could write “better” initial code, this would require spending more time upfront than you otherwise need for refactoring later. And it also ignores the fact that you only really understand the problem when you have finished implementing the solution.
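
As an illustration, here is what such an after-the-fact test could look like as a small shell script; slugify.sh and its slugify() function are hypothetical stand-ins for the code under test:

#!/bin/bash
# test_slugify.sh: a regression test, added only after the first working
# version of slugify() had been validated manually (hypothetical example)
source ./slugify.sh  # provides slugify(), e.g. slugify "Hello World" -> hello-world

fail=0
check() {  # check <input> <expected>
  local got
  got=$(slugify "$1")
  if [ "$got" != "$2" ]; then
    echo "FAIL: slugify '$1' gave '$got', expected '$2'"
    fail=1
  fi
}

check "Hello World" "hello-world"
check "Already-a-slug" "already-a-slug"

# With these checks in place, slugify() can be refactored without fear
exit $fail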

What I later realized was that my approach also helped me to write more testable code. But instead of consciously having to work on it, this sneaked in as a by-product of my modified way of doing TDD. For me this is a more natural way of learning and the results are typically better than following some formal approach.

Structuring a VCS Repository

My main programming hobby project will soon celebrate its tenth birthday, so I thought a few notes on how I structure my VCS repository might be of interest. The VCS I have been using since the beginning is Subversion. (When I started, Git had already been released, but was really not that popular yet.) So while some details of this article will be specific to Subversion, the general concepts should be applicable elsewhere as well.

When it comes to the structure of a repository, Subversion does not impose anything from a technical point of view. All it sees is a kind of file system, with all the pros and cons that come with the simplicity of this approach. It makes it easy for people to start using it, which is really good. But it also does not offer help for more advanced use-cases, so people need to find a way to map certain requirements onto that file system concept.

As a result, a convention emerged many years ago and has been stable ever since. It says that at the top level of the project there should be only the following folders (a command sketch for creating them follows after the list):

  • trunk: Home of the latest version (sometimes called the HEAD revision)
  • branches: Development sidelines where work happens in isolation from trunk
  • tags: Snapshots that give meaningful names to a certain revision
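
Creating that layout is a single command; a sketch, with $REPO_URL as a placeholder for your repository URL:

# Create the three conventional top-level folders in one revision
svn mkdir "$REPO_URL/trunk" "$REPO_URL/branches" "$REPO_URL/tags" \
    -m "Create standard repository layout"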

You will find plenty of additional information on the subject when searching the Internet. I can also recommend the book “Pragmatic Version Control: Using Subversion”, although it seems to be out of print now.

With these general points out of the way, let me describe how I work on my project. There are only a few core rules, and despite their simplicity they let me handle all situations.

  • The most important aspect of the structure of my SVN repository is that all active development on the coming version happens on trunk. See this article for all the important details on why you really want to follow that approach in almost all cases.
  • Once a new version is about to be released, I need a place where bug-fixes can be developed. So I create a release branch (e.g. /branches/releases/v1.3) with major and minor version number but without the patch version (I use semantic versioning). From this release branch I then cut the release (v1.3.0 in this case) by pointing the release job of my CI server to the release branch. (The corresponding svn commands are sketched after this list.)
  • Once the release is done, I create a tag that also includes the patch version. In this example the tag is created by copying /branches/releases/v1.3 to /tags/releases/v1.3.0.
  • Now I return to working on the next release (v1.4) by switching back to trunk.
  • Bug fixing happens primarily on trunk, with fixes being back-ported to released versions. There are cases when this is not practical, of course. The two main reasons are that significant structural changes were made on trunk (you do refactor, don’t you?) or that another change has implicitly removed the bug there already. But those are the exceptions.
  • When a bug-fix is needed on a released version, I temporarily switch my working copy to the release branch and do the respective work there. Unless the bug is critical, I do not release a new version immediately after that, though. So this may repeat a few times before the next maintenance release.
  • The maintenance release is then cut, again, from /branches/releases/v1.3. And after that a new tag is created at /tags/releases/v1.3.1.
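
Here is the release flow from the list above as svn commands; a sketch, using the ^/ syntax for repository-relative URLs:

# Create the release branch when v1.3 is about to be released
svn copy ^/trunk ^/branches/releases/v1.3 -m "Create release branch v1.3"

# After the release job has run, tag the exact released state
svn copy ^/branches/releases/v1.3 ^/tags/releases/v1.3.0 -m "Tag release v1.3.0"

# Switch an existing working copy to the release branch for a bug-fix
svn switch ^/branches/releases/v1.3

# Later, tag the maintenance release from the same branch
svn copy ^/branches/releases/v1.3 ^/tags/releases/v1.3.1 -m "Tag release v1.3.1"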

Those rules have proven to work perfectly and I hope they will continue to do so for the next ten years. I have been quite lucky in that, although for me this is still a hobby project, the result is used by many global companies and organizations in a business-critical context. There are at least 11,000 installations in production that I am aware of, so I cannot be casual about the reliability of the delivery process.