Why Companies Fail on Technical Careers

I cannot count how often I have read in some company’s image brochure that the management and expert career paths are treated as equal. There must be companies on our planet where this is true, but I have yet to see it in practice. Why is that? (Unless mentioned otherwise, this article is about the IT industry, with a strong focus on consulting and software companies.)

In my experience the core reason is a fundamental lack of understanding among the non-technical folks who make the relevant decisions, i.e. management. They have a vague impression of techies that is often dominated by perceived social deficiencies (“They can’t look me in the eye”). While the latter is sometimes true, how can it be a measure of someone’s ability to perform a complex technical task? And perhaps there is actually a good reason for averting my eyes. I cannot speak for others, but in a complicated discussion it helps me focus all my mental capacity on the problem at hand when I look down at the table.

I would speculate that, as in so many other cases, the heart of the problem lies with mid-level management (where the details are determined) rather than the C-level. Taking into account the Peter principle, what does that say about middle management? Yes, this is a strong simplification and it does many people injustice. I have personally met quite a few people on this level whom I consider great at their job, particularly including leadership skills. But just as often I encountered folks who cannot think outside the box and blindly follow rules and processes. And it is exactly this capability to see new ways, make realistic projections about what will (not) work, and lead by example “where no-one has gone before” that is needed here.

A particularly interesting experience for me was an encounter with an HR person, with whom I talked about setting up an initiative to advance technical folks (what is often called a high-potential program). My idea was met with friendly resistance, and the argument was made that, in contrast to management, the technical side is so much more diverse that it would be virtually impossible to have a single program. In fact, this HR person seriously thought that every techie needed his or her completely individual program.

My counter-argument was that this would not be a technical program (learn programming language XYZ) but one for professional and personal development. So we are talking about, for example, how to present a complex technical scenario to upper management to get a budget approved. Or how to engage with a customer and instill confidence that a given product is the right choice to solve their business problem. And all of this while staying authentic and building upon one’s strong technical knowledge.

I do not want to appear to be simply bashing HR here. But the underlying problem was that assumptions had been made about what would be suitable for technical folks without ever having spoken with any of them. Would the same thing happen for management staff?

From a more abstract perspective, it comes down to the fact that many decisions are made based on personal trust and relationships rather than on processes and policies. So what is needed is that the relevant management and technical staff (on every hierarchical level) talk to each other in a spirit of equality. And here both sides fail miserably far too often.

Installing Pi-hole on top of dnsmasq

Pi-hole is a great and easy addition to security. The automated installation is a nice thing, but it did not work on my system. The core of the problem was that I wanted to install it on a Raspberry Pi that already had dnsmasq running on it, and of course that dnsmasq was configured to provide DNS services to the Pi itself. As an early step of the installation, the dnsmasq daemon was stopped, which left the Pi without a working DNS resolver. Logically, the Pi-hole download that was supposed to take place did not work.

After a few tests, the following approach proved to work (a shell sketch follows the list):

  • Stop dnsmasq via sudo systemctl stop dnsmasq.service
  • Update /etc/resolv.conf to point to the DNS server in my router, or to something like 1.1.1.1 (dnsmasq sets it to 127.0.0.1 on every start or stop)
  • Start the Pi-hole installation normally
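For reference, here is a minimal shell sketch of those steps. It assumes a Raspbian system where dnsmasq currently serves DNS for the Pi itself; the installer URL is the official one from the Pi-hole documentation.

  # Stop the local dnsmasq so the Pi-hole installer can take over port 53
  sudo systemctl stop dnsmasq.service

  # Point the resolver at an external DNS server for the duration of the
  # installation (dnsmasq normally rewrites this to 127.0.0.1)
  echo "nameserver 1.1.1.1" | sudo tee /etc/resolv.conf

  # Run the standard Pi-hole installer
  curl -sSL https://install.pi-hole.net | bash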

Enjoy!

Displaying Your Terminal Sessions

We are used to terminal sessions being displayed like any other content, i.e. as a video of some kind. That approach comes with two caveats, though: it uses a lot of bandwidth (compared to the actual terminal session), and it is often difficult to read, because many people do not bother to magnify the terminal, either during the actual presentation or as part of the post-processing.

A very interesting alternative, especially for blog posts, is asciinema. It works on Linux, macOS and various BSD flavors. What it basically does is open a special shell (Bash-like) that simply records all keystrokes (plus the responses) in a VT-100 compatible format. So you end up with a small text file that can be replayed using a JavaScript snippet.
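Basic usage is quite simple. Here is a minimal sketch; the file name demo.cast is just an example, and the install command assumes a Debian-based system (packages exist for other platforms as well).

  # Install the recorder
  sudo apt install asciinema

  # Start a recording; everything typed in the spawned shell is captured
  asciinema rec demo.cast

  # ... work in the shell as usual, then exit (or Ctrl-D) to stop ...

  # Replay the recording right in the terminal
  asciinema play demo.cast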

In addition to using the hosted instance, which supports sharing in multiple ways, you can also run your own server via Docker. For all the details, please go to the website.

You are not faster with an 80% solution

When the schedule is tight for a software development project, one of the first things non-technical people contemplate is reducing quality and/or scope and going for an 80% solution. This can work, but only under particular circumstances.

A recent discussion I had with some folks was about such a situation. A very tight deadline had been issued by senior management, and multiple departments had to collaborate on a solution that spanned multiple parts of the organization and their core IT systems. Those systems (think of something like ERP and CRM) have a multitude of connections, grown over many years without governance, but none for the data that was now required.

The people were all quite supportive and a constructive discussion took off. Not surprisingly, finding a common language took some time, especially since only after a while did we realize that in several cases the same term had different meanings. In the end there was a particular detail for which no obvious solution was available.

I was responsible for the technical implementation on one of the two systems involved. My counterpart on the business side and I discussed various options, and eventually he came up with the idea to save time by just going for the 80% solution. We talked about a number of aspects, and it soon became clear to me that this would actually require more time than simply “doing it properly”.

This sounds, at the very least, counter-intuitive to most people. So let me illustrate my point with a non-technical equivalent. Many years ago I had to prepare a daily list of press clippings for my then-boss. That basically meant going through a long, automatically created list and removing everything he was not interested in. This was quite time-consuming, and he was genuinely surprised when he found out.

His statement then was: “Why do you need so much time? It is only 20 of the 200 articles that are relevant.” That was of course correct. But what he had not considered was that I still had to go through the list of all 200 articles and check some in more detail than just looking at the headline. So it did not really make a difference whether I cut the list down to 100 or 60 or 20 articles; the time required stayed more or less the same. He agreed with the point and quite soon decided there was better use for my time.

It is exactly the same with taking shortcuts during a technical implementation. The easy part is deciding that you want to cut 20% off the next release. But things get tricky when it comes to deciding which 20% to skip without any undesired side-effects. To be able to do that, you need to understand in depth what the consequences would be. So quite often you spend more time trying to understand the potential ramifications of various scenarios than it would take to simply do it properly right from the start.

And that is only the beginning of the problem. You need to document the results of your analysis really carefully if you want to be sure that nothing is missed when the first code change is required. Even a seemingly minor bug fix has an increased potential to break something if the current overall solution is built on a set of assumptions. And adding enhancements or new features will be even more “fun”, because people have to read through all the documentation and also understand it. Together this is a time-consuming task with considerable risk of still overlooking something (or hitting an edge case that was not documented).

But in reality this documentation does not exist anyway, because the whole point was to save time in the first place, and writing detailed documentation is not exactly conducive to that. So people throw in bits and pieces, and you end up with a document that is typically even worse than most project documentation out there. Whoever then has the pleasure of working on the code without having all the details in mind is really screwed. That includes you, by the way, because you will have forgotten most of it after only a few weeks.

The solution is usually a re-implementation, because that is faster than fixing the broken code. Of course, the re-implementation often needs additional logic to handle data migration for transactions that were completed with the 80% solution and now ruin your data quality. But hang on, that might just be the 20% effort you can save right now ….

Visual Studio Code and Bash on Windows

A while ago I started using Visual Studio Code, and it is a great tool for many things. My primary use-cases at the moment are Chef recipes and SFDX, for both of which there are extensions in the VS Code marketplace. Also, I like to use Bash (the one that comes with Git for Windows) in the integrated terminal.

But at least on Windows (this seems to be different on macOS), some of the standard keyboard shortcuts for Bash are used by VS Code itself. Relevant for me personally were Ctrl-A, Ctrl-E, and Ctrl-K. To make Bash usable for me, I had to “undefine” those shortcuts in VS Code, but only for the terminal.

Here are the necessary additions to keybindings.json:

// Place your key bindings in this file to overwrite the defaults
[
  { "key": "ctrl+a", "command": "", "when": "terminalFocus" },
  { "key": "ctrl+e", "command": "", "when": "terminalFocus" },
  { "key": "ctrl+k", "command": "", "when": "terminalFocus" }
]
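In case you have not edited this file before: in current versions of VS Code it can be opened via the Command Palette (Ctrl-Shift-P) and “Preferences: Open Keyboard Shortcuts (JSON)”. The "when": "terminalFocus" clause limits the override to the integrated terminal, so the three shortcuts keep their normal VS Code functions everywhere else.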
Hope that helps!

Linux and Me

Although it is not a focal point of my professional life any more, Linux and I share a long common history. My first encounter was with “S.u.S.E. Linux August 1995” (kernel 1.1.12) in October or November of 1995. Until then I had only played around a bit with MINIX 1.5 on an 80286-based PC. But that had been a bit of a disappointment for me, and I never really got into things.

This changed dramatically with Linux. I spent many hours trying to get a setup with X11 (only FVWM2 was available as a window manager then) to work. This was not as easy as it is today: there was no, or very little, support from YaST, the setup tool of SuSE. And one had to be careful with the monitor (CRT, of course) configuration, because frequencies set too high could physically damage the monitor. I never used the system to actually perform any work; it was solely for learning.

This changed with SuSE 4.4 and even more so with SuSE 6.1. I do not remember the exact dates, but SuSE 6.1 was installed in late 1997, when I ran my own small company as a “side-project” to my university studies. And since the company was highly successful, I soon used things like HylaFax on Linux and of course also ran my local mail server (sendmail can be a challenge, especially when Google did not exist yet). There was even a dial-in modem for terminal connections.

What I always disliked about SuSE was that the updates from one version to another never really worked for me. So in about 2002 I finally decided to switch to Debian Woody (3.0). That was again a learning curve, but the APT packaging system was vastly superior to what I had seen on SuSE before, and I never had problems with upgrades. The experience gained then is also quite helpful these days with the Raspberry Pi.

Things continued with CentOS, Fedora, and Ubuntu – the last two powering my company notebook for a short while back in 2008. But the experience was mixed, and I never liked Linux as a general-purpose desktop operating system. I had used it extensively for writing LaTeX documents (using Xfig a lot for diagrams) during my university time. But other than that, I never found a proper replacement for too many Windows (or later Mac) applications.

But for servers, Linux is definitely my preferred OS. Especially so because most of the innovations of recent years, like configuration management systems (e.g. Chef, Puppet, Ansible) and containers (e.g. Docker), came to life on Linux.

Ignoring Warnings in Log Files

I know that many people are inclined to ignore warnings in log files. But I am responsible for an environment where we run applications that are sort-of business-critical, so I need to take warnings seriously, especially if their wording is not really clear. The consequence is that any unnecessary warning causes operational problems. Because who can guarantee me, kind of “written in blood”, that I can absolutely always and forever ignore this warning? Only in that case could I consider adding an exception to the log monitoring system, and of course to the system documentation, the operations manual, etc. So for me, and from my consulting past I know many customers think the same, this is not just a small nuisance but a real issue.

On the other hand, I have had many discussions with people who told me that I could just ignore this or that entry. In many cases it turned out after some discussion that the log level had actually been chosen badly and INFO would have been more appropriate. In that respect, the semantics of the commonly used log levels deserve a closer look. Here are two good links (link 1, link 2) with definitions. When I first read them, my initial thought was that I might have overreached with the first paragraph of this post. But looking more closely at the example about WARN from link 1, I think my concerns are still valid.

So what can be done? The reality is that you rarely have the ability to get a log statement changed. So you do need a scalable approach for dealing with log messages that you consciously choose to ignore. It involves primarily two things: firstly, you need documentation of why the decision was made that a given log message is not critical. Secondly, there should be an automated link with your log file monitoring system that configures a corresponding exception in it (a small sketch follows below). Depending on your business, this whole area might also be regulated, so the legal side may very well play a role as well. But that is outside the scope of this post.
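To make this a bit more concrete, here is a minimal sketch of the idea in shell terms. The file names, the exception-list format, and the ticket number are purely illustrative assumptions; a real log monitoring system would have its own configuration mechanism for such exceptions.

  # known-warnings.txt: one regular expression per line, each preceded by
  # a comment documenting why the warning is safe to ignore, e.g.
  # "2019-03-12, ticket OPS-123: vendor confirmed this WARN is informational"
  WARN .*connection pool re-initialized.*

  # filter-warnings.sh: strip comment and blank lines from the exception
  # list, then drop all documented exceptions from the WARN lines before
  # they reach the alerting stage
  grep -v -e '^#' -e '^$' known-warnings.txt > "/tmp/patterns.$$"
  grep 'WARN' app.log | grep -v -E -f "/tmp/patterns.$$"
  rm -f "/tmp/patterns.$$"

This way the rationale and the monitoring exception live in the same version-controlled file, so neither can silently drift away from the other.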

I know this post is not really actionable, but I still wanted to share my thoughts.