Category Archives: Technical Stuff

My Journey with Data and Backups

I started with PCs in 1990 on a 286 with a 42 MB hard disk (Seagate ST251-1), which about one year later had issues with faulty sectors. This was a couple of years before (consumer) hard disks started to internally re-map bad sectors. And it was also the first and last time that I lost data. Ever since, I have been paranoid about backups (and more importantly restores).

I started with simple floppy disks for source code, spreadsheets, etc. and went for a DAT streamer in 1996 (HP C1536). It only lasted 3 years, and after that abysmal experience I switched to a QIC streamer (Tandberg SLR-24), which lasted until about 2008. Well, that’s when I took it out of service. It was in perfect working condition, but 12 GB capacity per cartridge started to be an issue. Since then I have used hard disks in various ways, because streamers have become a prohibitive upfront investment for me. I would still prefer streamers, but that is a different story.

All the people I know (incl. at work) initially think of my efforts as overkill. Until they lose 10 years of digital pictures, esp. when their children are involved. That is when they become willing to invest time and money. The same goes for many companies, unfortunately. A friend told me about a malware attack on his employer about a year ago. All of a sudden there was budget for keeping backups longer than just 30 days, a properly segmented network, and other things their IT department had wanted for more than a decade. Everybody (incl. me – see above) has to learn this the hard way, I guess.

A side note on NAS gear that is typically more in the consumer space: I am currently in the process of switching to a new FreeNAS box. There were long deliberations as to whether I should go for Synology instead. The core reason why I stayed with FreeNAS is flexibility. From a usability and ease-of-use perspective I got the impression that Synology is (far?) superior. But that comes at the price of limitations. A mass market product needs to keep support tickets under control, and the only way to do that is to constrain people’s options. And I wanted to stay flexible, even if that meant spending more money (the hardware specs are considerably higher than those of the Synology model in question) and more time on setting things up.

Finally, I am not going for TrueNAS 12 right now but will start with FreeNAS 11.3 U5. Yes, I have seen and read many highly positive comments about v12 and how stable it is. But IMHO nobody can really be sure that no hidden errors exist until a release has been in the field for at least a couple of weeks.

How much true innovation is there in IT?

One thing I hear quite often from people, when they learn that I work in IT, is that in their view the speed of change is so high. And how can I keep up with all these completely new things popping up all the time …

Well, not so much is really fundamentally new. Most of the changes we see are incremental (or evolutionary to use a different term). I was aware of this for hardware and various aspects of software. But for programming languages the extent of old ideas coming up as the “new hot stuff” surprised me. Robert C. Martin has made a video about this (see below). Its style is not really my cup of tea, but it has a lot of interesting information.

FreeNAS 11.1 U7: Install Syncthing in Jail

As part of moving to a new FreeNAS box, I want to replicate data from the old (nas2, running FreeNAS 11.1 U7) to the new (nas3, running FreeNAS 11.3 U5) machine. During the initial phase nas2 will still be my primary storage location. Think of this as something like a burn-in to ensure that there are no dead-on-arrival components in the new box, esp. hard disks of course. This is planned to last for at least two months and I want all my data synchronized constantly.

The solution I settled on is Syncthing, and I want to run it in a FreeNAS jail on both systems. On the new system the installation was smooth, but on nas2 it was not even possible to create a jail. It turned out to be caused by a setting that had not been migrated from the original FreeNAS 9.3 installation, the version nas2 had started out with.

All that had to be done was fix the “Collection URL” setting in the jails configuration as shown below.

  1. Go to “Jails / Configuration”
  2. Switch to “Advanced Mode”
  3. Make sure that the URL contains “11.1” (was “9.3” before on my system)

The next step was to install Syncthing with pkg. The problem with FreeNAS 11.1 is that the underlying FreeBSD is no longer maintained (EOL) and therefore no package repository exists for this version. The workaround is to forcibly switch to an existing repository, even if it does not match the FreeBSD version. I am ok with that, as long as only applications and not OS tools are installed (you should think carefully about whether this is also ok for you!). To do this, issue the following command:

# pkg bootstrap -f

You will get a warning about different OS versions and need to confirm that you want to continue. Once this is complete, install Syncthing with

pkg install syncthing

You get the same warning as just before and need to confirm the installation.

[..]
[syncthing] [1/1] Fetching syncthing-1.10.0.txz: 100%   16 MiB   1.0MB/s    00:16
Checking integrity... done (0 conflicting)                                      
[syncthing] [1/1] Installing syncthing-1.10.0...                                
===> Creating groups.                                                           
Creating group 'syncthing' with gid '983'.                                      
===> Creating users                                                             
Creating user 'syncthing' with uid '983'.                                       
===> Creating homedir(s)                                                        
[syncthing] [1/1] Extracting syncthing-1.10.0: 100%                             
root@syncthing:/ # 
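
With the package installed, the service can be enabled and started like any other FreeBSD service inside the jail. The variable and service names below are what I would expect from the port’s rc script; double-check /usr/local/etc/rc.d/syncthing in your jail, as they may differ between package versions.

sysrc syncthing_enable=YES     # start Syncthing automatically when the jail boots
service syncthing start
service syncthing status       # should report the daemon as running

By default the Syncthing web GUI only listens on 127.0.0.1:8384, so you will need to change the GUI listen address (or tunnel into the jail) before you can configure it from another machine.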

From here, you can just continue with the normal process of setting things up. A good starting point might be the following YouTube video.

Microsoft Defender ATP on Linux requires systemd

There seems to be a documentation issue with Microsoft Defender ATP for Linux. The system requirements, as far as I can see, do not mention that systemd is needed. I found this out on a Debian 9 (Stretch) system that was configured with SysV init. The post-install script of mdatp performs some tests that use the systemctl command, which is of course missing without systemd.
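
If you want to check up front whether a machine is affected, one quick way (plain Linux tooling, nothing specific to mdatp) is to look at what runs as PID 1:

ps -p 1 -o comm=     # prints the name of the init process

On a systemd machine this prints “systemd”; on a SysV init setup like the Debian 9 box above it prints “init”, and the mdatp post-install step will fail because systemctl is not available.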

Update: Microsoft has meanwhile confirmed that systemd is indeed required.

Chef Infra Server Moving to Cloud

As part of a blog post about the new v14 of Chef Infra Server, it was announced that from now on existing functionality will be deprecated in favor of the cloud version. It will be interesting to see how this works out. Personally, I have never been a friend of forcing customers off an existing product. It is a dangerous move that bears the risk of customers switching vendors entirely. Especially so, if it comes with a major architectural shift like the one from on-premise to cloud.

I have been a happy user of Chef Server for about five years now, although only for a very small number of machines (single digit). The decision for Chef had been made at a time when Ansible was still in its early stages. But with this latest development I will need to move away from Chef. It is a pity, because I really like the tool and have built various custom extensions for it.

Installing ecoDMS 18.09 on Debian 10.5

I had recently installed ecoDMS 18.09 on a Debian 10.5 VM and it was a pleasant experience overall. However, the following things had to be done differently compared to the installation manual:

  • Install gnupg via sudo apt-get install gnupg (this seems to be installed out-of-the-box on Ubuntu); see the commands below
  • Do not install any Java environment but let this be handled by the normal dependency management
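
Regarding the first point, this is a minimal sketch of the gnupg step on a fresh Debian 10.5 system; the ecoDMS packages themselves are then installed exactly as the manual describes, and apt pulls in a suitable Java runtime through its normal dependency resolution.

sudo apt-get update
sudo apt-get install gnupg     # needed for handling the repository key; preinstalled on Ubuntu, not on a minimal Debian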

The system is currently in light use (still in testing) for my newly founded company and runs quite well. The VM is hosted on ESXi 6 that runs on a Celeron 3900 (yes, two cores) and for a single user with just a few documents stored the performance is really nice.

I so far intend to stay with that system and will keep you updated.


Windows 10: Restore Bootloader after Linux Test

I had recently installed Linux in a dual-boot setup on a test machine (an old Lenovo Thinkpad T430). What proved more difficult than in former times was restoring the original state. Most of the recommendations I found online were less than helpful. In particular, many of them ignored the fact that there are two entirely different approaches to handling the boot process: UEFI with GPT and legacy BIOS with MBR. My machine was using MBR (Master Boot Record), given its age.
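
If you are not sure which scheme your machine uses, diskpart can tell you (this is standard Windows tooling and has nothing to do with the repair itself). An asterisk in the “Gpt” column of the output means the disk uses GPT; no asterisk means MBR:

C:\> diskpart
DISKPART> list disk
DISKPART> exit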

What finally solved the issue was the following command:

C:\> bootsect /nt60 c: /mbr

I used a USB stick with the Windows 10 installer, but have since learned that you can get to the “repair” console more easily if your Windows 10 still starts. All you need to do is perform the following steps:

  • Log off.
  • When the login screen appears, press a key so that the password field shows up. This will also enable the “power” button in the lower right corner of the screen.
  • Press and hold the Shift key.
  • Left-click the power button and choose “Restart”.
  • Let go of the shift key and the repair menu appears.
  • Go to Troubleshoot > Advanced options > Command Prompt.

“You are not done, when it works”

The title is a quote from Robert C. Martin during a conference talk on clean code. For a long time now (I am getting old) I have followed this approach and it has produced remarkable results. For a bit more than four years I had been responsible for a corporate integration platform. I had built and run it following DevOps principles so that everything was fully automated. No manual maintenance work had been necessary at all: log files were archived, deleted after their retention period, and so on. This freed up a lot of time, time which I used to keep my codebase clean.

Particular focus was put on the structure. And that had been a really tough job, much tougher than I had anticipated, to be quite frank. But it paid off. Instead of writing a lot of conventional documentation, the well thought-out structure allowed me to find stuff in a completely intuitive way. Because, believe me, six months after you have written something, you do not remember much about it. But if you need a certain kind of functionality and not only find the corresponding module immediately, but also make a correct guess from a first look at how the parameters are meant to be used, that is truly rewarding.

Having experienced this first hand has greatly influenced my work since then. And I am more convinced than ever that this is not beautifying for the sake of it. Instead it is a prerequisite for business agility.