Martin on something other than performance. Very interesting stuff in my view.
Here is a short write-up of my current LaTeX setup. Since I sometimes need to process documents on Linux systems (usually in a CI/CD context), the natural choice for me these days is TeX Live, which runs on Windows as well.
My preferred editor is probably less common, especially on Windows: Emacs. I have been using it for more than 20 years and with the right add-ons (AUCTeX and RefTeX) it is still the best LaTeX editor for me. Would I recommend it to someone today who does not already know how to use Emacs? Probably not, given the learning curve. But in the late 1990s there was no real alternative on Linux. And it had to be LaTeX on Linux, for creating high-quality graphics with Xfig and replacing text in the EPS files with full-blown LaTeX code for amazing formulas etc.
But let’s go back to the present time. Here is what I did:
- Download Windows installer for TeX Live
- Start installer with administrator rights (right-click) and accept all default settings, then wait a really long time (more than three hours on an old Lenovo Thinkpad T520)
- Install Emacs. I still have EmacsW32 lying around (you need to fix some security settings), but it is no longer available for download. If you look for an alternative, perhaps you will find something here.
- Install Sumatra PDF. The critical feature for me is that it does not hold a write-lock on the file. So when the output PDF is updated in the background by latexmk, it does not cause any problems. I did the installation as administrator and changed the location to
C:\Program Files\SumatraPDF because I personally prefer it that way.
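Since latexmk keeps regenerating the PDF in the background, the two tools can be wired together in latexmk’s configuration. A sketch of a possible ~/.latexmkrc (the SumatraPDF path is the install location chosen above; adjust if yours differs):

```
# Build PDFs by default and preview them in Sumatra PDF,
# which does not lock the file between rebuilds.
$pdf_mode = 1;
$pdf_previewer = '"C:/Program Files/SumatraPDF/SumatraPDF.exe" %O %S';
```

With this in place, running latexmk -pvc on a document rebuilds it on every save, and Sumatra PDF reloads the updated output automatically.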
That’s all. Enjoy writing 🙂
Not the talk you would expect at a software development conference: what the Sioux City plane crash can teach us about (crisis) management.
A while ago Chef Software announced that they would move all source code to the Apache 2.0 license (see the announcement for details), which is something I welcome. Much less welcome to many was the fact that they also announced they would stop “free binary distributions”. In the past you could freely download and use the core parts of their offering, if that was sufficient for your needs. What upset many people was that the heads-up period for this change was rather short and many questions were left unanswered. It also did not help that their web site naturally still held many references to the old model, so people were confused.
In the meantime it seems that Chef has loosened their position on binary distributions a bit. There are now a number of binaries available under the Apache 2.0 license, and they can be found here. This means that you can use Chef freely, if you are willing to compromise on some features. Thanks a lot for this!
This post will describe what I did to set up a fresh Chef environment with only freely available parts. You need just two things to get started with Chef: the server and the administration & development kit. The latter goes by the name of ChefDK and can be installed on all machines on which development and administration work happens. It comes with various command line tools that allow you to perform the tasks needed.
Interestingly, you will find almost no references to ChefDK on the official web pages. Instead its successor “Chef Workstation” is positioned as the tool to use. There is only one slight problem here: the latest free version is pretty old (v0.4.2) and did not work for me, nor for various other people. That was when I decided to download the latest free version of ChefDK and give it a try. It worked immediately, and since I did not need any of the additional features that come with Chef Workstation, I never looked back.
No GUI is part of those free components. Of course Chef offers such a GUI (web-based), named Chef Management Console. It is basically a wrapper over the server’s REST API. Unfortunately the Management Console is “free” only up to 25 nodes. For that reason, but also because its functionality is somewhat limited compared to the command line tools, I decided not to cover it here.
Please check the licenses yourself when you follow the instructions below. It is solely your own responsibility to ensure compliance.
Below you will find a description of what I did to get things up and running. If you have a different environment (e.g. use Ubuntu instead of CentOS) you will need to check the details for your needs. But overall the approach should stay the same.
The environment I will use looks like this:
- Chef server: Linux VM with CentOS 7 64 bit (minimal selection of programs)
- Chef client 1: Linux VM like for Chef server
- Development and administration: Windows 10 Pro 64bit (v1909)
I am not sure yet whether I will expand this in the future. If you are interested, please drop a comment below.
Please check that your system meets the prerequisites for running Chef server.
The download is a bit tricky, since we don’t want to end up with something that falls under a commercial license. As of this writing (April 2020) the following component binaries are the latest that come under an Apache 2.0 license. I verified this by clicking on “License Information” underneath each of the binaries that I plan to use.
- Chef Infra Server: v12.19.31 (go here to check for changes)
- Chef DK: 3.13.1 (go here to check for changes)
As to the download itself, Chef offers various methods. Typically I would recommend using the package manager of your Linux distribution, but from a license perspective this will likely cause issues sooner or later.
Server Installation and Initial Setup
So what we will do instead is perform a manual download by executing the following steps (they are a sub-set of the official steps and all I needed to do on my system):
- All steps below assume that you are logged in as root on your designated Chef server. If you use sudo, please adjust accordingly.
- Ensure required programs are installed
yum install -y curl wget
- Open ports 80 and 443 in the firewall
firewall-cmd --permanent --zone public --add-service http && firewall-cmd --permanent --zone public --add-service https && firewall-cmd --reload
- Disable SELinux enforcement (the command below switches to permissive mode until reboot; edit /etc/selinux/config to make the change permanent)
setenforce 0
- Download install script from Chef (more information here)
curl -L https://omnitruck.chef.io/install.sh > chef-install.sh
- Make install script executable
chmod 755 chef-install.sh
- Download and install Chef server binary package: The RPM will end up somewhere in /tmp and be installed automatically for you. This will take a while (the download size is around 243 MB), depending on your Internet connection’s bandwidth.
./chef-install.sh -P chef-server -v "12.19.31"
- Perform initial setup and start all necessary components, this will take quite a while
chef-server-ctl reconfigure
- Create admin user
chef-server-ctl user-create USERNAME FIRSTNAME LASTNAME EMAIL 'PASSWORD' --filename USERNAME.pem
- Create organization
chef-server-ctl org-create ORG_SHORT_NAME 'Org Full Name' --association-user USERNAME --filename ORG_SHORT_NAME-validator.pem
- Copy both certificates (USERNAME.pem and ORG_SHORT_NAME-validator.pem) to your Windows machine. I use FileZilla (installers without bloatware can be found here) for such cases.
ChefDK Installation and Initial Setup
What I describe below is a condensed version of what worked for me. More details can be found on the official web pages.
- I use $HOME in the context below to refer to the user’s home directory on the Windows machine. You must manually translate it to the correct value (e.g. C:\Users\chris in my case).
- Download the latest free version of ChefDK for Windows 10 from here and install it
- Check success of installation by running the following command from a command prompt:
- Create directory and base version of configuration file for connectivity by running knife configure (it may look like it hangs, just give it some time)
- Add your server’s certificate (self-signed!) to the list of trusted certificates with
knife ssl fetch
- Verify that things work by executing knife environment list; it should return _default as the only existing environment
- The generated configuration file was named $HOME/.chef/credentials in my case. I decided to rename it to config.rb (which is the new name in the official documentation) and also update the contents:
- Remove the line with [default] at the beginning, which seemed to cause issues
- Add knife[:editor] = '"C:\Program Files\Notepad++\notepad++.exe" -nosession -multiInst' as the Windows equivalent of setting the EDITOR environment variable on Linux.
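A sketch of what the resulting $HOME/.chef/config.rb might then look like. The values below are illustrative placeholders only; the real node_name, key path, and server URL are the ones knife configure generated for your setup:

```
# $HOME/.chef/config.rb (illustrative values)
node_name       'USERNAME'
client_key      'C:/Users/chris/.chef/USERNAME.pem'
chef_server_url 'https://CHEF_SERVER_FQDN/organizations/ORG_SHORT_NAME'
knife[:editor] = '"C:\Program Files\Notepad++\notepad++.exe" -nosession -multiInst'
```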
We will create a very simple project here
- Go into the directory where you want all your Chef development work to reside (I use $HOME/src; the comment regarding the use of $HOME from above still applies) and open a command prompt
- Create a new Chef repo (where all development files live)
chef generate repo chef-repo (chef-repo is the name, you can of course change that)
- You will see that a new directory ($HOME/src/chef-repo) has been created with a number of files in it. Among them is ./cookbooks/example, which we will upload as a first test. Cookbooks are where instructions are stored in Chef.
- To be able to upload it, the cookbook path must be configured, so you need to add the following line to $HOME/.chef/config.rb:
- You can now upload the cookbook via
knife cookbook upload example
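The exact cookbook_path line depends on where your repository lives. With the $HOME/src/chef-repo location used in this walkthrough it might look like this (a sketch; adjust the path to your own setup):

```
# In $HOME/.chef/config.rb: tell knife where to find local cookbooks
cookbook_path ["#{ENV['USERPROFILE']}/src/chef-repo/cookbooks"]
```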
In order to have the cookbook executed, you must now add it to the recipe list (they take the cooking theme seriously at Chef) of the machines where you want it to run. But first you must bootstrap each machine for Chef.
- The bootstrap happens with the following command (I recommend checking all possible options via knife bootstrap --help) executed on your Windows machine:
knife bootstrap MACHINE_FQDN --node-name MACHINE_NAME_IN_CHEF --ssh-user root --ssh-password ROOT_PASSWORD
- You can now add the recipe to the client’s run-list for execution:
knife node run_list add MACHINE_NAME_IN_CHEF example
and you should get a confirmation message in return
- You can now check the execution by logging into your client and running chef-client as root. The client will also run automatically about every 30 minutes or so. But checking the result directly is always a good idea after you have changed something.
Congratulations, you can now maintain your machines in a fully automated fashion!
Another video that I found interesting
Springer has made quite a number of books freely available for download. Here is the link to those about computer science.
This is a bit of a follow-up to my recent post Don’t Promote for Performance.
After decades of an ever-increasing focus on success, not only in commercial environments, we have all become so accustomed to it that it feels strange to even take a step back and reconsider the approach. To be clear: I am not advocating a model where competition in and of itself is considered bad. Having seen what socialism did to East Germany, I am certainly not endorsing that or any similar model.
You may have heard the expression Pyrrhic Victory, which stems from an ancient battle (279 BC) whose winner suffered such extreme losses that his military capabilities were affected for years. Today it basically means that too high a price was paid for achieving a certain goal. And we see this all the time in today’s world.
Inside organizations it usually manifests as so-called “politics”, although I think that back-stabbing is a more suitable term in many cases. Someone tries to get something done at all costs, burning bridges along the way. This happens on pretty much all levels, and all too often people don’t even realize what they are doing. They are so focused on their target (usually because there is money in the game for them) that all “manners” are lost.
You have probably had that experience, too: someone treated you really badly, although they actually needed you in the process. This either means they are not aware of their behavior, i.e. there is total ignorance or simply a lack of self-reflection. Or they believe that by virtue of hierarchy you will have to do as they want, now and in the future.
Overall I think that such actions show unprofessional behavior. Firstly, these folks are basically advertising themselves as ruthless egomaniacs, which is becoming less and less acceptable. Secondly, as the saying goes, you always meet twice in life.
This is not to be mixed up with making an honest mistake that upsets people. Everybody does that sooner or later and I am no exception. If it happens to us, we are angry for a while and then move on. Also, in those cases people will mostly apologize once they realize what happened. For me this is simply civility and it is a vital component for an efficient (and probably also effective) way to interact socially.
The bottom line is that people who don’t treat their coworkers in a decent manner inflict a lot of damage on the organization. If superiors then look the other way because “the numbers are ok”, they send a clear message that such behavior is actually desired. The outcome is what is called a toxic organization. Would you like to work at such a place?
As a very technical person I have a somewhat unusual view on marketing. I do not buy into the “utterly useless” verdict that some technical folks have on marketing. But I also think that, probably just like us techies, some marketing folks overrate the importance of their domain. And I should probably add here that this post is written with enterprise software as the product category in mind. So naturally, a lot of the details will not match low-price consumer products.
In a nutshell, I think that it is marketing’s job to attract the (positive) attention of potential buyers. This can happen on several levels, e.g. brand or product marketing, online and print media, etc. It also often includes special events and being present at trade shows. And last but not least, a relatively recent addition is developer relations, where hard-core technical people are the specific target audience.
All these activities share the common goal of presenting a coherent and positive message to the (prospective) customer. The different stakeholders have vastly different demands, because of the perspective they take on the product (in this writing that always includes services as well) and their background. So, put simply, they all need a message tailored to their needs, which, at the same time, must be consistent with all the other versions for the other target channels.
On a high level that is not such a big deal. But at a closer look the different messages should not only be consistent but also be linked together at the correct points. Imagine a conversation where you just told a VP of logistics why your product really provides the value you claim. If you are then able to elegantly look over to the enterprise architect and explain why the product fits nicely into their overall IT strategy, that is a huge plus. And if you can then even bring the IT operations manager on board, with a side note about nice pre-built integrations with ITIL tools, you have done a really great job.
Some people will probably say that the hypothetical scenario above goes beyond marketing. I would say that it is beyond what a typical marketing department does. But the interesting question is where the content for such a conversation comes from. Is it the marketing department that employs some high-caliber people capable of bridging the various gaps? Or is it the sales team that has prepared things as an individual exercise (which often means that it is a one-off)?
The choice will greatly influence at least two critical KPIs: cost of sales and lead conversion rate. The former is rather obvious, because it is about re-use and efficiency. But, as is so often the case, the latter is much more critical, because here we talk about effectiveness. Or in other words: it hurts much more if the deal is lost after having spent thousands of Euros or Dollars than if we had to pay an additional 500 bucks for an additional presentation that secures the deal.
This is in fact one of the things where in my view too many people have a predisposition for the wrong thing. Many will gladly jump onto how something could be done better, cheaper, etc. But relatively few will take a step back and ask whether it is the right thing to do in the first place.
The critical thing is that the various marketing messages are consistent with one another and, much more importantly, with the post-sales reality. Putting “lipstick on a pig” is not good marketing but somewhere between bullshitting and fraud. And the most precious thing in customer relationship is trust. So unless you need a deal to literally survive the next few weeks, you should resist the temptation to screw your customer. Word gets around …
A few interesting examples from projects that show what can go wrong with software architectures.
Over the last couple of days we have been seeing a slight shift in the discussion around the Corona virus. More and more people are talking about the mid- to long-term implications for our lives, be it on the private level or related to economic consequences. Both areas will have to undergo long-lasting changes in behavior until we have proven medication and vaccination.
The following argument is made based on the numbers we know for Germany as of this writing, and those data contain quite some uncertainty. As to other countries, I think my points are universally valid at their core, because it is next to impossible that such a disease will “behave” differently between countries. Yes, each country is at its own point in time with regard to how many people are infected. Also, the speed at which the virus spreads and therefore the number of fatalities vary due to different measures being invoked etc. But I have a hard time believing that otherwise things will be drastically different.
The core question for most people (and companies alike) is how long things will need to stay as they are now. So far measures have been introduced in a pretty incremental way, rather than as a big bang. The reason for that, as governments have emphasized over and over again, was that decisions had to be made “as we learn new facts” about Corona. While not wrong per se, that is certainly not the only rationale for doing things this way. More importantly, it introduced people to those radical changes in small doses. There is the saying “nobody likes surprises”, and I think this is behind most of the communication we have seen. First, things are publicly debated as future possibilities for a while, so people can get used to them. And only after that are they made “rules”. In general this makes a lot of sense, simply to increase acceptance.
Chances are that we will see a “few” more such announcements. The next ones will probably focus more and more on what needs to happen before we can increase direct contact/interaction again. The immediate concern must be to limit the risk of infection. And here the consensus meanwhile seems to be that wearing face masks is of critical importance. I would like to add a final note here, because for too long there were official(!) statements that masks do not really help the broader population and should only be used by medical personnel. This is one of the dumbest arguments I have heard in my entire life! Either masks help or they don’t (assuming you know how to use them correctly). This was nothing but a really stupid way of saying “we do not have enough masks”.
To develop a rough understanding of where we are in the overall process, we need to think about how many people have already been infected. Early figures (and I have not seen an update on this for a while) estimated that 60% to 70% of the overall population will eventually contract the virus. Here in Germany, if we assume that six to ten times more people are infected than have been tested positive (currently around 125,000 of 83 million), that means less than 2% of the population have caught Corona so far. In this light it is safe to assume that we still have some way to go.
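The back-of-the-envelope numbers above can be written out explicitly; the factor of six to ten for undetected cases is the assumption stated in the paragraph:

```shell
# Rough share of the German population infected so far,
# assuming 6x to 10x more infections than confirmed cases.
awk 'BEGIN {
  population = 83000000
  confirmed  = 125000
  printf "low estimate:  %.1f%%\n", confirmed * 6  / population * 100
  printf "high estimate: %.1f%%\n", confirmed * 10 / population * 100
}'
```

Both bounds stay well below 2%, which is what supports the point that we are still at an early stage.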
As to what all this means for the next couple of weeks, my guess is that as soon as enough face masks are available some restrictions will be lifted. The focus will likely be a combination of things to boost morale and help the economy. What will probably stay in place are recommendations to still limit contacts as far as possible. So for all businesses, where this mode of operation is possible, that will mean working from home for a long time to come.