I recently installed ecoDMS 18.09 on a Debian 10.5 VM, and it was a pleasant experience overall. However, the following things had to be done differently from the installation manual:
- Install gnupg manually with sudo apt-get install gnupg (it seems to be pre-installed on Ubuntu, but not on a minimal Debian)
- Do not install a Java environment yourself; let it be handled by the normal package dependency management
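Condensed into commands, the two deviations look like this on a fresh Debian 10 system (a sketch; the package name is the stock Debian one):

```shell
# gnupg ships out of the box on Ubuntu, but not on a minimal Debian install
sudo apt-get update
sudo apt-get install gnupg

# Deliberately NOT installing a JRE/JDK here: installing the ecoDMS package
# later lets apt pull in a suitable Java runtime as a normal dependency.
```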
The system is currently in light use (still in testing) for my newly founded company and runs quite well. The VM is hosted on ESXi 6, which runs on a Celeron 3900 (yes, two cores), and for a single user with just a few documents stored the performance is really nice.
So far I intend to stick with this system and will keep you updated.
For setting up VLANs in my home network, I basically followed the tutorial from Crosstalk Solutions on YouTube:
The result, however, did not work as expected. As it turned out, the critical difference was that my pfSense firewall runs virtualized on VMware ESXi 6.5, and by default the latter “removes” VLAN tags before frames reach the VM.
But this is very easy to change, and not even a reboot is required. Here are the steps you need to perform:
- Log in to the ESXi admin web UI
- Go to the network port groups and open your internal network
- Open the port group's settings
- Change the VLAN ID to 4095 (this enables Virtual Guest Tagging, so tagged frames are passed through to the VM unchanged)
- Save the change
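If you prefer the command line, the same change can be made from an SSH session on the host; the port group name below is a placeholder, substitute the name of your internal network:

```shell
# VLAN ID 4095 puts the port group into Virtual Guest Tagging mode, so all
# VLAN tags are passed through to the VM. "Internal Network" is a placeholder.
esxcli network vswitch standard portgroup set \
  --portgroup-name="Internal Network" --vlan-id=4095

# Verify: the port group should now show VLAN ID 4095
esxcli network vswitch standard portgroup list
```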
That’s all 🙂
While playing around with an ESXi 6.5 test system, I accidentally killed all network connectivity by setting the NICs to pass-through. This post gives a bit of background and the solution that worked for me.
The system is home-built with a Fujitsu D3410-B2 motherboard and an Intel dual-port Gigabit NIC (HP OEM). The motherboard has a Realtek RTL8111G chip for its onboard NIC, which allegedly works with community drivers, but not out of the box. One of the things I want to run on this box is a pfSense router. So when I discovered that the Realtek NIC was available for pass-through, I enabled it. I also enabled one(!) of the two ports of my Intel dual-port NIC. At least, that is what I had intended to do.
Because what really happened was that all three NICs were set to pass-through, which of course meant that ESXi itself no longer had any NIC available. The issue showed up after the next reboot, when the console told me that no supported NICs had been found in the system. Perhaps not wrong in the strictest sense, but certainly misleading when you are not very experienced with ESXi.
Searching the net did not turn up a real answer. But after a couple of minutes I realized that my pass-through change might be the culprit. The relevant file where these settings are stored is /etc/vmware/esx.conf. I searched for lines looking like this
/device/000:01.0/owner = "passthru"
and replaced them with
/device/000:01.0/owner = "vmkernel"
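If SSH access to the host is enabled, the whole cleanup can be scripted; this is a sketch that assumes the default esx.conf location and keeps a backup copy first:

```shell
# Location of the host configuration on ESXi (default path)
CONF=/etc/vmware/esx.conf

# Back up, list the pass-through claims, then hand the devices back to the
# VMkernel; the BusyBox sed on ESXi supports in-place editing with -i.
if [ -f "$CONF" ]; then
    cp "$CONF" "$CONF.bak"
    grep 'owner = "passthru"' "$CONF"
    sed -i 's/owner = "passthru"/owner = "vmkernel"/' "$CONF"
fi
```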
After that I just had to reboot and things were fine again.
Having upgraded to OS X Mavericks just recently, one of the main questions for me was whether I would need to upgrade from VMware Fusion 4 to the current version (which is Fusion 7). Initially the thought had not really occurred to me, but after the second crash of my MacBook I started to look around. Most of the content I found was about the support status, and some people had gotten really agitated in the discussion. But little was said about the technical side of things, i.e. whether the software actually worked or not.
What I can say today is that, for me, VMware Fusion 4.1.4 works nicely and without issues so far. I use it almost exclusively to run a company-provided Java development environment on 64-bit Windows 7. So no 3D graphics, USB drives, or other “fancy” things.
The crashes were most likely caused by an old version of the keyboard remapping tool Karabiner (formerly KeyRemap4MacBook). Once I had upgraded to the appropriate version, the crashes were gone.
I recently tried to install the 64-bit version of CentOS 5.4 on ESXi 4.1, but without much success. It stalled immediately after start (also in text mode), and all I got was a non-flashing underscore or dash. Several people have reported similar problems, and the general recommendation is to enable virtualization support (VT) in the BIOS of the ESXi host's mainboard. However, I had already installed other 64-bit OSes on the same host and therefore doubted this would help. Instead I simply tried CentOS 5.5 x86-64, and this immediately solved the issue.