So, when you need (or want, frankly) to have a number of VMs running to serve whatever usages you encounter, you’ll likely start looking into a hypervisor manager of some sort. There are a few, like Microsoft Hyper-V, VMware ESXi, etc. The one I’m going to be discussing is Proxmox VE.
A nice thing about Proxmox VE (which I’ll just call PVE) is that it’s open source, and also open core. Sorta. About the only things they gatekeep are support (duh) and their Enterprise repository for updates.
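If you go without a subscription, the usual move is to swap the Enterprise repo for the public “no-subscription” one. A sketch of what that looks like, assuming PVE 8 on Debian bookworm (adjust the suite name for your version):

```shell
# Comment out the enterprise repo (it 401s without a subscription key)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the public no-subscription repo instead
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt full-upgrade
```

Proxmox describes the no-subscription repo as less tested than Enterprise, which is a fair trade for a homelab.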
All in all, a good exchange I’d say. The cost for support isn’t all that nuts either; they have a pretty solid pricing scheme, even for one-off support incidents.
Otherwise, all the features are there for you to use: clustering, root on ZFS, container management, VM management, etc.
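Both VM and container management are scriptable from the CLI too, via `qm` and `pct`. A quick sketch (the names, IDs, and sizes here are made up for illustration):

```shell
# Create a VM (ID 100) with 2 cores and 4 GB RAM, NIC on the default bridge
qm create 100 --name test-vm --cores 2 --memory 4096 \
  --net0 virtio,bridge=vmbr0

# Create an LXC container (ID 101) from a downloaded template
# (template filename is hypothetical; list yours with `pveam list local`)
pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname test-ct --memory 1024 --net0 name=eth0,bridge=vmbr0,ip=dhcp
```

Anything you can click in the web UI generally has a CLI equivalent, which is handy for automation later.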
The hypervisor it uses is QEMU/KVM, which backs Amazon’s EC2 last I checked. Good enough for them, good enough for me :)
During the initial install, even if you pick DHCP for the interfaces, it will set itself up with a static IP assignment, so keep that in mind. For me, I didn’t want ZFS root, just normal ext4 block devices, and for my needs here I didn’t need any funky networking configurations, because it’s just a VM host for me. I’m not trying to virtualize a router (well, not THE router; my last post talks about my adventures in WireGuard).
So I get to leave the default bridge just as is, and all of the VMs will get a DHCP address assigned. Once assigned, I’ll statically assign the address in my hardware router. That prevents any other hinkiness from cropping up, without having to mess with network configurations on the host.
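For reference, the default bridge the installer writes out looks roughly like this; the interface name and addresses below are examples, yours will differ:

```shell
# /etc/network/interfaces (approximately what the PVE installer generates)
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
```

Every VM NIC attached to `vmbr0` ends up on the LAN like a physical machine, which is why plain DHCP plus router-side reservations works so cleanly.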
The install went pretty fast for both hosts. Clustering them was a simple “turn it on here, now join the second one”. However, I only have two nodes, so at some point I need to get a quorum/fencing node in place, or a third host (which is likely going to be the case).
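That cluster setup boils down to two commands; the hostnames here are placeholders. And for the two-node quorum problem, Proxmox’s answer short of a full third node is a QDevice, which can run on something as small as a Pi:

```shell
# On the first node: create the cluster
pvecm create my-cluster

# On the second node: join it (IP or hostname of the first node)
pvecm add pve1.example.lan

# Check quorum and membership
pvecm status

# Two-node clusters lose quorum when either node goes down.
# A QDevice on a third machine casts the tie-breaking vote:
#   apt install corosync-qdevice      # on both cluster nodes
#   pvecm qdevice setup <qdevice-ip>  # run once from a cluster node
```

Without quorum, a surviving node goes read-only for cluster operations, so the tie-breaker matters even in a homelab.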
Nothing too nuts here, the