Building a Home Virtualization Server With Proxmox
Running a dedicated virtualization server at home is a practical way to centralize always-on workloads like self-hosted services, infrastructure tooling, or test environments. In this post, we’ll walk through one possible setup using Proxmox VE as the hypervisor, Ansible for configuration management, and Packer to create reusable virtual machine templates.
The configuration is tailored for a single-node homelab using a compact mini-PC, but the principles can be adapted to larger or different environments. All playbooks, templates, and configuration files used in this guide are available in this GitHub repository for reference and reuse.
What is Proxmox?
Proxmox Virtual Environment (VE) is an open-source Type 1 hypervisor that runs directly on host hardware. It integrates KVM for full virtualization and LXC for containers, and includes a web UI, REST API, and features like clustering, backup scheduling, and storage integration.
Compared to Type 2 hypervisors (e.g., VirtualBox, VMware Workstation), Proxmox has lower overhead and better performance because it runs directly on the hardware rather than on top of a general-purpose host operating system.
Hardware Overview
For this setup, we’re using an ACEMAGICIAN Mini PC with the following specifications:
- AMD Ryzen 7 5700U (8 cores / 16 threads)
- 16 GB DDR4 RAM
- 512 GB NVMe SSD
- Dual 1 GbE ports
- Wi-Fi 6 and Bluetooth 5.2
This type of machine fits a common profile for homelab hosts—small, relatively quiet, and with sufficient compute and memory resources to run several VMs. Alternatives include repurposed desktops, NUCs, or rack-mounted servers depending on space, power, and budget constraints.
Prepare Ansible Resources
Ansible is used to automate the configuration of the Proxmox node. This includes creating the necessary user accounts, applying SSH keys, provisioning certificates, setting up networking, and integrating storage.
Creating a Vault File
Sensitive credentials and certificate data are stored securely using an Ansible Vault file. To create the vault:
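The vault location is up to you; assuming it lives next to the inventory under `ansible-inventory/group_vars/proxmox/`, creating it looks like this:

```bash
# Create an encrypted vault file; you are prompted for a vault password
# and then dropped into an editor to enter the contents.
ansible-vault create ansible-inventory/group_vars/proxmox/vault.yaml
```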
A minimal example of what this file should include is:
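The variable names below are illustrative rather than the exact ones from the repository; the point is that the node credentials and the per-node certificate material end up encrypted:

```yaml
# vault.yaml (encrypted with ansible-vault) -- illustrative variable names
vault_proxmox_root_password: "changeme"

vault_pve1_ssl_cert: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

vault_pve1_ssl_key: |
  -----BEGIN PRIVATE KEY-----
  ...
  -----END PRIVATE KEY-----
```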
These values are referenced in host variables and playbooks to provide per-node authentication and SSL configuration.
Inventory Configuration
The inventory file is written in YAML (`hosts.yaml`) and describes both hosts and group-level configuration:
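A sketch of such an inventory, with placeholder names and addresses, and vault references matching the example above:

```yaml
# ansible-inventory/hosts.yaml -- illustrative layout
proxmox:
  vars:
    gateway: 192.168.1.1
    nas_ip: 192.168.1.50
  hosts:
    pve1:
      ansible_host: 192.168.1.10
      i_ssl_cert: "{{ vault_pve1_ssl_cert }}"
      i_ssl_key: "{{ vault_pve1_ssl_key }}"
```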
This inventory layout separates group-wide configuration (e.g., default gateway, NAS IP) from node-specific values like SSL keys.
Ansible Configuration File
The `ansible.cfg` file defines project-wide Ansible behavior:
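A minimal `ansible.cfg` matching the behavior described below (the key path is an example):

```ini
[defaults]
remote_user = ansible
host_key_checking = False
private_key_file = ~/.ssh/ansible_id_ed25519
inventory = ansible-inventory
roles_path = roles
```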
This configuration does the following:
- Sets the default SSH user to `ansible`, which must exist on all target nodes.
- Disables host key checking to avoid interactive prompts on first-time connections (note: only advisable in trusted environments).
- Points to the SSH private key specific to the Ansible user.
- Sets the inventory path to `ansible-inventory`, matching the YAML layout above.
- Specifies a custom path for Ansible roles (if used).
Bootstrapping the Node
After Proxmox is installed on the mini-PC and a static IP is assigned (via a DHCP reservation or static configuration), the next step is to bootstrap the system for Ansible-based management. This includes creating a non-root user for Ansible, setting up SSH access, and applying basic system updates.
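The exact playbook will differ per environment; a minimal sketch that creates the `ansible` user, installs an SSH public key, grants passwordless sudo, and applies updates could look like this (it assumes the `ansible.posix` collection is installed and that the key path matches `ansible.cfg`):

```yaml
# bootstrap.yaml -- illustrative sketch; run once while connecting as root
- hosts: proxmox
  tasks:
    - name: Ensure sudo is installed
      ansible.builtin.apt:
        name: sudo
        state: present
        update_cache: true

    - name: Create the ansible user
      ansible.builtin.user:
        name: ansible
        shell: /bin/bash

    - name: Install the Ansible SSH public key
      ansible.posix.authorized_key:
        user: ansible
        key: "{{ lookup('file', '~/.ssh/ansible_id_ed25519.pub') }}"

    - name: Allow passwordless sudo for the ansible user
      ansible.builtin.copy:
        dest: /etc/sudoers.d/ansible
        content: "ansible ALL=(ALL) NOPASSWD:ALL\n"
        mode: "0440"
        validate: "visudo -cf %s"

    - name: Apply pending system updates
      ansible.builtin.apt:
        upgrade: dist
```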
To execute the playbook:
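Because the `ansible` user does not exist yet, the first run connects as root with password authentication; for example, assuming the playbook is named `bootstrap.yaml`:

```bash
# First run only: connect as root with a password
# (requires sshpass on the control machine for --ask-pass).
ansible-playbook bootstrap.yaml -u root --ask-pass
```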
This one-time setup ensures that your Proxmox node is accessible to Ansible using key-based auth and prepares it for the rest of the automated configuration steps.
Configuring Network Bonding and VLANs
The ACEMAGICIAN mini-PC used in this setup includes two Ethernet interfaces. To maximize available bandwidth and provide basic failover, we configure these into a single bonded interface using 802.3ad (LACP). This requires a managed switch with LACP support. On top of this bond, we create a VLAN-aware bridge that can be used for routing VM and container traffic.
This setup is not mandatory—many users run Proxmox on a single interface—but if your hardware and switch support link aggregation, it provides a good mix of throughput and redundancy.
Here’s the Jinja2 interface configuration template used by Ansible. Adapt interface names and VLAN settings to match your environment.
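The template below is a representative sketch rather than the exact one from the repository; it assumes the NICs are named `enp1s0` and `enp2s0` and that `gateway` and `ansible_host` come from the inventory shown earlier:

```jinja
{# interfaces.j2 -- illustrative; interface names and addresses are placeholders #}
auto lo
iface lo inet loopback

auto enp1s0
iface enp1s0 inet manual

auto enp2s0
iface enp2s0 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp1s0 enp2s0
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address {{ ansible_host }}/24
    gateway {{ gateway }}
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```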
The `vmbr0` bridge is used by Proxmox to route traffic to VMs and containers. It supports VLANs across a wide range (2–4094), which can be scoped further depending on your needs.
Configuring SSL Certificates for the Web Interface
Proxmox VE uses a self-signed certificate by default for its web interface. To avoid browser warnings and enable secure access, it’s possible to install your own certificates—either self-signed or issued by an internal certificate authority.
In this example, we assume certificates are pre-generated and stored securely (e.g., in an Ansible Vault or external secret manager). How to generate or issue self-signed certificates is beyond the scope of this post, but any PEM-encoded key and certificate pair can be used.
Below is a full Ansible task file that ensures the necessary files exist, deploys the key and certificate, and restarts the Proxmox web proxy service to apply the changes.
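A sketch of such a task file, assuming the `i_ssl_cert` and `i_ssl_key` host variables described below and the standard Proxmox paths for custom pveproxy certificates (a real playbook would typically restart the service via a handler rather than unconditionally):

```yaml
# pve-ssl.yaml -- illustrative sketch; variable names are assumptions
- name: Deploy custom SSL certificate for the Proxmox web interface
  ansible.builtin.copy:
    content: "{{ i_ssl_cert }}"
    dest: /etc/pve/local/pveproxy-ssl.pem

- name: Deploy custom SSL private key for the Proxmox web interface
  ansible.builtin.copy:
    content: "{{ i_ssl_key }}"
    dest: /etc/pve/local/pveproxy-ssl.key

- name: Restart pveproxy to pick up the new certificate
  ansible.builtin.service:
    name: pveproxy
    state: restarted
```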
This task assumes the certificate and key are stored as host variables (`i_ssl_cert`, `i_ssl_key`) and mapped from the vault, as shown earlier in the inventory.
After applying this playbook, Proxmox will serve HTTPS using your custom certificate pair. Make sure the certificate is trusted by your browser or operating system if it’s self-signed.
NFS Storage Integration With Synology NAS
To offload VM backups, ISO images, and templates from local disk, we configure an external NFS share from a Synology NAS. This helps centralize storage and makes it easier to manage and retain backups across hardware resets or upgrades.
After enabling NFS on the Synology (via Control Panel → File Services), create a shared folder (e.g., `proxmox`) and assign NFS permissions for your Proxmox node’s IP.
On the Proxmox side, storage can be defined using the following Ansible playbook. This task ensures the `storage.cfg` file includes the NFS mount and signals Proxmox to reload its storage configuration.
This configuration defines an NFS storage volume named `backups` that can hold ISO files, VM images, and backup archives. The `prune-backups` setting manages retention, keeping 7 daily and 4 weekly backups.
You can adjust the `export`, `path`, `content`, and `prune-backups` settings to suit your specific storage layout and retention policy. Be sure the NAS IP address is reachable from Proxmox and correctly mapped via the `nas_ip` variable in your Ansible inventory.
Creating VM Templates with Packer and Cloud-Init
To simplify and standardize VM deployments, we use Packer to build reusable Proxmox VM templates. These templates are configured to use Cloud-Init, which enables injecting SSH keys, user data, and network configuration at first boot. This approach allows for fast, consistent provisioning of new VMs.
This method requires that your workstation has Packer installed and that the Proxmox node is reachable from the workstation. You also need to open TCP ports 8000–9000 to support Packer’s temporary HTTP server used during VM provisioning. Additionally, the ISO file for the operating system must be downloaded manually and made available at the configured location in your Proxmox storage.
Proxmox API Token Setup
Packer connects to Proxmox using an API token. To create one:
- In the Proxmox web UI, go to Datacenter → Permissions → API Tokens.
- Click Add, select the appropriate user, and deselect “Privilege Separation”.
- Save the token ID and secret. These will be used in the credentials file.
Directory Layout
The Packer configuration is organized as follows:
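The exact layout depends on the repository; an illustrative structure for a single Ubuntu template could be:

```text
packer/
├── credentials.pkrvars.hcl            # Proxmox API URL, token ID/secret, node, storage
└── ubuntu-server-focal/
    ├── ubuntu-server-focal.pkr.hcl    # proxmox-iso build definition
    ├── ubuntu.auto.pkrvars.hcl        # template variables (VM ID, ISO, sizing)
    └── http/
        ├── meta-data
        └── user-data                  # autoinstall / Cloud-Init config, incl. SSH key
```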
Building a Template
To build an Ubuntu 20.04 Cloud-Init image for Proxmox:
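First, create the credentials file that the build expects; the file and variable names here are assumptions, so align them with the actual template:

```hcl
# credentials.pkrvars.hcl -- illustrative values only; never commit real secrets
proxmox_api_url          = "https://192.168.1.10:8006/api2/json"
proxmox_api_token_id     = "root@pam!packer"
proxmox_api_token_secret = "00000000-0000-0000-0000-000000000000"
proxmox_node             = "pve1"
proxmox_storage          = "local-lvm"
```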
Update the file with your Proxmox API URL, token ID, token secret, target node, and storage name.
Then configure the Ubuntu template variables:
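Again, the file and variable names are placeholders; a sketch of an auto-loaded variables file for the Ubuntu 20.04 template:

```hcl
# ubuntu-server-focal/ubuntu.auto.pkrvars.hcl -- illustrative values
vm_id     = 9000
vm_name   = "ubuntu-2004-cloudinit-template"
iso_file  = "local:iso/ubuntu-20.04.6-live-server-amd64.iso"
disk_size = "20G"
memory    = 2048
cores     = 2
```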
Next, update the SSH public key used for Cloud-Init so that new VMs boot with your key preinstalled.
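For Ubuntu 20.04 the key typically lives in the autoinstall `user-data` served from the template’s `http/` directory; a minimal illustrative fragment (hostname, username, password hash, and key are placeholders):

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: ubuntu-template
    username: ubuntu
    password: "$6$examplehash..."   # crypted password placeholder
  ssh:
    install-server: true
    authorized-keys:
      - ssh-ed25519 AAAA...example... ansible@workstation
```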
Validate and build the image:
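Assuming the layout and file names used above, the validation and build steps look like this (adjust paths to your repository):

```bash
cd packer/ubuntu-server-focal

# First run only: install the Packer plugins declared in the template
packer init .

# Check that the template and variable files parse and are consistent
packer validate -var-file=../credentials.pkrvars.hcl .

# Build the template: boots a temporary VM on the Proxmox node, runs the
# automated install, and converts the result into a reusable template
packer build -var-file=../credentials.pkrvars.hcl .
```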
After a successful run, a new Ubuntu template will be created on your Proxmox node. This template can be cloned to create new VMs, which are configured automatically at first boot using Cloud-Init.
Conclusion
This blog post demonstrated one possible approach to setting up a home virtualization server using Proxmox VE, Ansible, and Packer. We started from bare-metal installation, configured networking and secure access, integrated external storage, and automated the creation of reusable virtual machine templates.
The tools and practices outlined here represent a solid foundation, but are by no means the only way to achieve a reliable virtualization stack at home. Depending on your requirements, you might explore clustering, high availability, monitoring integration, or more advanced image pipelines.
As always, adapt the tools and processes to your environment, and iterate based on real-world feedback and usage.
Happy engineering!