Building a Home Virtualization Server With Proxmox

Running a dedicated virtualization server at home is a practical way to centralize always-on workloads like self-hosted services, infrastructure tooling, or test environments. In this post, we’ll walk through one possible setup using Proxmox VE as the hypervisor, Ansible for configuration management, and Packer to create reusable virtual machine templates.

The configuration is tailored for a single-node homelab using a compact mini-PC, but the principles can be adapted to larger or different environments. All playbooks, templates, and configuration files used in this guide are available in this GitHub repository for reference and reuse.


What is Proxmox?

Proxmox Virtual Environment (VE) is an open-source Type 1 hypervisor that runs directly on host hardware. It integrates KVM for full virtualization and LXC for containers, and includes a web UI, REST API, and features like clustering, backup scheduling, and storage integration.

Compared to Type 2 hypervisors (e.g., VirtualBox, VMware Workstation), Proxmox offers lower overhead and generally better performance, since it does not run on top of a separate desktop operating system.


Hardware Overview

For this setup, we’re using an ACEMAGICIAN Mini PC with the following specifications:

  • AMD Ryzen 7 5700U (8 cores / 16 threads)
  • 16 GB DDR4 RAM
  • 512 GB NVMe SSD
  • Dual 1 GbE ports
  • Wi-Fi 6 and Bluetooth 5.2

This type of machine fits a common profile for homelab hosts—small, relatively quiet, and with sufficient compute and memory resources to run several VMs. Alternatives include repurposed desktops, NUCs, or rack-mounted servers depending on space, power, and budget constraints.


Prepare Ansible Resources

Ansible is used to automate the configuration of the Proxmox node. This includes creating the necessary user accounts, applying SSH keys, provisioning certificates, setting up networking, and integrating storage.

Creating a Vault File

Sensitive credentials and certificate data are stored securely using an Ansible Vault file. To create the vault:

cd ansible-inventory/group_vars
ansible-vault create all.yaml
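
If you need to inspect or change the secrets later, the matching ansible-vault subcommands can be used:

ansible-vault edit all.yaml   # decrypts the file into $EDITOR and re-encrypts on save
ansible-vault view all.yaml   # prints the decrypted contents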

A minimal example of what this file should include is:

---
vault_proxmox_servers_bootstrap_user: root
vault_proxmox_servers_bootstrap_pass: 'password'

vault_proxmox1_ssl_key: |
  -----BEGIN RSA PRIVATE KEY-----
  ...
  -----END RSA PRIVATE KEY-----

vault_proxmox1_ssl_cert: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

These values are referenced in host variables and playbooks to provide authentication and SSL configuration per-node.

Inventory Configuration

The inventory file is written in YAML format (hosts.yaml) and describes both hosts and group-level configuration:

all:
  vars:
    i_ansible_public_key_path: ".ssh/ansible/id_rsa.pub"

proxmox_servers:
  hosts:
    pve1.lab:
      ansible_host: "IP ADDRESS"
      i_ssl_key: "{{ vault_proxmox1_ssl_key }}"
      i_ssl_cert: "{{ vault_proxmox1_ssl_cert }}"

  vars:
    i_default_gateway: "IP ADDRESS"
    i_network_mask: "NETWORK MASK" # Example: /24
    i_bootstrap_user: "{{ vault_proxmox_servers_bootstrap_user }}"
    i_bootstrap_pass: "{{ vault_proxmox_servers_bootstrap_pass }}"
    i_nas_ip: "IP ADDRESS"

This inventory layout separates group-wide configuration (e.g. default gateway, NAS IP) from node-specific values like SSL keys.
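
Before writing any playbooks against it, you can confirm that Ansible parses the inventory and the vaulted variables as expected:

ansible-inventory -i ansible-inventory --graph                           # group/host structure
ansible-inventory -i ansible-inventory --host pve1.lab --ask-vault-pass  # resolved variables for one node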

Ansible Configuration File

The ansible.cfg file defines project-wide Ansible behavior:

[defaults]
remote_user = ansible
host_key_checking = false
private_key_file = ~/.ssh/ansible/id_rsa
inventory = ansible-inventory
roles_path = ansible-roles/

This configuration does the following:

  • Sets the default SSH user to ansible, which must exist on all target nodes.
  • Disables host key checking to avoid interactive prompts on first-time connections (note: only advisable in trusted environments).
  • Points to the SSH private key specific to the Ansible user.
  • Sets the inventory path to ansible-inventory, matching the YAML layout above.
  • Specifies a custom path for Ansible roles (if used).
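
Since Ansible picks this file up from the directory you run it in, a quick way to confirm the settings are active is to dump everything that differs from the defaults:

ansible-config dump --only-changed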

Bootstrapping the Node

After Proxmox is installed on the mini-PC and a fixed IP address is assigned (either through a DHCP reservation or static configuration), the next step is to bootstrap the system to allow Ansible-based management. This includes creating a non-root user for Ansible, setting up SSH access, and applying basic system updates.

---
- name: Bootstrap proxmox server
  hosts: proxmox_servers
  remote_user: "{{ i_bootstrap_user }}"
  become: true
  vars:
    ansible_ssh_pass: "{{ i_bootstrap_pass }}"
    ansible_become_pass: "{{ i_bootstrap_pass }}"
  tasks:
    - name: Ensure ansible user is created
      ansible.builtin.user:
        name: ansible
        password_lock: true
        state: present

    - name: Ensure ssh key is configured for ansible user
      ansible.posix.authorized_key:
        user: ansible
        state: present
        exclusive: true
        key: "{{ lookup('file', lookup('env', 'HOME') + '/' + i_ansible_public_key_path ) }}"

    - name: Ensure ansible user has sudo privileges
      ansible.builtin.lineinfile:
        path: /etc/sudoers.d/ansible
        line: "ansible ALL = (ALL) NOPASSWD: ALL"
        state: present
        create: true
        owner: root
        group: root
        mode: '0440'

    - name: Ensure enterprise source lists are commented out
      ansible.builtin.replace:
        path: "{{ item }}"
        regexp: '^([^#].*)'
        replace: '#\1'
      loop:
        - /etc/apt/sources.list.d/pve-enterprise.list
        - /etc/apt/sources.list.d/ceph.list

    - name: Ensure packages are updated
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
        autoclean: true
        autoremove: true
        cache_valid_time: 3600
        force_apt_get: true

    - name: Ensure sudo package is installed
      ansible.builtin.apt:
        force_apt_get: true
        pkg: sudo
        state: present
        update_cache: true

    - name: Reboot the system
      ansible.builtin.reboot:

To execute the playbook:

ansible-playbook --ask-vault-pass proxmox-bootstrap.yaml

This one-time setup ensures that your Proxmox node is accessible to Ansible using key-based auth and prepares it for the rest of the automated configuration steps.
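
A quick way to verify the result is an ad-hoc ping as the ansible user (the vault password is still required because the group variables are encrypted):

ansible proxmox_servers -m ping --ask-vault-pass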


Configuring Network Bonding and VLANs

The ACEMAGICIAN mini-PC used in this setup includes two Ethernet interfaces. To maximize available bandwidth and provide basic failover, we configure these into a single bonded interface using 802.3ad (LACP). This requires a managed switch with LACP support. On top of this bond, we create a VLAN-aware bridge that can be used for routing VM and container traffic.

This setup is not mandatory—many users run Proxmox on a single interface—but if your hardware and switch support link aggregation, it provides a good mix of throughput and redundancy.

Here’s the Jinja2 interface configuration template used by Ansible. Adapt interface names and VLAN settings to match your environment.

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto enp4s0
iface enp4s0 inet manual

auto bond0
iface bond0 inet manual
  bond-slaves eno1 enp4s0
  bond-miimon 100
  bond-mode 802.3ad
  bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
  address {{ ansible_host }}{{ i_network_mask }}
  gateway {{ i_default_gateway }}
  bridge-ports bond0
  bridge-stp off
  bridge-fd 0
  bridge-vlan-aware yes
  bridge-vids 2-4094

The vmbr0 bridge is used by Proxmox to route traffic to VMs and containers. It supports VLANs across a wide range (2–4094), which can be scoped further depending on your needs.
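
Once the bridge is in place, individual VM interfaces can be tagged onto a VLAN when they are attached to vmbr0. As a sketch, assuming a VM with ID 100 that should live on VLAN 20 (both values are placeholders):

qm set 100 --net0 virtio,bridge=vmbr0,tag=20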


Configuring SSL Certificates for the Web Interface

Proxmox VE uses a self-signed certificate by default for its web interface. To avoid browser warnings and enable secure access, it’s possible to install your own certificates—either self-signed or issued by an internal certificate authority.

In this example, we assume certificates are pre-generated and stored securely (e.g., in an Ansible Vault or external secret manager). How to generate or issue self-signed certificates is beyond the scope of this post, but any PEM-encoded key and certificate pair can be used.

Below is a full Ansible task file that ensures the necessary files exist, deploys the key and certificate, and restarts the Proxmox web proxy service to apply the changes.

---
- name: Ensure SSL certificates for Proxmox web UI are configured
  hosts: proxmox_servers
  become: true
  vars:
    ssl_key: "{{ i_ssl_key }}"
    ssl_cert: "{{ i_ssl_cert }}"

  tasks:
    - name: Ensure files are created with correct permissions
      ansible.builtin.file:
        path: "{{ item }}"
        state: touch
        modification_time: preserve
        access_time: preserve
      loop:
        - /etc/pve/local/pveproxy-ssl.pem
        - /etc/pve/local/pveproxy-ssl.key

    - name: Ensure certificate is present
      ansible.builtin.copy:
        dest: "/etc/pve/local/pveproxy-ssl.pem"
        content: "{{ ssl_cert }}"
      notify: Restart pveproxy

    - name: Ensure private key is present
      ansible.builtin.copy:
        dest: "/etc/pve/local/pveproxy-ssl.key"
        content: "{{ ssl_key }}"
      notify: Restart pveproxy

This task assumes the certificate and key are stored as host variables (i_ssl_cert, i_ssl_key) and mapped from the vault, as shown earlier in the inventory.
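
The Restart pveproxy handler referenced by notify is not shown above; a minimal version, assuming it is defined in the same play, could look like this:

  handlers:
    - name: Restart pveproxy
      ansible.builtin.service:
        name: pveproxy
        state: restarted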

After applying this playbook, Proxmox will serve HTTPS using your custom certificate pair. Make sure the certificate is trusted by your browser or operating system if it’s self-signed.
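
Once the playbook has run, the certificate served on the web UI port (8006) can be inspected from any machine with OpenSSL available:

openssl s_client -connect pve1.lab:8006 -servername pve1.lab </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates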


NFS Storage Integration With Synology NAS

To offload VM backups, ISO images, and templates from local disk, we configure an external NFS share from a Synology NAS. This helps centralize storage and makes it easier to manage and retain backups across hardware resets or upgrades.

After enabling NFS on the Synology (via Control Panel → File Services), create a shared folder (e.g., proxmox) and assign NFS permissions for your Proxmox node’s IP.

On the Proxmox side, storage can be defined using the following Ansible playbook. This task ensures the storage.cfg file includes the NFS mount, and signals Proxmox to reload its storage configuration.

- name: Ensure external nfs share is configured
  ansible.builtin.blockinfile:
    path: /etc/pve/storage.cfg
    append_newline: true
    prepend_newline: true
    block: |
      nfs: backups
              export /volume1/proxmox
              path /mnt/pve/backups
              server {{ i_nas_ip }}
              content iso,backup,images
              prune-backups keep-daily=7,keep-weekly=4
  notify: Restart proxmox

This configuration defines an NFS storage volume named backups that can hold ISO files, VM images, and backup archives. The prune-backups line manages retention, keeping 7 daily and 4 weekly backups.

You can adjust the export, path, content, and prune-backups settings to suit your specific storage layout and retention policy. Be sure the NAS is reachable from the Proxmox node and that its address is correctly mapped via the i_nas_ip variable in your Ansible inventory.
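
After the change is applied, the new storage should appear next to the local volumes; the pvesm tool can confirm it is mounted and active:

pvesm status          # lists all configured storages and their status
pvesm list backups    # shows content stored on the new NFS share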


Creating VM Templates with Packer and Cloud-Init

To simplify and standardize VM deployments, we use Packer to build reusable Proxmox VM templates. These templates are configured to use Cloud-Init, which enables injecting SSH keys, user data, and network configuration at first boot. This approach allows for fast, consistent provisioning of new VMs.

This method requires that your workstation has Packer installed and that the Proxmox node is reachable from the workstation. You also need to allow inbound connections on TCP ports 8000–9000 on the workstation, since Packer serves installer files from a temporary HTTP server in that port range during VM provisioning. Additionally, the ISO file for the operating system must be downloaded manually and made available at the configured location in your Proxmox storage.

Proxmox API Token Setup

Packer connects to Proxmox using an API token. To create one:

  1. In the Proxmox web UI, go to Datacenter → Permissions → API Tokens.
  2. Click Add, select the appropriate user, and deselect “Privilege Separation”.
  3. Save the token ID and secret. These will be used in the credentials file.
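
If you prefer the command line, an equivalent token can be created directly on the Proxmox node (shown here for root@pam with a token named packer as an example; the secret is printed only once, so copy it immediately):

pveum user token add root@pam packer --privsep 0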

Directory Layout

The Packer configuration is organized as follows:

packer/
├── credentials.template.pkrvars.hcl         # Proxmox API token and endpoint
├── credentials.pkrvars.hcl                  # User-defined credentials (not checked in)
├── ubuntu-server-focal.pkrvars.hcl          # VM parameters (iso storage, vlan, ssh key)
└── ubuntu-server-focal/
    ├── values.auto.pkrvars.hcl              # VM parameters (disk size, name, cloud image)
    └── http/user-data                       # Cloud-Init user-data template

Building a Template

To build an Ubuntu 20.04 Cloud-Init image for Proxmox:

# create credentials file
cp packer/credentials.template.pkrvars.hcl packer/credentials.pkrvars.hcl
chmod 600 packer/credentials.pkrvars.hcl
vim packer/credentials.pkrvars.hcl

Update the file with your Proxmox API URL, token ID, token secret, target node, and storage name.

Then configure the Ubuntu template variables:

vim packer/ubuntu-server-focal.pkrvars.hcl

Update the SSH key used for Cloud-Init in:

vim packer/ubuntu-server-focal/http/user-data

Validate and build the image:

make pkr_validate
make pkr_build

After a successful run, a new Ubuntu template will be created on your Proxmox node. This template can then be cloned to create new VMs, which are configured automatically at first boot using Cloud-Init.
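
As a rough sketch of consuming the template, assuming it was created with VM ID 9000 and the clone should become VM 101 (both IDs are placeholders):

qm clone 9000 101 --name test-vm --full
qm set 101 --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp
qm start 101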


Conclusion

This blog post demonstrated one possible approach to setting up a home virtualization server using Proxmox VE, Ansible, and Packer. We started from bare-metal installation, configured networking and secure access, integrated external storage, and automated the creation of reusable virtual machine templates.

The tools and practices outlined here represent a solid foundation, but are by no means the only way to achieve a reliable virtualization stack at home. Depending on your requirements, you might explore clustering, high availability, monitoring integration, or more advanced image pipelines.

As always, adapt the tools and processes to your environment, and iterate based on real-world feedback and usage.

Happy engineering!