Yes. Proxmox VE 9.1 can be installed directly on a single 4 TB NVMe using ZFS, and this is a common, supported configuration.
Below is the precise guidance you need to decide how to do it and what to watch for.
Supported ZFS installation modes (single NVMe)
During the installer, select ZFS (Advanced). With one disk, you’ll get:
1. ZFS Single Disk
- Topology: stripe (no redundancy)
- Pros:
  - ZFS features: snapshots, checksums, compression
  - Simple, fast
- Cons:
  - No redundancy (disk failure = restore from backup)
➡️ Recommended if you already have backups (e.g., PBS, restic, off-host).
2. ZFS RAID0 (single disk)
- Functionally identical to Single Disk
- No practical advantage over the default single-disk ZFS option
Installer settings you should explicitly tune
When you click Options in the ZFS installer screen:
ZFS pool settings (recommended)
| Setting | Value | Rationale |
|---|---|---|
| Ashift | 12 | Correct for NVMe / 4K sectors |
| Compression | lz4 | Near-zero CPU cost, real gains |
| Checksum | on | Default, keep it |
| Copies | 1 | >1 wastes space on single disk |
| ARC Max | Leave default | Tune later if needed |
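The ashift value is the base-2 logarithm of the pool's sector size, so 12 corresponds to 4096-byte (4K) sectors, which matches typical NVMe drives. A quick sanity check of that arithmetic:

```shell
# ashift is log2(sector size): ashift=12 → 2^12 = 4096-byte sectors,
# matching the 4K physical sectors typical of NVMe drives.
ashift=12
echo $((1 << ashift))   # prints 4096
```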
Partitioning behavior (normal)
Proxmox will create:
- EFI System Partition
- ZFS boot pool (rpool)
- Swap on ZFS (zvol)
This is expected and correct.
NVMe + ZFS considerations (important)
Memory
- ZFS benefits from RAM, but does not require “1 GB per TB”.
- Practical minimums:
  - 16 GB RAM: acceptable
  - 32 GB RAM: comfortable
  - 64 GB RAM: ideal if running many VMs
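If you do decide to cap the ARC later, the `zfs_arc_max` module parameter takes a value in bytes, set via `/etc/modprobe.d/`. A minimal sketch, assuming a 16 GiB cap as the target (pick a value that suits your RAM):

```shell
# Compute zfs_arc_max in bytes for an assumed 16 GiB ARC cap.
arc_gib=16
arc_bytes=$((arc_gib * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${arc_bytes}"
# Put the printed line in /etc/modprobe.d/zfs.conf, then run
# `update-initramfs -u -k all` and reboot to apply.
```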
TRIM
Enable after install (recommended for NVMe):
```
zpool set autotrim=on rpool
```
Boot reliability
- ZFS-on-root is fully supported.
- Booting from ZFS root is handled by proxmox-boot-tool (GRUB or systemd-boot, depending on firmware) and works reliably in Proxmox 9.x.
- Keep /etc/kernel/proxmox-boot-uuids intact.
What you should not do
- ❌ Do not mirror with partitions on the same NVMe
- ❌ Do not disable compression
- ❌ Do not oversize swap (default is fine)
- ❌ Do not expect redundancy from single-disk ZFS
When ZFS is not the right choice
Consider ext4 + LVM instead if:
- You have <16 GB RAM
- This is a disposable lab host
- You don’t care about snapshots or integrity
Otherwise, ZFS is objectively superior.
Bottom line
Yes — installing Proxmox VE 9.1 on a 4 TB NVMe with ZFS is fully supported and recommended. Use ZFS single disk + lz4 compression, enable autotrim, and rely on external backups.
If you want, I can:
- Propose an optimal dataset layout (rpool/data, VM disks, containers)
- Tune ARC / swappiness post-install
- Review this for homelab vs production use cases
In the Proxmox VE installer, DHCP selection is not automatic or obvious, but it is supported. The key point is what you leave empty.
How to use DHCP during Proxmox VE install (PVE 9.1)
Step-by-step (installer UI)
1. Proceed normally until Network Configuration.
2. You will see fields such as:
   - Management interface
   - Hostname (FQDN)
   - IP Address / CIDR
   - Gateway
   - DNS server
3. Do this to enable DHCP:
   - Select the correct NIC (e.g. eno1, enp1s0)
   - Enter the Hostname (FQDN) ← this is mandatory
   - Leave the following fields EMPTY:
     - IP Address
     - Gateway
     - DNS
4. Continue the installation.
➡️ The installer will configure the management interface to use DHCP.
What Proxmox does internally
After install, you’ll see something like this in /etc/network/interfaces:
```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```
Key line:
```
iface vmbr0 inet dhcp
```
Important caveats (do not skip)
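To confirm from a shell which address method vmbr0 ended up with, the interfaces file can be parsed directly. `vmbr0_mode` below is a small helper of my own, not a Proxmox command:

```shell
# Print the address method (dhcp / static / manual) of vmbr0 from an
# interfaces(5)-style file. The path is a parameter so the helper can
# be pointed at /etc/network/interfaces on a real host.
vmbr0_mode() {
  awk '$1 == "iface" && $2 == "vmbr0" && $3 == "inet" { print $4 }' "$1"
}

if [ -r /etc/network/interfaces ]; then
  vmbr0_mode /etc/network/interfaces   # e.g. prints "dhcp"
fi
```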
1. Hostname must still resolve
- Proxmox requires a valid FQDN
- Example: pve01.lab.local
- Your DHCP-provided DNS must resolve this hostname back to the IP (forward resolution at minimum)
If not, you’ll see:
- Web UI login issues
- pveproxy warnings
- Cluster problems later
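A quick way to test forward resolution from the shell (`check_fqdn` is an illustrative helper, not a Proxmox tool):

```shell
# Forward-resolution check: pveproxy and clustering need the node's
# FQDN to resolve (via /etc/hosts or DNS). getent consults the same
# NSS sources the system itself uses.
check_fqdn() {
  if getent hosts "$1" >/dev/null; then
    echo "resolves"
  else
    echo "MISSING"
  fi
}

check_fqdn "$(hostname -f)"
```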
2. DHCP is acceptable for:
- Lab / homelab
- Initial install
- Temporary staging
DHCP is not recommended long-term for:
- Production
- Clusters
- Corosync environments
Best practice is:
- DHCP reservation or
- Convert to static IP after install
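For reference, a static conversion of vmbr0 typically looks like the stanza below; the addresses are placeholders for illustration, substitute your own:

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Apply with `ifreload -a` (ifupdown2) or a reboot.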
Verify after install
Once logged in:
```
ip a show vmbr0
```
Confirm:
- IP assigned
- Correct gateway
If you want to convert to static later
I can:
- Generate the exact /etc/network/interfaces stanza
- Validate DNS & /etc/hosts
- Make it cluster-safe (important if you plan HA)
Bottom line
To select DHCP during install:
✔️ Leave IP, Gateway, and DNS fields empty
✔️ Enter a valid hostname
✔️ Select the correct NIC
If you want, tell me:
- Single node or future cluster?
- Homelab or production?
- IPv4 only or dual-stack?
I’ll tailor the post-install network config accordingly.
Short answer: leave the PIN interface unused unless you explicitly need it.
Below is the precise reasoning and what not to do.
What the PIN network interface is
In the Proxmox installer, PIN = Proxmox Installer Network.
- Used only during installation
- Provides temporary connectivity for:
  - Package access (if needed)
  - Installer environment
- Not persisted after reboot
Once installation completes, the system boots using:
- vmbr0 (management bridge)
- Your selected physical NIC
What you should do in practice
During install
- Do nothing
- Do not assign:
  - IP
  - VLAN
  - Bridge
- Leave it as-is
After install
- The PIN interface disappears
- It is not visible in:
  - /etc/network/interfaces
  - Proxmox Web UI
  - ip a
No cleanup required.
Common mistakes to avoid
❌ Trying to:
- Assign a static IP to PIN
- Bridge PIN to vmbr0
- Use PIN for management
- Troubleshoot connectivity via PIN after install
These will not work and can break assumptions during install.
When PIN would matter (rare)
Only relevant if:
- You are performing custom automated installs
- PXE or scripted installer environments
- Debugging installer-level networking
For a normal install (including DHCP + ZFS), ignore it completely.
Recommended post-install network layout (single NIC)
```
Physical NIC (eno1)
        ↓
vmbr0 (DHCP or static)
        ↓
Proxmox host + VMs
```
Clean, supported, and cluster-safe.
Bottom line
PIN is temporary installer plumbing. Leave it untouched and forget it exists.
If you want, next we can:
- Harden the network layout (mgmt vs VM traffic)
- Add VLAN-aware bridges
- Prepare the node for future clustering
Just say the word.