PVE02.D5E.dev

Proxmox Virtual Environment (PVE) setup

OWC 10GbE Ethernet on Debian

https://forum.proxmox.com/threads/minor-change-thunderbolt-networking.148656/
https://eshop.macsales.com/Service/Knowledgebase/Article/43/856/OWC-Thunderbolt-3-Pro-Dock-Driver

AQN107 (10GbE) and AQN108 (5GbE) based adapters should work fine with any recent Linux kernel. A Thunderbolt 10GbE adapter will need to contain both a Thunderbolt chip (such as Intel JHL8440) and an Ethernet controller chip such as AQN107, because nobody makes a single chip that natively speaks both Thunderbolt and 10GbE.

Deeper research suggests that (1) Intel Titan Ridge JHL8540 and JHL8340, and (2) Broadcom BCM8487X-based adapters are out there as well.

Best practices

Separate network interfaces for VMs and containers: Dedicate one network interface to handling all the traffic for your virtual machines (VMs) and containers. This separation ensures that VMs can communicate efficiently without interfering with the host system’s operations.

Dedicated network for Proxmox cluster communication: Use a separate network interface (or VLAN) for Proxmox cluster communication. This is crucial for cluster data synchronization, heartbeat signals, and migration traffic. A dedicated interface for cluster communication improves the overall stability and performance of the cluster.
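A minimal /etc/network/interfaces sketch of that split — the interface names (enp1s0, enp2s0), addresses, and VLAN tag 50 are assumptions for illustration, not values from this setup:

```
# /etc/network/interfaces (sketch)
# enp1s0 carries VM/container traffic via bridge vmbr0;
# enp2s0 VLAN 50 is reserved for cluster (corosync) traffic.

auto enp1s0
iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

auto enp2s0.50
iface enp2s0.50 inet static
        address 10.50.0.10/24
```

Apply with `ifreload -a` (ifupdown2 ships with PVE) and keep the cluster subnet off the VM bridge entirely.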

What about web interface?

cloud-init

3. VLAN configuration: Use Virtual LANs (VLANs) to segment network traffic and enhance security. VLANs allow you to isolate traffic for different groups of VMs or services, reducing the risk of internal threats and improving network management.

4. Quality of Service (QoS): Implement QoS rules to prioritize traffic and ensure that critical services get the bandwidth they need. This is especially important in environments where network resources are heavily utilized.

5. Firewall and security: Use Proxmox’s built-in firewall to protect your network. Configure firewall rules to control incoming and outgoing traffic for both the host and VMs, and ensure that only necessary ports are open and accessible from the network.

By following these best practices, you can create a robust and efficient networking environment for your Proxmox VE infrastructure, ensuring that your VMs and cluster operations run smoothly and securely.
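As a concrete sketch of the firewall point, a datacenter-level rule set lives in /etc/pve/firewall/cluster.fw. The 192.168.1.0/24 management subnet below is an assumed example, not a value from this setup:

```
# /etc/pve/firewall/cluster.fw (sketch)
# Create the ACCEPT rules *before* setting enable: 1,
# or you can lock yourself out of the web UI.
[OPTIONS]
enable: 1

[RULES]
# web UI (8006) and SSH (22) only from the management subnet
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 22
```

Check the result with `pve-firewall status` and `pve-firewall compile` before trusting it.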

https://pve.proxmox.com/wiki/Cloud-Init_Support
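The wiki page above boils down to building a reusable cloud-init template once, then cloning it. A sketch of the usual steps on a PVE node — VM ID 9000, the storage name local-lvm, and the Debian image filename are assumptions to substitute:

```shell
# Build a cloud-init template (run on the PVE node).
qm create 9000 --name debian12-template --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit      # attach the cloud-init CD drive
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --serial0 socket --vga serial0  # cloud images expect a serial console
qm template 9000
```

New VMs then come from `qm clone 9000 <vmid> --name <name>`, with SSH keys and IPs injected via `qm set --sshkeys` / `--ipconfig0`.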

References

apt-get install net-tools
netstat -rn

pvenode acme account register account-name bh@dicaire.com
pvenode config set --acme domains=d5e.dev

I made a couple of blog posts/videos on doing some Proxmox automation. Here’s standard Proxmox automation: https://gregsowell.com/?p=7677 Here’s migrating VMs from VMware to Proxmox: https://gregsowell.com/?p=7690

Ansible

I initially came across the community.general.proxmox module. However, from what I can gather, this module is for managing LXC containers on a Proxmox host, NOT for managing VMs. Be wary of this, as it was frequently returned as a top result in many of my Google searches.

robert-sandor/ansible-proxmox: Ansible playbook to setup a proxmox pve server https://gist.github.com/yvesh/ae77a68414484c8c79da03c4a4f6fd55
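For VMs, the relevant module is community.general.proxmox_kvm. A hedged sketch of cloning a template — the host, node, token names, and template name are placeholders, not values from this setup:

```yaml
# Hypothetical playbook; substitute your own API host, node,
# credentials, and template name.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone a cloud-init template into a new VM
      community.general.proxmox_kvm:
        api_host: pve02.d5e.dev
        api_user: root@pam
        api_token_id: ansible
        api_token_secret: "{{ vault_pve_token }}"
        node: pve02
        clone: debian12-template
        name: web01
        state: present
```

An API token scoped to just the needed privileges beats putting the root password in a playbook.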

Mac Mini Intel setup

Proxmox on a Mac mini

https://forum.proxmox.com/threads/installing-proxmox-ve-7-2-on-a-mac-mini-2018.114218/

Update: I had no issues with the normal Proxmox 7.4 ISO! Here’s all I did:

1. Follow the pre-install steps from https://wiki.t2linux.org/guides/preinstall/
2. When holding Option, choose “EFI Boot” rather than the GRUB disk
3. Install Proxmox normally (be patient on boot; the T2 just seems to slow it down a bit)

So far, everything seems to work: USB A and B and Ethernet. I haven’t tried Wi-Fi or Bluetooth yet, nor do I really plan to.

https://t2linux.org/guides/preinstall/
https://pve.proxmox.com/pve-docs/chapter-pve-installation.html#_instructions_for_macos
https://github.com/AdityaGarg8/pve-edge-kernel-t2
https://forum.proxmox.com/threads/very-poor-performance-vs-esxi-at-first-glance.130288/

The 2018 Mac mini can now run VMware ESXi 8.01 internally flawlessly, thanks to native 1GbE / 10GbE Ethernet + Thunderbolt 3 support from VMware since ESXi version 7.03, as well as a community fling NVMe driver for the built-in T2 chip-protected storage! More info and driver links here: https://www.williamlam.com/2021/02/apple-nvme-driver-for-esxi-using-new-community-nvme-driver-for-esxi-fling.html All that is missing is exposing the environment sensors for temperature and fans to ESXi. Enjoy!