Home Lab Guides: Proxmox 6 — PCI(e) Passthrough with NVIDIA


This guide will take you through the entire process of preparing Proxmox 6.3 for PCI(e) passthrough and creating a Windows 10 VM with an NVIDIA GTX 970 graphics card passed through.

If you’re new to Proxmox, check out Home Lab Guides: Proxmox 6 — Basic Setup and Installation for a guide to installing Proxmox 6.3.

NOTE: This guide may work with other graphics cards, but I haven’t tested to be sure.

Home Lab Guides: Proxmox 6

Part 1: Basic Setup and Installation
Part 2: PCI(e) Passthrough with NVIDIA
Part 3: IPFire Firewall
Part 4: Pi-hole
Part 5: Network and System Monitoring with Grafana, InfluxDB and Telegraf
Part 6: Automation with Ansible

Guide Host System Specs

Processor: i7-4820K
Motherboard: P9X79 LE
Memory: 24GB
OS HDD: 500GB SSD
ISO HDD: 500GB HDD
Video Card: GTX 970

Table of Contents

1.0 Preparation
1.1 Hardware Installation
1.2 Hardware Virtualization
1.3 Boot Priority
2.0 Host Setup
2.1 GRUB
2.2 Kernel Modules
3.0 Windows 10 VM
3.1 Prepare the VM
3.1.1 Create the VM
3.1.2 Modify the Processor
3.1.3 Connect the Graphics Card
3.2 OS Installation
3.3 NVIDIA Drivers
3.4 Remote Desktop
3.5 Post Installation
4.0 Author's Notes

1.0 Preparation

1.1 Hardware Installation

Make sure you’ve installed your NVIDIA graphics card before continuing to save time later.

1.2 Hardware Virtualization

UEFI menus differ between vendors, so to enable hardware virtualization and the IOMMU (Intel VT-d / AMD-Vi), look for Enable Virtualization Technology and any VT-d/IOMMU option and set them to Enabled.

It’s possible that it’s listed under a different name, so be sure to consult the motherboard manual as well as ensure that your system can handle virtualization if you can’t find it.
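If you can’t find the setting, you can also confirm from any Linux shell whether the CPU advertises hardware virtualization at all:

```shell
# Count virtualization flags in /proc/cpuinfo:
# vmx = Intel VT-x, svm = AMD-V.
# 0 means the CPU doesn't support it, or it's disabled in firmware.
grep -Ec '(vmx|svm)' /proc/cpuinfo
```

A non-zero count means the CPU supports virtualization; you still need the UEFI switch enabled for the IOMMU to work.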

1.3 Boot Priority

Boot into the UEFI menu then navigate to the boot options/priority list and set your UEFI Proxmox drive as #1 to ensure the system boots properly every time.

NOTE: You may need to disable CSM to ensure the system boots using UEFI.

2.0 Host Setup

2.1 GRUB

SSH or console into your host and run nano /etc/default/grub to start editing your GRUB config file.

Look for: GRUB_CMDLINE_LINUX_DEFAULT="quiet"

Replace with the one for your system:

Intel: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
AMD: GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

Save and exit, then run update-grub to regenerate the GRUB configuration with the new kernel parameters.
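Once the host reboots (after the next section), you can verify the kernel actually enabled the IOMMU:

```shell
# Intel hosts log DMAR lines; AMD hosts log AMD-Vi lines.
# Look for "DMAR: IOMMU enabled" (Intel) or "AMD-Vi" messages (AMD).
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
```

No matching lines usually means the GRUB change didn’t take effect or the UEFI setting from 1.2 is still disabled.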

2.2 Kernel Modules

Run nano /etc/modules and add the following lines:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Save and exit, then run update-initramfs -u -k all to rebuild the initramfs so the VFIO modules load at boot, and reboot the host.
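After the reboot, you can check that the modules loaded:

```shell
# Each vfio module added above should appear in the output.
lsmod | grep vfio
```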

3.0 Windows 10 VM

If you don’t know how to create a new virtual machine, check out section 4.2 Create a Virtual Machine in Part 1 of the Home Lab Guide series.

3.1 Prepare the VM

The next few steps need to be completed BEFORE you start the VM.

3.1.1 Create the VM

Create a Windows 10 VM and click Advanced in the bottom left then use the following settings:

NOTE: If not mentioned, leave the defaults or fill in as required, i.e. VM name.

OS
Guest OS:
Type: Microsoft Windows
SYSTEM
Qemu Agent: checked
BIOS: OVMF (UEFI)
Machine: q35 # Necessary for PCIe
CPU
Sockets: 1
Cores: 2
MEMORY
Memory: 8192 (8GB minimum)
NETWORK
No device: checked # Only if you intend to activate Win10 later
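If you prefer the command line, the same VM can be created with Proxmox’s qm tool. This is only a sketch: the VM ID 100, the local-lvm storage, the disk size, and the ISO filename are placeholders for your own values.

```shell
# Sketch only: VM ID 100, "local-lvm" storage, and the ISO name are
# placeholders -- substitute your own values.
# q35 is required for PCIe passthrough; OVMF provides UEFI firmware
# and needs a small EFI vars disk (efidisk0). Omitting --net0 matches
# "No device: checked" above.
qm create 100 \
  --name win10 --ostype win10 \
  --machine q35 --bios ovmf \
  --efidisk0 local-lvm:1 \
  --agent 1 \
  --sockets 1 --cores 2 --memory 8192 \
  --scsi0 local-lvm:64 \
  --cdrom local:iso/Win10.iso
```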

3.1.2 Modify the Processor

In the WebUI, click on your Win10 VM in the left column then click Hardware to display the current settings.

NOTE: You’ll need to know your processor’s microarchitecture. My i7-4820K is Ivy Bridge, so the matching Proxmox CPU type is IvyBridge.

  1. Click on Processors, then Edit.
  2. Change the type to your processor’s model.
  3. Click OK.

3.1.3 Connect the Graphics Card

While still in the Hardware section:

  1. Click Add and choose PCI Device.
  2. Click Advanced.
  3. Choose your graphics card from the Device list.
  4. Check All Functions and PCI Express. Leave Primary GPU unchecked for now.
  5. Click OK.
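The same attachment can be sketched from the Proxmox shell; here 100 is a placeholder VM ID and 01:00 a placeholder PCI address — use the values from your own system.

```shell
# Find your card's PCI address (e.g. 01:00.0 for the GPU and
# 01:00.1 for its HDMI audio function):
lspci -nn | grep -i nvidia

# Placeholder values: VM ID 100, address 01:00. Giving the address
# without a function suffix passes every function of the card
# ("All Functions"); pcie=1 corresponds to the "PCI Express" checkbox.
# Primary GPU (x-vga) stays off for now, as above.
qm set 100 --hostpci0 01:00,pcie=1
```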

3.2 OS Installation

  1. Click Start to get the VM running.
  2. Click on Console to open a console window.
  3. Press any key to boot from the CD, then follow the steps in the Windows installer. You can leave the defaults or change them as you like; it won’t impact the passthrough later on.

3.3 NVIDIA Drivers

Open a new console window if needed and log in to your VM.

  1. Browse to https://www.nvidia.com/Download/index.aspx to find your drivers. Don’t bother installing GeForce Experience right now.
  2. Install the NVIDIA drivers.
  3. Open Device Manager and note the yellow warning icon next to your video card. The card should show its correct name, and under Properties you’ll see the device has stopped working with error Code 43.

3.4 Remote Desktop

  1. Open Control Panel.
  2. Navigate to System and Security.
  3. Click Allow remote access.
  4. Click Allow remote connections to this computer.
  5. Click OK.
  6. Reboot the VM from within Windows, then, once it comes back up, shut it down from within Windows.

3.5 Post Installation

In the WebUI, click on your Win10 VM in the left column then click Hardware to display the current settings.

  1. Edit Display and set it to VirtIO-GPU (virtio) so you can use either RDP or the Console.
  2. Edit the PCI Device and check Primary GPU to bypass Code 43.
  3. Start the VM.
  4. Open Device Manager to confirm the warning icon is gone from your graphics card. You may need to reboot twice from within the VM.
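For reference, the two hardware edits above map to these qm commands (a sketch; 100 and 01:00 are placeholders for your VM ID and PCI address):

```shell
# Display: VirtIO-GPU (virtio), usable over Console or RDP.
qm set 100 --vga virtio
# x-vga=1 is the "Primary GPU" checkbox in the WebUI.
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1
```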

Your graphics card should now be successfully passed through to your Windows VM. Fire up Steam and take a break. You earned it!

4.0 Author’s Notes

  • Thanks for reading this article! If you have any questions, suggestions or comments, feel free to leave a comment.
  • I came across dozens of different guides offering all sorts of special args for the VM and extra GRUB entries, but none of them worked. The real problem is that NVIDIA’s consumer drivers refuse to initialize when they detect they’re running in a VM. Setting the processor type to the actual model instead of host seems to bypass Code 43, and it also seems to prevent the RDP failures after setting Primary GPU, with the added benefit of letting you connect through both the Console and RDP.