Running Hyper-V in a QEMU/KVM Guest

This article is a how-to on setting up nested virtualization, in particular running Microsoft Hyper-V as a guest of QEMU/KVM. The usual terminology is used throughout: L0 is the bare-metal host running Linux with KVM and QEMU; L1 is L0’s guest, running Microsoft Windows Server 2016 with the Hyper-V role enabled; and L2 is L1’s guest, a virtual machine running Linux, Windows, or anything else. Only Intel hardware is considered here. The same may be achievable with AMD’s hardware virtualization support, but it has not been tested yet.

Update 4/2017: Nested virtualization on AMD hardware is currently broken; a fix is coming.
Update 7/2017: Check out Nesting Hyper-V in QEMU/KVM: Known issues for a list of known issues and their status.

A quick note on performance. Since the Intel VMX technology does not directly support nested virtualization in hardware, what L1 perceives as hardware-accelerated virtualization is in fact software emulation of VMX by L0. Thus, workloads will inevitably run slower in L2 compared to L1.

Kernel / KVM

A fairly recent kernel is required for Hyper-V on QEMU/KVM to function properly. The first commit known to work is 1dc35da, available in Linux 4.10 and newer.

Nested Intel virtualization must be enabled. If the following command does not return “Y”, kvm-intel.nested=1 must be passed to the kernel as a parameter.

$ cat /sys/module/kvm_intel/parameters/nested
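In addition to passing the parameter on the kernel command line, the setting can be made persistent with a modprobe configuration file. A minimal sketch follows; the file name kvm-nested.conf is an arbitrary choice, any name under /etc/modprobe.d/ works:

```shell
# Check whether nested virtualization is currently enabled (should print "Y")
cat /sys/module/kvm_intel/parameters/nested

# Make it persistent across reboots (hypothetical file name)
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf

# Apply without rebooting by reloading the module (all VMs must be stopped)
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel
```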

Update 4/2017: On newer Intel CPUs with PML (Page Modification Logging) support such as Kaby Lake, Skylake, and some server Broadwell chips, PML needs to be disabled by passing kvm-intel.pml=0 to the kernel as a parameter. Fix is coming.
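As with the nested parameter, PML can be disabled either on the kernel command line or via a modprobe option. A sketch, assuming an arbitrarily named configuration file:

```shell
# Persistently disable Page Modification Logging for kvm_intel
# (file name is a hypothetical choice)
echo "options kvm_intel pml=0" | sudo tee /etc/modprobe.d/kvm-pml.conf

# Or disable it for the current session only (all VMs must be stopped)
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel pml=0
```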


QEMU

QEMU 2.7 should be enough to make nested virtualization work. As always, it is advisable to use the latest stable version available. SeaBIOS version 1.10 or later is also required.

The QEMU command line must include the +vmx cpu feature, for example:

-cpu SandyBridge,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx

If QEMU warns about the vmx feature not being available on the host, nested virtualization has likely not been enabled in KVM (see the Kernel / KVM section above).
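Putting the pieces together, a complete invocation might look like the following sketch. The machine type, memory size, CPU count, and disk image name (win2016.qcow2) are placeholders to be adjusted for the actual setup:

```shell
# Hypothetical L1 guest invocation; only the -cpu flags are from the article
qemu-system-x86_64 \
  -enable-kvm \
  -machine q35 \
  -cpu SandyBridge,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx \
  -smp 4 \
  -m 8G \
  -drive file=win2016.qcow2,if=virtio
```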


Windows

Once the Windows L1 guest is installed, add the Hyper-V role as usual. At the time of writing, only Windows Server 2016 is known to support nested virtualization.


If Windows complains about missing hardware virtualization support, re-check the QEMU and SeaBIOS versions. If the Hyper-V role is already installed and nested virtualization is misconfigured or not supported, the error shown by Windows tends to mention “Hyper-V components not running”, as in the following screenshot.


If everything goes well, both Gen 1 and Gen 2 Hyper-V virtual machines can be created and started. Here’s a screenshot of Windows XP 64-bit running as a guest in Windows Server 2016, which itself is a guest in QEMU/KVM.



9 thoughts on “Running Hyper-V in a QEMU/KVM Guest”

      1. I’m using Proxmox (kernel 4.10.11-1-pve & pve-qemu-kvm 2.9.0-1) and installed Windows Server 2012 Standard, but when I try to create a vSwitch it gives the error “The operation on computer ‘localhost’ failed”. I tried with both the e1000 and virtio drivers but the result is the same, and I googled everywhere but didn’t find any solution.


      2. This seems to work for me on both 2012 and 2012 R2 with the Fedora 4.10.11-200 kernel and the latest upstream QEMU. Can you give me your QEMU command line and the exact steps to follow on a freshly installed 2012 to get the error you mentioned?


  1. I’ve set up Server 2016 Hyper-V in KVM successfully on my setup (Haswell, have tried kernels 4.10 and 4.11), but it looks like networking breaks after installing the role. I did not configure a virtual Hyper-V switch on the e1000 adapter, though I have tried to do so before. I noticed that you have the yellow warning icon as well, do you have the same issue?


    1. I got networking working by using the virtio adapter. Strange, though; the e1000 adapter apparently repeatedly disconnects and reconnects itself in the previous case.


      1. That’s a good workaround. e1000e should also work if you don’t want to use virtio.

        Another thing to try is starting the VM with kernel-irqchip=split.

        I’ll update the article with a list of known issues and limitations soon. Thanks!

