The Case of Multicast Message Loss on Windows Server 2012 R2

I have worked quite a bit with applications that use UDP multicast messaging recently, and I’ve run into a few issues along the way, such as multicast messages not being received on a Windows Failover Cluster.

So by now I have a solid checklist of things to configure on our servers, and of things to ensure in the applications that consume multicast messages, to keep everything running smoothly and message loss at acceptable levels. Yet on our latest Windows Server 2012 R2 machines, applications experienced serious datagram loss as the amount of network traffic on the machine (in general, not just multicast) increased.

I researched the problem online and got the tips you’d expect: get the latest NIC drivers, increase the NIC receive buffer sizes, turn on offloads, turn on and fine-tune receive-side scaling, increase socket buffer sizes, and so on. Of course, I had already tried all of those, and none of them had worked.
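For reference, the socket-buffer part of that advice looks roughly like this on the application side. This is a minimal Python sketch, not our production code; the group address, port, and buffer size are illustrative placeholders:

```python
import socket
import struct

def open_multicast_socket(group="239.1.2.3", port=5000, rcvbuf=256 * 1024):
    """Open a UDP socket subscribed to a multicast group, with an
    enlarged receive buffer. All parameter values are examples."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Request a larger receive buffer. The OS may cap the value, so
    # read it back with getsockopt() rather than trusting the request.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
    s.bind(("", port))
    # Join the multicast group on the default interface.
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    try:
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    except OSError:
        # No multicast-capable route available; the buffer tuning
        # above still applies.
        pass
    return s
```

Tuning like this helps with short bursts that overflow a small buffer, but it did nothing for the sustained loss described here.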

Solution: Exempting multicast traffic from the Base Filtering Engine

Eventually I found this support document: Datagram loss when you run a multicast receiver application in Windows 8 and in Windows Server 2012. The problem description matched exactly what I was seeing on our servers. Unfortunately, the document describes an issue in Windows Server 2012, and the hotfix it offers cannot be installed on Windows Server 2012 R2. Fortunately, it doesn’t have to be: on 2012 R2 the Base Filtering Engine supports the registry key out of the box, so you only need to set it.

# Replace XXXX-YYYY with the UDP port range your multicast receivers listen on.
New-ItemProperty HKLM:\System\CurrentControlSet\services\Tcpip\Parameters\ -Name UdpExemptPortRange -Value "XXXX-YYYY" -PropertyType MultiString -Force
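To confirm the value was actually written, you can read it back with standard PowerShell (the property name is the one set above; the range shown is whatever you chose):

```powershell
# Read the exemption range back from the registry to verify it.
(Get-ItemProperty HKLM:\System\CurrentControlSet\services\Tcpip\Parameters\ -Name UdpExemptPortRange).UdpExemptPortRange
```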

I haven’t found any official documentation on this setting, and prior to this post there were just four results when searching Google for UdpExemptPortRange. But as far as I can tell, it works.