Possible Scenario: On Windows XP and Windows Server 2003, IGMPv3 is enabled by default. You join a multicast group using setsockopt() with IP_ADD_SOURCE_MEMBERSHIP and then send a multicast packet, and you see an IGMPv2 packet leaving the machine when you were expecting IGMPv3. What's wrong?
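For context, a source-specific join like the one described above looks roughly like the following. This is a minimal sketch, not the original code: the group and source addresses are made-up examples, and note that the `ip_mreq_source` field order differs between platforms (Windows packs multiaddr, sourceaddr, interface; Linux packs multiaddr, interface, sourceaddr).

```python
# Sketch: issuing a source-specific multicast join with
# IP_ADD_SOURCE_MEMBERSHIP. On Windows XP/2003 this is the setsockopt
# call that should trigger an IGMPv3 source-specific membership report.
import socket
import struct
import sys

GROUP = "232.1.1.1"      # example SSM group address (assumption)
SOURCE = "192.0.2.10"    # example permitted source (assumption)
IFACE = "0.0.0.0"        # INADDR_ANY: let the stack pick the interface

# Fall back to the raw option value if this Python build does not
# export the constant: 15 on Windows, 39 on Linux.
IP_ADD_SOURCE_MEMBERSHIP = getattr(
    socket, "IP_ADD_SOURCE_MEMBERSHIP",
    15 if sys.platform == "win32" else 39)

def mreq_source(group, source, iface):
    """Pack struct ip_mreq_source: three in_addr fields, 12 bytes total."""
    g = socket.inet_aton(group)
    s = socket.inet_aton(source)
    i = socket.inet_aton(iface)
    if sys.platform == "win32":
        return g + s + i          # Windows field order
    return g + i + s              # Linux field order

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
req = mreq_source(GROUP, SOURCE, IFACE)
try:
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, req)
    print("source-specific join issued")
except OSError as exc:
    # The join can fail if no multicast-capable interface is configured.
    print("join failed:", exc)
```

After this call succeeds, a packet capture on the interface is where you would check whether the membership report went out as IGMPv3 or IGMPv2.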
Then you check the registry value below to see whether you are actually running IGMPv3 or not—
IGMPVersion DWORD 4
Or follow the "Configure the registry" section of the KB article below.
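For reference, a `.reg` fragment that sets this value. The key path below is an assumption based on where Windows XP/2003 keep TCP/IP parameters; verify it against the KB article's "Configure the registry" section before importing it.

```reg
Windows Registry Editor Version 5.00

; Assumed key path; confirm against the KB article.
; Values: 2 = IGMPv1, 3 = IGMPv2, 4 = IGMPv3.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"IGMPVersion"=dword:00000004
```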
If IGMPVersion is 4, then your machine is configured for IGMPv3, and your code is likely correct—it may well produce the expected IGMPv3 traffic for some other customer or on some other network. So what is wrong?
The reason is that if any host or router on the network is using an IGMP version lower than v3, the whole network falls back to that version, so your machine/interface starts using the lower IGMP version as well. This behavior is described on the IETF website.
See "IGMP Mixed Version Proxying", slides 11, 12, and 13.
But what if I want to send only IGMPv3? Is there a workaround?
Unfortunately, there is no way to work around it.
Other scenario: My machine is configured for IGMPv3, but I am seeing the problem described above. However, if I disable and then re-enable the interface, I see some IGMPv3 packets going out, but after some time I see only IGMPv2 packets. Why is that?
When the interface comes up and the machine is configured for IGMPv3, it starts sending IGMPv3 packets. But as soon as an IGMPv2 query is received on that interface, it falls back to IGMPv2, and from then on it sends only IGMPv2 packets.
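This fallback can be modeled as a tiny state machine. The sketch below is an illustrative simplification of the "Older Version Querier Present" behavior from RFC 3376, not the actual Windows TCP/IP implementation, and the timeout value is a placeholder: the host starts at its configured version when the interface comes up, drops to the version of any lower-version query it hears, and only reverts to the configured version after a timeout during which no such queries arrive.

```python
# Illustrative model (simplified from RFC 3376 section 7.2.1; NOT the
# Windows stack's actual code) of how a host's operating IGMP version
# falls back when older-version queries are heard on the interface.
class IgmpHost:
    def __init__(self, configured_version=3, timeout=400):
        self.configured = configured_version
        self.timeout = timeout        # "older version querier present" timeout, seconds (placeholder)
        self.enable_interface()

    def enable_interface(self):
        # On (re-)enable the host starts at its configured version,
        # which is why a few IGMPv3 reports appear after a bounce.
        self.version = self.configured
        self._older_timer = 0

    def receive_query(self, query_version):
        # A lower-version query forces fallback and (re)arms the timer.
        if query_version < self.version:
            self.version = query_version
        if query_version < self.configured:
            self._older_timer = self.timeout

    def tick(self, seconds):
        # Only after the timer runs out with no further older-version
        # queries does the host revert to its configured version.
        self._older_timer = max(0, self._older_timer - seconds)
        if self._older_timer == 0:
            self.version = self.configured

host = IgmpHost(configured_version=3)
print(host.version)        # 3: v3 reports right after the interface comes up
host.receive_query(2)
print(host.version)        # 2: falls back as soon as a v2 query arrives
host.tick(400)
print(host.version)        # 3: reverts only after the full timeout expires
```

The disable/enable behavior in the scenario above corresponds to `enable_interface()` resetting the state: the host briefly speaks IGMPv3 again until the next IGMPv2 query pulls it back down.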