Hello,
I have the following configuration:
- 2 networks on separate physical adapters, each connected via a different physical switch: 192.168.0.0/24 and 192.168.1.0/24
- 5 cluster nodes running W2k8 R2 with the latest updates (192.168.0.32 to 192.168.0.36 and 192.168.1.32 to 192.168.1.36)
- 1 domain controller running W2k8 R2 with the latest updates (192.168.0.37)
- 1 shared multipath iSCSI storage array for CSV (192.168.0.38, 192.168.1.38)
- cluster virtual IP (192.168.0.39)
- quorum mode: Node Majority
Currently, I have the firewall configured to allow TCP, UDP, and ICMP from "Local subnet", and this way everything works OK.
But since there are other systems within these subnets (non-domain hosts), I'd like to narrow the firewall rules so that only domain members can communicate fully. I am using Group Policy (GP) to set up the firewall rules.
So I tried setting up the "Local Domain Policy" firewall rules to use IP ranges instead of the whole "Local subnet" scope.
I entered both ranges, 192.168.0.32-192.168.0.39 and 192.168.1.32-192.168.1.39, to fully accept TCP, UDP, and ICMP (3 rules total). The instant I removed "Local subnet" and applied the GP with only these IP ranges, the Cluster service reported it was unable to communicate with the other nodes and went down on all 5 nodes, refusing to start again. Reapplying the GP with "Local subnet" brought the cluster back up.
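For reference, the scoping I intended is equivalent to this minimal Python sketch. It only illustrates the ranges I entered; the example addresses in the asserts are just the machines from the list above, and of course Windows Firewall does the real matching:

# Minimal sketch, NOT how Windows Firewall matches internally: just the
# remote-address ranges I entered in the three GP firewall rules.
import ipaddress

ALLOWED_RANGES = [
    (ipaddress.ip_address("192.168.0.32"), ipaddress.ip_address("192.168.0.39")),
    (ipaddress.ip_address("192.168.1.32"), ipaddress.ip_address("192.168.1.39")),
]

def is_allowed(addr: str) -> bool:
    """True if addr falls inside one of the configured IPv4 ranges."""
    ip = ipaddress.ip_address(addr)
    if ip.version != 4:
        return False  # the rules I entered only cover IPv4 ranges
    return any(lo <= ip <= hi for lo, hi in ALLOWED_RANGES)

# Every host from the list above passes the scope:
assert is_allowed("192.168.0.37")      # domain controller
assert is_allowed("192.168.1.38")      # iSCSI storage, second path
# A non-domain host in the same subnet is now excluded:
assert not is_allowed("192.168.0.50")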
After removing "Local subnet", I tested TCP communication to various listening ports between the nodes, and everything looked OK. DNS resolution was also correct.
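Roughly, the manual checks amounted to something like this (a hypothetical Python sketch; the ports and host name are placeholders, not necessarily the exact ones I tested):

# Rough equivalent of the manual checks I ran after removing "Local subnet":
# a plain TCP connect to a listening port, plus forward DNS resolution.
import socket
from typing import Optional

NODES = ["192.168.0.32", "192.168.0.33", "192.168.0.34",
         "192.168.0.35", "192.168.0.36"]
TEST_PORTS = [135, 445, 3389]  # placeholder ports, not the exact ones I tried

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dns_check(name: str) -> Optional[str]:
    """Forward-resolve a host name; None on failure."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

for node in NODES:
    for port in TEST_PORTS:
        state = "open" if tcp_check(node, port) else "blocked"
        print(f"{node}:{port} -> {state}")

print("node1 resolves to:", dns_check("node1.example.local"))  # hypothetical name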
What could I have missed here?