I'm trying to understand a fundamental concept of clustering - is it just the DATA that's replicated (or is SHARED more technically correct?), or is it also the APPLICATION that's replicated (or shared)?
I have identical servers, and I'm trying to plan for disaster recovery. I'm wondering if "clustering" might be the best way to plan for failure.
I have a mission-critical SQL application that I'd like to keep online at all times. If I understand clustering correctly, it's just the data that's replicated (shared), so if the software program that accesses the SQL database is on a server that becomes unavailable...you're screwed - it doesn't matter that the DATA is protected against failure, right?
Or am I wrong, and clustering will also replicate (share) the application itself, so if either server becomes unavailable...the end users wouldn't know the difference and could continue to work?
Yes? No?
(Finally, I'd like to design my Exchange deployment for continuous availability too - is clustering a good candidate there as well?)
Ed