High Availability (Clustering) forum

HyperV Cluster MAC Address static or dynamic


Hi,

I have a Hyper-V cluster with 9 Windows Server 2012 R2 nodes and roughly 200 VMs running 2012 R2 or 2008 R2. All VMs are connected to 2 virtual switches. At the moment I am using dynamic MAC addresses.

Which is the better approach? Should I change to static MAC addresses?
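If static addresses turn out to be the better option, I assume the per-VM change would look something like this (the VM name is a placeholder, and the VM must be powered off before its MAC can be changed):

# Pin a static MAC on one VM's network adapter (VM name is hypothetical;
# pick a MAC from a range you control, e.g. the host's Hyper-V pool).
Stop-VM -Name "TestVM01"
Set-VMNetworkAdapter -VMName "TestVM01" -StaticMacAddress "00155D010203"
Start-VM -Name "TestVM01"

# Audit current assignments across the host to spot duplicates.
Get-VM | Get-VMNetworkAdapter | Format-Table VMName, MacAddress, DynamicMacAddressEnabled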

best regards
Thomas 




Cluster Service terminated by GUM Task


I've had an issue where one of my Windows 2012 R2 Hyper-V hosts just decided to keel over and die on me.  The event which I'm seeing is as follows:

Log Name:      System
Source:        Microsoft-Windows-FailoverClustering
Date:          20.07.2016 20:39:19
Event ID:      5377
Task Category: Global Update Mgr
Level:         Error
Keywords:
User:          SYSTEM
Computer:      mgmt45.mgmt.local
Description:
An internal Cluster service operation exceeded the defined threshold of '110' seconds. The Cluster service has been terminated to recover. Service Control Manager will restart the Cluster service and the node will rejoin the cluster.
Event Xml:<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"><System><Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" /><EventID>5377</EventID><Version>0</Version><Level>2</Level><Task>6</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime="2016-07-20T18:39:19.244464800Z" /><EventRecordID>347017</EventRecordID><Correlation /><Execution ProcessID="4596" ThreadID="9184" /><Channel>System</Channel><Computer>mgmt45.mgmt.local</Computer><Security UserID="S-1-5-18" /></System><EventData><Data Name="OperationName">SynchronizeState</Data><Data Name="ThresholdTimeInSec">110</Data></EventData></Event>
I'm finding extremely little information about event 5377 on the Internet. Apart from the standard steps of checking for the latest Windows updates and rebooting, how can I prevent this from happening again? This crash took down 64 virtual machines.
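For deeper diagnosis than the event alone, the cluster debug log for the window around the crash can be pulled from every node; a sketch (the destination folder is arbitrary):

# Dump the last 30 minutes of cluster debug logs from all nodes.
Get-ClusterLog -TimeSpan 30 -Destination "C:\Temp\ClusterLogs"

Since the event's task category is Global Update Mgr, the operation that exceeded the 110-second threshold was a cluster-wide update, so the interesting entries are often on the other nodes around 20:39, not just on the node that was terminated.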

Question about Cluster 3-Nodes


Hi All

We are creating a cluster for a customer so we can set up SQL Availability Groups. My scenario is this:

Site 1 = 2 nodes

Site 2 = 1 node, but I will configure it with no vote because it is a DR site and I don't want it to become primary unless I fail over manually.

Site 3 = Witness file share

My questions are:

1) Do I need the witness in my current configuration of 3 nodes if I give Site 2 no votes?

2) Should Site 2 have only 1 node, or should it have 2 nodes, both with no votes?

Thanks - any other recommendations or tips would be greatly appreciated.
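For reference, removing the DR node's vote and checking the result would presumably look like this (the node name is a placeholder):

# Remove the DR node's quorum vote.
(Get-ClusterNode -Name "DRNode1").NodeWeight = 0

# Confirm vote assignment across the cluster.
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight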

Enable-ClusterS2D hangs at "Waiting until all physical disks are reported by clustered storage subsystem"


The target cluster is running the final RTM MSDN bits of Nano Server Datacenter, with the latest *.321.* Windows update packages, and the member nodes have a mixture of NVMe and SATA storage. The -Verbose output of Enable-ClusterS2D shows the text at the bottom of this posting, up to the hang.

The log files on the cluster members show no useful information. The CannotPoolReason for the NVMe devices shows text similar to "waiting for verification" after the hang.

I've removed the cluster several times, cleaned the systems, and re-run the cluster creation and Enable-ClusterS2D, with the same effect.

How may I further diagnose my problem?

---

VERBOSE: vms-c: 2016/10/16-16:25:56.110 Setting default fault domain awareness on clustered storage subsystem
VERBOSE: vms-c: 2016/10/16-16:25:56.751 Waiting until physical disks are claimed
VERBOSE: vms-c: 2016/10/16-16:25:59.767 Number of claimed disks on node 'VMS-1': 6/2
VERBOSE: vms-c: 2016/10/16-16:25:59.783 Number of claimed disks on node 'VMS-2': 6/2
VERBOSE: vms-c: 2016/10/16-16:25:59.798 Node 'VMS-1': Waiting until cache reaches desired state (HDD:'ReadWrite'
SSD:'WriteOnly')
VERBOSE: vms-c: 2016/10/16-16:25:59.798 SBL disks initialized in cache on node 'VMS-1': 6 (6 on all nodes)
VERBOSE: vms-c: 2016/10/16-16:25:59.814 SBL disks initialized in cache on node 'VMS-2': 6 (12 on all nodes)
VERBOSE: vms-c: 2016/10/16-16:25:59.814 Cache reached desired state on VMS-1
VERBOSE: vms-c: 2016/10/16-16:25:59.829 Node 'VMS-2': Waiting until cache reaches desired state (HDD:'ReadWrite'
SSD:'WriteOnly')
VERBOSE: vms-c: 2016/10/16-16:25:59.845 Cache reached desired state on VMS-2
VERBOSE: vms-c: 2016/10/16-16:25:59.845 Waiting until SBL disks are surfaced
VERBOSE: vms-c: 2016/10/16-16:26:03.267 Disks surfaced on node 'VMS-1': 12/12
VERBOSE: vms-c: 2016/10/16-16:26:03.298 Disks surfaced on node 'VMS-2': 12/12
VERBOSE: vms-c: 2016/10/16-16:26:06.945 Waiting until all physical disks are reported by clustered storage subsystem
VERBOSE: vms-c: 2016/10/16-16:26:10.188 Physical disks in clustered storage subsystem: 0
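One avenue for further diagnosis, as a sketch (standard Storage module cmdlets; output details vary by build): ask the clustered storage subsystem what it can see, and what CannotPoolReason each disk reports.

# What the clustered subsystem reports, and why disks won't pool.
Get-StorageSubSystem -FriendlyName "*Cluster*" | Get-PhysicalDisk |
    Format-Table FriendlyName, BusType, MediaType, CanPool, CannotPoolReason

# Compare with what each node sees locally.
Get-PhysicalDisk | Format-Table FriendlyName, BusType, MediaType, CanPool, CannotPoolReason

# On Server 2016 the subsystem can also self-diagnose.
Get-StorageSubSystem -FriendlyName "*Cluster*" | Debug-StorageSubSystem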


Storage Replica - Unable to create RG and Log Volume.


Hi, I am currently testing cluster-to-cluster Storage Replica and I get an error when creating the replication. Below is a screenshot of the error.

Additionally, I just want to check: is there any detailed deployment guide for Storage Replica other than TechNet?

Also, does Storage Replica work with Storage Spaces Direct? I am using Storage Spaces Direct to create the volumes, not SAN storage.
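For what it's worth, the intended topology can be validated end to end before creating the partnership; a sketch with placeholder computer names, data volumes and log volumes:

# Validate the source/destination pair and produce an HTML report.
Test-SRTopology -SourceComputerName "ClusterA-Node1" -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "ClusterB-Node1" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -DurationInMinutes 10 -ResultPath "C:\Temp"

On the last question, in general terms: Storage Replica works at the volume level, beneath the file system, so the replicated volume can come from Storage Spaces Direct rather than a SAN.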

Thanks & Regards,

Azuki


"Unable to successfully cleanup" Error while configuring cluster in Windows 2012 R2 server.


I am configuring a three-node Windows Server 2012 R2 failover cluster. Cluster validation completes without any errors or warnings. Please help me fix this error:

Cluster: hcledu
Nodes: clstr-1.hcltrg.com, clstr-3.hcltrg.com, clstr-2.hcltrg.com
IP Address: 192.168.10.200
Started: 10/19/2016 4:10:38 PM
Completed: 10/19/2016 4:14:16 PM
Beginning to configure the cluster hcledu.
Initializing Cluster hcledu.
Validating cluster state on node clstr-1.hcltrg.com.
Find a suitable domain controller for node clstr-1.hcltrg.com.
Searching the domain for computer object 'hcledu'.
Bind to domain controller \\HCLTRG.hcltrg.com.
Check whether the computer object hcledu for node clstr-1.hcltrg.com exists in the domain. Domain controller \\HCLTRG.hcltrg.com.
Computer object for node clstr-1.hcltrg.com exists in the domain.
Verifying computer object 'hcledu' in the domain.
Checking for account information for the computer object in the 'UserAccountControl' flag for CN=hcledu,CN=Computers,DC=hcltrg,DC=com.
Enable computer object hcledu on domain controller \\HCLTRG.hcltrg.com.
Configuring computer object 'hcledu in organizational unit CN=Computers,DC=hcltrg,DC=com' as cluster name object.
Get GUID of computer object with FQDN: CN=hcledu,CN=Computers,DC=hcltrg,DC=com
Validating installation of the Network FT Driver on node clstr-1.hcltrg.com.
Validating installation of the Cluster Disk Driver on node clstr-1.hcltrg.com.
Configuring Cluster Service on node clstr-1.hcltrg.com.
Validating installation of the Network FT Driver on node clstr-3.hcltrg.com.
Validating installation of the Cluster Disk Driver on node clstr-3.hcltrg.com.
Configuring Cluster Service on node clstr-3.hcltrg.com.
Validating installation of the Network FT Driver on node clstr-2.hcltrg.com.
Validating installation of the Cluster Disk Driver on node clstr-2.hcltrg.com.
Configuring Cluster Service on node clstr-2.hcltrg.com.
Waiting for notification that Cluster service on node clstr-1.hcltrg.com has started.
Forming cluster 'hcledu'.
Unable to successfully cleanup.
An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes.
There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes.
There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes.
An error occurred while creating the cluster.
An error occurred creating cluster 'hcledu'.

This operation returned because the timeout period expired
To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster.
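A reasonable next step, as a sketch using the node names from the report: clean up the partial configuration on every node, then re-run validation before retrying the create.

# Clean up the failed create, then re-validate all three nodes.
Clear-ClusterNode -Name "clstr-1","clstr-2","clstr-3" -Force
Test-Cluster -Node "clstr-1.hcltrg.com","clstr-2.hcltrg.com","clstr-3.hcltrg.com"

If the create still times out after "Waiting for notification that Cluster service ... has started", the cluster debug log from that node (Get-ClusterLog) is usually the next place to look.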

Unable to set NodeAndFileShareMajority quorum setting


Set-ClusterQuorum : There was an error configuring the file share witness '\\server\SharedFolder'.

Unable to save property changes for 'File Share Witness'.
    The user name or password is incorrect
At line:1 char:1
+ Set-ClusterQuorum -NodeAndFileShareMajority "\\server\SharedFolder"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Set-ClusterQuorum], ClusterCmdletException
    + FullyQualifiedErrorId : Set-ClusterQuorum,Microsoft.FailoverClusters.PowerShell.SetClusterQuorumCommand

On this SharedFolder, I have given full permissions to the cluster name and the cluster nodes.
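One thing worth checking, since "user name or password is incorrect" on a witness usually points at the cluster's own computer account rather than your user account: the cluster name object (CNO) needs access on both the share and the NTFS ACL, not just the node accounts. A hedged sketch (domain, cluster and share names are placeholders; Grant-SmbShareAccess runs on the file server):

# Grant the CNO computer account full control on the share -
# note the trailing $ on the computer account name.
Grant-SmbShareAccess -Name "SharedFolder" -AccountName "DOMAIN\MyCluster$" -AccessRight Full -Force

# Then retry, with matched quotes around the UNC path.
Set-ClusterQuorum -NodeAndFileShareMajority "\\server\SharedFolder"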

Let me know if you need any more details.

Thanks for your help in advance!!!

disappearing iSCSI Targets


I'm new to clustering.

I have two identical computers: same box, same model, each with 2 NICs, same hard drives, same memory, etc.

I installed Windows Server 2012 R2 Standard on each box, then set up identical iSCSI drives and targets on each: one 50 GB iSCSI drive for the witness and one 500 GB drive for data, identical on each box. These servers are NOT set up as virtual machines.

The iSCSI Initiator on each box finds its own iSCSI drives and those on the other box.

Computer Management > Disk Management on each box sees all four drives.

The failover cluster validation passes every single test. "Create a failover cluster from the tested hardware" is checked and run, with "Add all available storage" checked.

After the cluster is created, all iSCSI targets and their virtual drives are gone! They don't show up anywhere, except within the iSCSI Virtual Disks folder on the primordial disk.
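For what it's worth, this sounds like expected behavior: with "Add all available storage" checked, the wizard claims the disks as cluster Physical Disk resources, so they disappear from local Disk Management and surface only on the node that currently owns them. A quick way to check where they went, as a sketch:

# List the disks the cluster claimed and which node currently owns them.
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" } |
    Format-Table Name, State, OwnerGroup, OwnerNode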

Does anyone have suggestions???

Lew


Change IP Address of Cluster Only and Migration Network


Hello,

We are planning to move offices shortly and have a complete new stack of switches to configure. I have been asked to come up with an IP addressing scheme so that we can start fresh in the new offices.

I currently have a 2-node Windows 2012 R2 Failover Cluster that runs our Hyper-V farm. All the physical NICs on both hosts are teamed, and both hosts have 3 virtual adapters as follows:

  1. Cluster and Client Network (vHostSwitch) 192.168.192.0/24
  2. Live Migration Network (vEthernet - Migration) 192.168.253.0/24
  3. Cluster Only Network (vEthernet - Cluster) 192.168.254.0/24

Both the Live Migration and Cluster Only networks use a full Class C range, yet each only needs 2 IP addresses. In an ideal world (if I could start again... which I can) I'd use variable-length subnet masks for the migration and cluster networks:

  1. Live Migration Network (vEthernet - Migration) 192.168.253.0/29
  2. Cluster Only Network (vEthernet - Cluster) 192.168.253.8/29

My questions are:

  1. Can I change the IP addresses in the Failover Cluster (and, obviously, on the virtual adapters)?
  2. What would be the process and any pitfalls?
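For what it's worth, the rough shape of the change, as a sketch (interface aliases and addresses are examples for the /29 scheme above; drain each node before touching it):

# Per node: drain, re-address the vNIC, resume.
Suspend-ClusterNode -Name "Node1" -Drain
Remove-NetIPAddress -InterfaceAlias "vEthernet (Migration)" -Confirm:$false
New-NetIPAddress -InterfaceAlias "vEthernet (Migration)" -IPAddress 192.168.253.2 -PrefixLength 29
Resume-ClusterNode -Name "Node1"

# The cluster rebuilds its network list from the adapter subnets; verify with:
Get-ClusterNetwork | Format-Table Name, Address, AddressMask, Role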

Thanks
Tony

Change Name of Networks and Disk


Hi

This is probably a stupid question, but...

In my 2-node Windows 2012 R2 failover cluster, I'd like to change the names of the networks and disks to something more useful. I'd like to do something like this:

Networks:

  1. Cluster Network 1 renamed Heartbeat Network
  2. Cluster Network 2 renamed Migration Network
  3. Cluster Network 3 renamed Client Network

Disks:

  1. Cluster Disk 1 renamed Client Storage SAS
  2. Cluster Disk 2 renamed Client Storage SSD
  3. Cluster Disk 3 renamed Witness Disk

Is this possible without shutting the cluster down?
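For reference, the renames would presumably look like this in PowerShell; as far as I know, renaming networks and disk resources is a metadata-only change that is safe with the cluster online:

(Get-ClusterNetwork -Name "Cluster Network 1").Name = "Heartbeat Network"
(Get-ClusterNetwork -Name "Cluster Network 2").Name = "Migration Network"
(Get-ClusterNetwork -Name "Cluster Network 3").Name = "Client Network"

(Get-ClusterResource -Name "Cluster Disk 1").Name = "Client Storage SAS"
(Get-ClusterResource -Name "Cluster Disk 2").Name = "Client Storage SSD"
(Get-ClusterResource -Name "Cluster Disk 3").Name = "Witness Disk"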

Thanks
Tony




File Share Witness - Timeout delay


Hi,

We are planning to set up a two-node, dual-datacenter cluster as follows:

1. Node1 - France DC1

2. Node2 - France DC2

3. File Share Witness Server - Brazil DC1

The FSW server is in Brazil, following the MS best practice of placing the witness in a 3rd location. The problem is that latency issues between France <> Brazil could affect high availability. I know that if the FSW is down the services stay up and running, but let's say we are patching Node1 and rebooting it, and while doing that there is a connectivity issue and the FSW is inaccessible for a few seconds - in this case the whole service will go down.

Now, I created a simulation in Azure where I placed the two nodes and a domain controller in West Europe and the FSW in South Brazil, and created a VNet-to-VNet VPN. I set up a cluster and configured a File Share Witness. When I turn off sharing on the folder on the FSW server, the File Share Witness cluster core resource goes down in about 3 seconds!

So the question is: is there any parameter, in the registry or via PowerShell, to set the timeout for the File Share Witness cluster resource? The goal is to prevent such a fast reaction and so mitigate latency issues.
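I'm not aware of a documented FSW-specific latency timeout, but the witness resource does carry the generic restart/retry policy properties, which can at least be inspected; a sketch (resource name as it appears under Cluster Core Resources):

# Failure-policy knobs on the witness resource - generic resource
# properties, not an FSW-specific setting.
Get-ClusterResource -Name "File Share Witness" |
    Format-List Name, State, RestartAction, RestartDelay, RetryPeriodOnFailure, PendingTimeout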

Thanks.

Node Disk Management


We have a 3 TB iSCSI LUN assigned to our Exchange server in our cluster, and in Node Disk Management it shows as 2 pieces: a 2048 GB CSVFS primary partition and 1024 GB unallocated. We cannot make any changes to the unallocated space.

But the Exchange server that runs as a VM on this LUN has 3 partitions. What I am concerned about is what will happen when we exceed 2 TB on one of the partitions. Is the Node Disk Management display of 2 pieces something I can ignore? Or should I look at moving my VM off this LUN, creating a new one, verifying that it shows all 3 TB, and then moving the VM back?
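The 2048 GB partition plus untouchable unallocated space is the classic symptom of an MBR-partitioned disk, which cannot address beyond 2 TB; the LUN would need to be GPT to use all 3 TB. A quick check, as a sketch:

# MBR caps addressable space at 2 TB; check the partition style.
Get-Disk | Format-Table Number, FriendlyName, PartitionStyle,
    @{Label="SizeGB"; Expression={[math]::Round($_.Size/1GB)}}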

 

Sql Server 2014

Can we configure active/passive clustering on SQL Server 2014 without any additional licence?

Failover Cluster Manager snap-in error


Hey guys, I need help on this one as I'm fairly new to clustering.

I just built a 2-node cluster, which seems to be running OK, but when I go to Failover Cluster Manager > Roles, I get this very long and not very helpful error:

I don't really know what to do at this point other than tearing the cluster apart and starting from scratch. Maybe removing and re-adding the roles and features?
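In the meantime, the failover clustering PowerShell module talks to the cluster service directly and often still works when the MMC snap-in throws, so the roles can at least be checked from a prompt:

# Enumerate roles and resources without the snap-in.
Get-ClusterGroup | Format-Table Name, OwnerNode, State
Get-ClusterResource | Format-Table Name, OwnerGroup, State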

Any help would be appreciated. Thanks.

Failover Cluster 2012 R2 issue - Domain: Windows 2008


Hi everyone,

I am facing the issue below when creating a failover cluster.

Cluster nodes: Windows 2012 R2 Datacenter (2 servers)

Domain: Windows 2008 (forest/domain functional level: 2008)

I am using the domain administrator login and have granted all permissions, such as Create Computer Objects and Read All Properties.

The error is below for review:

Beginning to configure the cluster hypervcluster1.
Initializing Cluster hypervcluster1.
Validating cluster state on node GSEHYPERVSRVR1.xxxx.org.
Find a suitable domain controller for node GSEHYPERVSRVR1.xxxx.org.
Searching the domain for computer object 'hypervcluster1'.
Bind to domain controller \\masterdc.xxxx.org.
Check whether the computer object hypervcluster1 for node GSEHYPERVSRVR1.xxxx.org exists in the domain. Domain controller \\masterdc.xxxx.org.
Computer object for node GSEHYPERVSRVR1.xxxx.org exists in the domain.
Verifying computer object 'hypervcluster1' in the domain.
Checking for account information for the computer object in the 'UserAccountControl' flag for CN=hypervcluster1,CN=Computers,DC=xxxx,DC=org.
Enable computer object hypervcluster1 on domain controller \\masterdc.xxxx.org.
Configuring computer object 'hypervcluster1 in organizational unit CN=Computers,DC=xxxx,DC=org' as cluster name object.
Get GUID of computer object with FQDN: CN=hypervcluster1,CN=Computers,DC=xxxx,DC=org
Validating installation of the Network FT Driver on node GSEHYPERVSRVR1.xxxx.org.
Validating installation of the Cluster Disk Driver on node GSEHYPERVSRVR1.xxxx.org.
Configuring Cluster Service on node GSEHYPERVSRVR1.xxxx.org.
Waiting for notification that Cluster service on node GSEHYPERVSRVR1.xxxx.org has started.
Forming cluster 'hypervcluster1'.
Unable to successfully cleanup.
An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes.
An error occurred while creating the cluster.
An error occurred creating cluster 'hypervcluster1'.

This operation returned because the timeout period expired

To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster.
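One way to narrow this down, as a sketch (the static address is a placeholder): clean up the failed attempt, then form a one-node cluster with no storage and an explicit IP, which separates AD/CNO problems from storage and network detection. The second node can be added afterwards with Add-ClusterNode.

# Clean up the failed attempt on the node.
Clear-ClusterNode -Name "GSEHYPERVSRVR1" -Force

# Retry minimal: one node, no storage, explicit address.
New-Cluster -Name "hypervcluster1" -Node "GSEHYPERVSRVR1" -NoStorage -StaticAddress 10.0.0.50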


I have already tried the solutions I could find on Microsoft TechNet.


Validate Network Communication Error on UDP port 3343


The firewall is off on both sides, but I still get this error. Server1 is Windows Server 2016 and Server2 is Windows Server 2012 R2. I even added allow rules in the advanced firewall settings, both inbound and outbound.

    "Network interfaces Server1 - vEthernet (Live) and Server2 - vEthernet (Live) are on the same cluster network, yet address 172.16.0.1 is not reachable from 172.16.0.5 using UDP on port 3343."

Create cluster resource based on HSM login


Hi,

I have some Windows 2008 R2 clusters with SQL Server running on them. Each node in the cluster(s) is hooked up to an HSM. When a server is rebooted, the first thing we have to do is log in to the HSM, which is currently a manual operation done via a command prompt. Once the login has been verified (we get a message returned to the screen), we know we can start using the box and fail our SQL resources over to it.

Ideally, I'd like to take the human element out of it, just in case someone reboots a server and forgets to log in to the HSM before failing over to it. I would like to automate the process of logging into the HSM, but I also want to make sure that the node doesn't present itself as available to the cluster until it's logged in. I was thinking along the lines of creating a new cluster resource, e.g. Generic Script or Generic Service, based on the current manual process, and making SQL dependent on it. However, what I really want is for the node not to be available until it has done the HSM login.
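If the Generic Script route is workable, the wiring would look roughly like this; a sketch with hypothetical group, resource and script names - the script itself has to be a VBScript implementing the Online/Offline/LooksAlive/IsAlive entry points around the HSM login. Note this only gates the SQL group coming online on that node; as far as I know there is no supported way to stop a node presenting itself to the cluster until a script passes.

# Create a Generic Script resource in the SQL group (names are hypothetical).
Add-ClusterResource -Name "HSM Login" -ResourceType "Generic Script" -Group "SQL Server (MSSQLSERVER)"

# Point it at the script implementing the resource entry points.
Get-ClusterResource -Name "HSM Login" |
    Set-ClusterParameter -Name ScriptFilepath -Value "C:\Scripts\HsmLogin.vbs"

# Make the SQL Server resource depend on the HSM login.
Add-ClusterResourceDependency -Resource "SQL Server" -Provider "HSM Login"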

Any idea how I can accomplish this?

Thanks

Andrew



Adding a new Cluster Node with different memory speed?


I have a requirement to add additional nodes to an existing Hyper-V failover cluster.

Dell have quoted for the new hardware and everything is identical to the existing kit, except for the memory speed, which they are unable to match.

I've spent some time looking into this, but I can't find any clear statements from Microsoft or other third parties.

Could anyone tell me whether this would cause the Cluster Validation Wizard to fail, and/or cause any other problems with the operation of the cluster?


 

Scale Out File Server SMB redirection locking up CSVs

$
0
0

Problem: physical hosts run Hyper-V with a VHDX located on a SOFS CSV (the Hyper-V hosts are separate machines from the SOFS cluster nodes). During VM start-up, when SMB redirection occurs, or when moving a CSV with an active SMB connection between cluster nodes, the CSV locks up.

All physical hosts and VMs are Windows 2012 R2 with updates to ~July 2016
All physical hosts are Cisco C220s with the latest OS updates and 1 update behind on firmware
SOFS is a two-physical-node cluster with a SAS-connected JBOD
4 CSVs exist, all exhibiting the same issue
SOFS cluster nodes have the networks below:
Mgmt - teamed 10G - no cluster use
cluster0 - single 10G NIC - cluster only
cluster1 - single 10G NIC - cluster only
SOFS0 - single 10G NIC - cluster/client
SOFS1 - single 10G NIC - cluster/client (currently set to none for troubleshooting)
Backup - teamed 10G - no cluster use
LiveMigration - teamed 10G - no cluster use / only network for live migrations
Cluster validation runs clean
When nothing is connected to the CSV shares I can fail the CSVs and the SOFS role over without any errors
Currently each CSV is used by a single Hyper-V server and has a single VHDX in it.

HyperV host networks
SOFS0 - single 10g nic
SOFS1 - single 10g nic
Backup Team
Mgmt Team
Customer Network Team

I believe both problems are related.

Problem 1)
The CSV share is owned by SOFSA.
When I boot a VM with a secondary VHDX located on the SOFS (the OS is on a local RAID disk), checking the SMBClient logs on the Hyper-V host and the SMBServer logs on the SOFS hosts I can see:
The Hyper-V host hits SOFSB.
The Hyper-V host connects and the share is seen as an asymmetric/continuous availability transfer. Witness registration completes.
SOFSB issues a redirect to SOFSA.
The Hyper-V host gets the redirection request and establishes a connection to SOFSA (4 event log messages: SMB client reconnect, session reconnect, share reconnect and witness registration).
In the same second as the previous 4 SMB reconnect messages, but last in sequence (so the 5th message), a message is received to redirect to another cluster node.
The Hyper-V host loses the session and share during the reconnect; the SMB client is successfully moved, but there are no messages about a session or share reconnect.
After 59 seconds, SOFSA logs errors that the re-open failed (event ID 1016), client session expired.
After 60 seconds the Hyper-V host registers a request timeout due to no response from the server. The server is responding to TCP but not SMB (event ID 30809).
The Hyper-V host then immediately registers a connection to SOFSB for the share and goes through the same redirection sequence to SOFSA (which owns the share): SMB client reconnect, session reconnect, share reconnect, witness registration successful.
2 seconds later, SOFSA logs a re-open failed, the file is temporarily unavailable (event ID 1016). I can see the source/destination/share, and it matches what is occurring. The error just repeats every 5 seconds.
If I try to 'inspect' the drive from Hyper-V it times out, and on SOFSA I get a warning (event ID 30805): client lost its session - Error {Network Name Not Found} - the specified share name cannot be found, share name \\SOFSClusterName\IPC$.
From then on the errors just repeat - client established session to server, lost session to server, network name not found, server \\SOFSClusterName - with the same session ID in each connect/disconnect pair.

Now the great part:
If I go into Failover Cluster Manager and try to move the CSV to the other node, the CSV gets stuck in offline pending. After a few minutes any other CSVs owned by the same node go into offline pending and hang. I can reboot and wait 10 minutes for it to finally die and fail over, or wait 20 for the cluster service to completely die on both nodes. In the cluster logs, the SOFS node never fully releases the CSV for the move. The last messages you will see related to the volume are:
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} transitioning from 4 to 2.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} moved to state 2. Reson 7; Status 0x0.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} transitioning from 2 to 1.

Normally you see :
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} transitioning from 4 to 2.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} moved to state 2. Reson 7; Status 0x0.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} transitioning from 2 to 1.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} moved to state 1. Reson 5; Status 0x0.
Volume4; Volume target path \??\GLOBALROOT\Device\Harddisk39\ClusterPartition1; File System target path \??\GLOBALROOT\Device\Harddisk39\ClusterPartition1.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} transitioning from 1 to SetDownlevel. Local true; Flags 0x1; CountersName
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} moved to state 3. Reson 3; Status 0x0.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} transitioning from 3 to 4.
Volume {c7cdc2d5-e1f9-40c5-b36d-43523e2996f1} moved to state 4. Reson 4; Status 0x0.

The issue is consistent across all 4 CSVs I have, and I believe it has always existed. If I line the Hyper-V hosts up to initially hit the SOFS server that owns the CSV, everything boots up fine. When it doesn't, the VMs and the cluster hang and I have to go through reboots; VMs lose their drives and I have to reboot those as well. It is only when a host gets redirected to a different SOFS server that the issue comes up, which leads me to the next problem.

Problem 2)
Assume all the VMs connected to the right SOFS CSV owner on boot and everything has been running fine for days/weeks/months (yes, this has been sitting around for a while as an unresolved problem). If I try to move a CSV for SOFS maintenance purposes, the CSV hangs in offline pending. Eventually the cluster hangs and I have to spend 2 hours getting things lined up right (after I do whatever I was planning to do) so the VMs boot.

Things done/verified:
Windows firewall is off
I've turned off IPv6
Removed teaming from all nodes on the SOFS0/1 and cluster0/1 networks (they used to be a Windows team vs individual networks)
Turned off client/network access on the SOFS1 network
Turned off the CSV balancer - in hindsight this doesn't work anyway, given the redirection of CSVs due to asymmetric storage
Updated permissions on the SOFS share to include the Hyper-V hosts and the SOFS cluster nodes - didn't make any difference / never see access-denied errors

One item I see that I don't understand: on the SOFS cluster nodes, in the SMBClient/Connectivity logs, I see network connection failures to the cluster addresses:

The network connection failed.
Error: {Device Timeout}
The specified I/O operation on %hs was not completed before the time-out period expired.
Server name: fe80::98f9:c138:xxxxx%32
Server address: x.x.x.x:445
Connection type: Wsk
Guidance:
This indicates a problem with the underlying network or transport, such as with TCP/IP, and not with SMB. A firewall that blocks port 445 or 5445 can also cause this issue.

The server name is the 'Tunnel adapter Local Area Connection* 12' on the other SOFS cluster node. So SOFSA generates errors connecting to SOFSB, and SOFSB generates errors connecting to SOFSA. This was occurring both before and after the cluster0/1 network interfaces were teamed.
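In case it helps with repro, the witness registrations and active SMB sessions can be checked from the SOFS nodes; a minimal sketch (left mostly unfiltered, since the available properties vary by OS level):

# Which clients are registered with the witness service on this node?
Get-SmbWitnessClient

# Which sessions and opens is this node actually serving?
Get-SmbSession | Format-Table ClientComputerName, Dialect, NumOpens
Get-SmbOpenFile | Format-Table ClientComputerName, Path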



Thanks-








2012R2 file server cluster from NTFS to ReFS?


I am about to move some volumes from a 2008 R2 cluster to a 2012 R2 cluster. I then need to copy the same data to a new SAN, and since I am copying the data anyway I thought I should consider going from NTFS to ReFS. I am not even 100% sure whether 2012 R2 clusters support ReFS, or what the best way is to copy the data (and metadata) from NTFS to ReFS. Does anyone have any information on this?
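If ReFS does turn out to fit (worth checking first: 2012 R2 ReFS drops several NTFS features, e.g. hard links, disk quotas, and NTFS compression), robocopy is the usual way to carry data plus NTFS security metadata across; a sketch with placeholder drive letters and log path:

# Format the new SAN volume as ReFS (drive letter is a placeholder).
Format-Volume -DriveLetter F -FileSystem ReFS -NewFileSystemLabel "Data"

# Mirror the tree with data, timestamps, ACLs, owner and audit info.
robocopy E:\ F:\ /MIR /COPYALL /DCOPY:DAT /R:2 /W:5 /LOG:C:\Temp\ntfs-to-refs.log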

Thanks,

Dave


