New File Share Witness Feature in Windows Server 2019

This blog discusses a new feature in the upcoming release of Windows Server 2019.  Windows Insiders currently receive preview builds of Windows Server 2019.  We urge you to become an Insider and play a part in making Windows Server 2019 the best it can be.  To do so, go to this link and sign up.

One of the quorum models for Failover Clustering is the ability to use a file share as a witness resource.  As a recap, the File Share Witness is given a vote in the Cluster when needed and can act as a tie breaker if there is ever a split between nodes (mainly seen in multi-site scenarios).

I don’t want to go through the full list of requirements, but I do want to focus on one in particular.

  • The Windows Server holding the file share must be domain-joined and a part of the same forest.

The reason for this is that Failover Clustering utilizes Kerberos for the Cluster Name Object (CNO) to connect to and authenticate with the share.  Therefore, the share must reside on a domain member in the same Active Directory forest.

There are scenarios where this is not possible, including:

  1. No or extremely poor Internet access because of a remote location, so a Cloud Witness cannot be used
  2. No shared drives available for a disk witness.  This could be a Storage Spaces Direct hyper-converged configuration, a SQL Server Always On Availability Group (AG), an Exchange Database Availability Group (DAG), etc., none of which utilize shared disks
  3. No domain controller connection available because the cluster has been dropped behind a DMZ
  4. A workgroup or cross-domain cluster where there is no Active Directory CNO

We have had a lot of requests over the years for a way around these scenarios, and there wasn’t a good story for it.  Well, I am here to tell you we listened, and we produced something better than a workaround.

In comes Windows Server 2019 and the new File Share Witness feature to the rescue.

We can now create a File Share Witness that does not utilize the CNO and instead simply uses a local user account on the server hosting the FSW.

This means NO Kerberos, NO domain controller, NO certificates, and NO Cluster Name Object needed.  While we are at it, NO account needed on the cluster nodes.  Oh my!!

The way it works is that on the Windows Server where you wish to place the FSW, you create a local (non-administrative) user account, give that local account full rights to the share, and then connect the cluster to the share.

As an example, say I have a server called SERVER and a share called SHARE that I want to utilize as the File Share Witness.  Creating this type of File Share Witness can only be done through PowerShell.  The steps for setting it up are:

  1. Log on to SERVER and create a local user account (i.e. FSW-ACCT)
  2. Create a folder on SERVER and share it out
  3. Give the local user account (FSW-ACCT) full rights to the share
  4. Log in to one of your cluster nodes and run the PowerShell command:

Set-ClusterQuorum -FileShareWitness \\SERVER\SHARE -Credential $(Get-Credential)

  5. You will be prompted for an account and password, for which you should enter SERVER\FSW-ACCT and its password.
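If you prefer to script it, steps 1 through 3 can also be done with PowerShell on SERVER.  This is just a sketch: the account name (FSW-ACCT) and share name (SHARE) follow the example above, while the folder path (C:\FSW) is an assumption you can change to suit your server.

# Step 1: create the local, non-administrative user account (assumed path/names per the example above)
New-LocalUser -Name "FSW-ACCT" -Password (Read-Host -AsSecureString "Enter password")

# Step 2: create a folder and share it out, granting the account full rights on the share
New-Item -Path "C:\FSW" -ItemType Directory
New-SmbShare -Name "SHARE" -Path "C:\FSW" -FullAccess "SERVER\FSW-ACCT"

# Step 3: give the account full NTFS rights to the folder as well
icacls "C:\FSW" /grant "FSW-ACCT:(OI)(CI)F"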

Voila!!  You are done, and we have just taken care of all the above scenarios.  The cluster keeps the username and password encrypted and not accessible to anyone.

For those scenarios where an additional server is not available, how about using a USB drive connected to a router?  Yes, we have that capability, and it is just as simple to set up as it is on a server.

Simply plug your USB drive into the port on the router and get into your router’s interface.  There, you can set up your share name, username, and password for access.  Use the PowerShell command above, pointing it at the router and share, and you are good to go.  To answer your next question, this works with SMB 2.0 and above.  SMB 3.0 is not required for this witness type.
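Whether the share lives on a server or on a router, you can check the result from a cluster node afterward.  As a quick sketch, the FailoverClusters module cmdlets below show the configured quorum and the share path the witness points at (the resource name "File Share Witness" is the default; adjust if yours differs).

# Show the current quorum configuration, including the witness resource
Get-ClusterQuorum

# Show the share path the File Share Witness resource is using
Get-ClusterResource "File Share Witness" | Get-ClusterParameter SharePath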

Please try out this new feature and provide feedback through the Feedback Hub app.

Thanks,
John Marlin
Senior Program Manager
High Availability and Storage