In the news, SQL Server 2000 is now out of Mainstream Support – and I’ll show you where to find out what support options you have. In this week’s feature, I’ll explain the basic concepts of SQL Server Clustering. The web link this week shows you how to set the Page File on a 64-bit server.
News this week:
Mainstream support for SQL Server 2000 ended on April 8th of 2008 – but that doesn’t mean you don’t have any support options. You can find more details at http://tinyurl.com/t8jhr, but the upshot is that you should complete your migration off this version as soon as possible, if you haven’t already.
This week I’ll start giving you an overview of the major features you can find in SQL Server 2005. I’ll cover each feature in more depth, but we need a place to start the discussion. I’ll begin with a quick overview of the database engine and how it works, and then we’ll dive into clustering systems together.
SQL Server, at its very core, consists of two things: an engine that processes requests to change or read data, and a series of files to hold the data. The engine is composed of a set of Windows services that run on the server. Those services accept connections across various network protocols (or perhaps not, if you configure them that way), and they respond to those connections by a name. By default, that name is tied to the Windows system name where the software is installed.
When SQL Server is installed, that installation is called an Instance – a single copy of the engine software, referring to a set of files on the system that are the databases. You can install the software again on the same machine, and certain parts of the engine and all of the system database files are copied again. The first time you install SQL Server you can name the Instance, in which case you refer to the server by the Windows name, then a backslash, then the instance name you give the system. If you don’t name the instance, it is called the “default” instance, and it is known simply by the name of the Windows system where it is installed. All other installations of SQL Server on that same system must be named. Of course, installing the software multiple times still uses the same hardware, so you want to make sure you have enough system resources, such as memory and CPU, to service them all.
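To make the naming concrete, here’s a short Python sketch of how a client builds the name it connects to. The machine and instance names (SALESDB01, REPORTS) are invented purely for illustration:

```python
def server_address(windows_name, instance_name=None):
    """Build the name clients use to reach a SQL Server Instance.

    A default instance is reached by the Windows system name alone;
    a named instance appends a backslash and the instance name.
    """
    if instance_name:
        return f"{windows_name}\\{instance_name}"
    return windows_name

# The default instance is just the machine name...
print(server_address("SALESDB01"))             # SALESDB01
# ...while a named instance gets the backslash form.
print(server_address("SALESDB01", "REPORTS"))  # SALESDB01\REPORTS
```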
Those are the very basics, but they are enough to get us started with the discussion of Clustering. Microsoft Windows Servers have the ability to work together, exposing only one name to the network, even though more than one server is answering the requests from the network. Should one of the systems (called a node) lose connection to the network, another node can take over without any disruption to the resources they were sharing. This is called a failover cluster.
SQL Server can be installed on a failover cluster, and more importantly, it is cluster-aware: it uses the cluster software to protect your database applications automatically. In a cluster the database files live on storage shared between the systems, so if the engine on one node can’t answer the requests for data, the engine on another node can. Users refer to the cluster name for the SQL Server Engine, regardless of the actual Windows name of the individual system where the SQL Server Instance is installed.
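The idea can be sketched in a few lines of Python. This is a toy model, not anything from the actual cluster software – real failover involves shared storage and resource groups – but it shows the key point: clients only ever see the one cluster name, no matter which node is answering:

```python
class FailoverCluster:
    """Toy model of a failover cluster: clients address the cluster
    name, and whichever node currently owns the resources answers."""

    def __init__(self, cluster_name, nodes):
        self.cluster_name = cluster_name
        self.nodes = list(nodes)     # e.g. ["SYSTEM-A", "SYSTEM-B"]
        self.active = self.nodes[0]  # node currently owning the resources

    def handle_request(self):
        # Clients connect to the cluster name, not to a node name.
        return f"{self.cluster_name} answered by {self.active}"

    def node_failed(self, node):
        # When the active node drops off the network, a survivor takes over.
        if node == self.active:
            survivors = [n for n in self.nodes if n != node]
            if not survivors:
                raise RuntimeError("no surviving nodes")
            self.active = survivors[0]

cluster = FailoverCluster("SYSTEM-C", ["SYSTEM-A", "SYSTEM-B"])
print(cluster.handle_request())  # SYSTEM-C answered by SYSTEM-A
cluster.node_failed("SYSTEM-A")
print(cluster.handle_request())  # SYSTEM-C answered by SYSTEM-B
```

Notice that from the client’s point of view nothing changed after the failure: the request still goes to SYSTEM-C.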
This might sound a bit confusing, so let’s look at the entire process from start to finish.
To begin, you ensure that at least two hardware setups (they don’t have to be the exact same models, but it simplifies things) are on the Hardware Compatibility List (or HCL) for Windows-certified clustering systems. You share one drive between them (there are a few ways to do that), which will store a set of files that keep the systems in sync. You also want at least two network cards in each system. One of the network cards is used to talk to the users, and the other is used to talk between the servers, carrying a signal called a “heartbeat”. This heartbeat is what tells each system whether it needs to take over or not.
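As a rough illustration of the heartbeat idea, here’s a minimal Python sketch of the decision each node keeps making about its partner. The timeout value here is made up for the example; the real cluster service has its own detection intervals and settings:

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before assuming the peer is down (invented value)

def peer_is_alive(last_heartbeat, now=None, timeout=HEARTBEAT_TIMEOUT):
    """Decide whether the partner node is still up, based on when we
    last heard its heartbeat on the private network."""
    if now is None:
        now = time.monotonic()
    return (now - last_heartbeat) <= timeout

# If the peer checked in 3 seconds ago, all is well; after 10 seconds
# of silence, this node would begin taking over the shared resources.
print(peer_is_alive(last_heartbeat=100.0, now=103.0))  # True
print(peer_is_alive(last_heartbeat=100.0, now=110.0))  # False
```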
You install the Windows Server operating system on all the servers, and then you configure clustering services. There’s an entire process to do this that I won’t cover here. The point is that when you are done, you have one server we’ll call System A, another called System B, and a name that both of them answer to, which we’ll call System C. When you finish that installation, you check the system to make sure you can fail over from one system to another by connecting to a share on System C while you shut down System A.
With all of that complete, you put the installation media for SQL Server on System A, and SQL Server will automatically detect that the system is part of a cluster. You’ll be asked if you want to automatically install SQL Server on all the nodes in the cluster, and you’ll be asked a few other questions as well. After the installation, users will connect to System C, even though System A or System B will be the one that actually processes the request. Should either node go down, the other continues to assume the name of System C, and your users are none the wiser.
You can use a cluster not only to protect your systems in case of a hardware failure, but also to provide a method to apply service packs and upgrades with minimal downtime.
This has been a quick overview, and of course there’s a lot more to know about this technology. It’s available in the Standard edition of SQL Server and higher, with more nodes available the higher you go in both the operating system and SQL Server editions.
Most shops are now putting in 64-bit systems, which handle memory differently than a 32-bit server. This week’s link from Microsoft shows you how to determine the proper Page File setting for a 64-bit system running Windows Server 2003. Find it at http://tinyurl.com/5b87na.
Tip of the Week:
If you install SQL Server on Windows Server 2003 or later, SQL Server accounts can take advantage of some of the same security policies as Windows accounts. That means you can enforce password length and complexity requirements on SQL Server accounts as well as on those that use Windows Authentication.
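In SQL Server 2005 you turn this on per login with the CHECK_POLICY option of CREATE LOGIN. As a rough illustration of what a Windows-style complexity rule looks like, here’s a Python sketch. Keep in mind this is only an approximation I’ve written for the example; the actual rules SQL Server enforces come from your Windows local or domain policy, not from this code:

```python
import re

def meets_policy(password, min_length=8):
    """Approximate a Windows-style complexity check: at least
    min_length characters, drawing from at least three of the four
    character classes (lowercase, uppercase, digits, symbols)."""
    classes = [
        re.search(r"[a-z]", password),       # lowercase letters
        re.search(r"[A-Z]", password),       # uppercase letters
        re.search(r"[0-9]", password),       # digits
        re.search(r"[^a-zA-Z0-9]", password) # symbols / punctuation
    ]
    enough_variety = sum(1 for c in classes if c) >= 3
    return len(password) >= min_length and enough_variety

print(meets_policy("Str0ng!Pass"))  # True
print(meets_policy("password"))     # False - one class, no variety
```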