The “but no changes were made on the server?!” Fallacy

We hear this in the field constantly… a server begins experiencing difficulty or somehow acts “different”… and the first thing we hear is usually something along the lines of “no changes were made!”

Truth be told, there is only one time when no changes are being made on your server… when it’s turned off. And even that is questionable.

…but you haven’t even logged into the server? FYI, just because you aren’t doing something with your server doesn’t mean it isn’t doing anything. Memory is being managed… files are being created, updated, deleted… indexes are being updated… files are being transferred… files are being scanned for viruses… backups are being run… network connections are being used… policies are being updated… users are being authenticated… logs are growing… all while you were in Tahiti on your vacation.

Oh… and don’t forget that other person you gave the Administrator password to the other day? Yeah… they deleted some critical file and didn’t tell you. They also installed utilities, deleted several log files (to make more space, of course), made a critical system change, updated a registry key (without checking to find out whether the change was supported), visited 5 websites (including one that requested permission to run a program… with a virus), and installed a hotfix… which obviously made them too busy to tell you that they made some serious changes to your server.

Oh… by the way… your Active Directory administrators pushed down an updated group policy that added additional security settings on key files… and your AV solution decided to download new signatures (which decided to flag your primary application executable as a virus and block access).

The point is, your servers are constantly changing. Every second of every minute, while you’re sipping your mai tai in Tahiti, your server is working, making people happy, exactly as it has been told to. It has also been piling up all of that data you never clean up in the temp folder on the C drive… and it’s using the paging file that you’ve configured on the C drive… and it’s saved that CD ISO to your desktop (on the C drive)… so when it crashes because it can’t make the page file any bigger to accommodate the 10 more users that have made crazy demands of it… don’t blame Microsoft or the server. It was doing exactly what it was supposed to do.
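That slow creep of temp files, page-file growth, and forgotten ISOs is exactly the kind of change you can catch with a trivially simple check. As a minimal sketch (the path and the 5 GB threshold are assumptions, not a recommendation — tune them for your own volumes and workload):

```python
import shutil

def check_free_space(path, min_free_gb):
    """Return (free_gb, ok) for the volume containing `path`.

    `min_free_gb` is an assumed threshold; page-file growth, temp
    files, and ISOs saved to the desktop all eat into this headroom.
    """
    usage = shutil.disk_usage(path)
    free_gb = usage.free / (1024 ** 3)
    return free_gb, free_gb >= min_free_gb

# On Windows you would point this at "C:\\" instead of "/".
free_gb, ok = check_free_space("/", 5)
if not ok:
    print(f"Low disk space: only {free_gb:.1f} GB free")
```

Run something like this on a schedule and alert when it fails, and the server gets a chance to tell you it’s in trouble before the page file does.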

The question is, were you paying attention? Did you have monitoring? When you deployed that new web part and then configured it into your master page so that it appears on every single site in your environment… did you load test it first? Did you ensure that what works fine with 1 user also works fine with 1000 users? At the same time? Over the course of the day? When you’re updating your enterprise search indexes? When you’re unknowingly being scanned by the other enterprise search product you didn’t know was crawling your entire site?
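The “works with 1 user, dies with 1000” gap can be smoked out with even a crude concurrency test. This is only a sketch of the shape of such a test — `request_fn` stands in for whatever actually hits your endpoint, and a real load-testing tool should do the production-grade version:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, users, requests_per_user=1):
    """Fire `users` concurrent workers at `request_fn`; report latencies.

    `request_fn` is a hypothetical callable (e.g. one HTTP request to
    the page hosting your new web part). This demonstrates the idea of
    a concurrency smoke test, not a full load-testing harness.
    """
    latencies = []

    def worker():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)

    latencies.sort()
    return {
        "count": len(latencies),
        "p50": latencies[len(latencies) // 2],
        "max": latencies[-1],
    }

# What is fast for 1 user may not be for 100 at once, e.g.:
# stats = load_test(lambda: urllib.request.urlopen(url).read(), users=100)
```

Comparing the numbers at 1 user versus many, and at quiet hours versus while your search crawler is indexing, is the point — the absolute values matter less than how they change under load.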

Servers are constantly changing, as is the workload they’re trying to do and the environment they’re operating in. When they can’t keep up with what we’re asking them to do, it’s our job to keep an open mind, to understand what might have changed (even if we didn’t change it), and to understand the relationship between what failed and what might have changed.

Just because you didn’t change your server doesn’t mean it’s the same today as it was yesterday.
