My database is 10GB, how big should my log be? … I don't know. :)
That’s not enough information to decide. You might find the best answer for your particular system is 10MB or 10GB or 100GB. It really depends on the amount of activity in the database, whether you tend to have transactions that prevent truncation of the log, and how frequently you do backups. The actual data size is pretty much irrelevant.
Say you have a database with a single table containing 1,000 rows of data. Each row is updated several times per second, and you take transaction log backups once a day. In that case the log will be much larger than the data. On the other hand, picture a 100GB database that is mostly read-only, where you take transaction log backups once every 15 minutes. The log for that system might be quite small, perhaps only a few MB.
Expecting to be able to determine the log size from the data size is unrealistic unless your databases all follow the same basic pattern. Instead, use a test system to look at how much log space is used at average and peak times for various transaction log backup frequencies, and then pre-size your log file with some room to spare. Do NOT rely on autogrow outside of a testing period where you are trying to size the log, as you can end up with “too many” VLFs, short- and long-term performance hits, and disk-level fragmentation.
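A minimal sketch of that measurement-then-pre-size approach, using the built-in DMV `sys.dm_db_log_space_usage` (SQL Server 2012+; `DBCC SQLPERF(LOGSPACE)` covers all databases on older versions). The database name, log file name, and 8GB target below are placeholders; substitute the peak usage you observed plus headroom:

```sql
-- On the test system, sample log usage at average and peak load.
-- Reports for the current database only.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Then pre-size the log in one operation with room to spare, rather
-- than letting autogrow reach the same size in many small increments.
-- 'MyDatabase' and 'MyDatabase_log' are hypothetical names.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB);
```

Growing the file once, deliberately, keeps the VLF count reasonable and avoids the disk-level fragmentation that many small autogrow events can cause.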
Factors That Can Delay Log Truncation
Note: Truncating/clearing the log is NOT the same as shrinking the log. Truncating/clearing is normal activity that marks existing space as available for reuse; shrinking changes the size of the actual file on disk and is something you only do in very rare, specific, planned, controlled, non-automated circumstances.
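When the log keeps growing, you can ask SQL Server directly why truncation is being delayed. A quick diagnostic sketch using the `log_reuse_wait_desc` column of `sys.databases`, which reports values such as NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, and AVAILABILITY_REPLICA:

```sql
-- Why can't the log be truncated right now?
-- LOG_BACKUP means a log backup is needed; ACTIVE_TRANSACTION means
-- an open transaction is pinning the log; NOTHING means no wait.
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases;
```

The value shown is the reason observed at the last truncation attempt, so re-check it after taking a log backup or committing long-running transactions.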