I got this comment on yesterday's post:
“It's just as easy to use gigabit instead 10/100. Further, where is this administration being done? Most likely there is some other pipe that will have to be shared between any amount of connections. And at somepoint, the throughput would be too much for the harddrives or data bus to give data.
Interesting idea, and it might have potential but, how much would that actually add? Or, what would the situation have to be for it to matter?”
If I understand the comment correctly, the question is: if you add one really fast connection, like a gigabit Ethernet adapter (presumably on a gigabit LAN with good routing), isn't that the same as adding more NICs to the system? Wouldn't the other components in the system become a bottleneck before the NIC was overrun?
The answer, of course, is "it depends". A rating on a component (like "gigabit" on an Ethernet card) does not mean you actually get that much throughput. The key is to monitor and measure, so that you know for certain where the bottlenecks are. It could very well be true that a single NIC (regardless of its speed) is sufficient for a particular server and workload.
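To make the "monitor and measure" point concrete, here is a minimal sketch of measuring actual per-NIC throughput by sampling the interface byte counters twice and taking the difference. It assumes Linux-style `/proc/net/dev` counters; on Windows you would pull the equivalent numbers from Performance Monitor's Network Interface counters instead. The function names are just illustrative.

```python
# Minimal sketch: estimate real per-NIC throughput from byte counters,
# so you can compare measured numbers against the card's rated speed.
# Assumes the Linux /proc/net/dev format (two header lines, then one
# line per interface: "name: rx_bytes ... tx_bytes ...").

def parse_rx_tx_bytes(proc_net_dev_text):
    """Return {interface: (rx_bytes, tx_bytes)} from /proc/net/dev text."""
    stats = {}
    for line in proc_net_dev_text.splitlines()[2:]:  # skip the 2 header lines
        name, _, data = line.partition(":")
        fields = data.split()
        # field 0 is received bytes, field 8 is transmitted bytes
        stats[name.strip()] = (int(fields[0]), int(fields[8]))
    return stats

def throughput_mbps(before, after, seconds):
    """Per-interface (rx, tx) throughput in megabits per second,
    given two counter samples taken `seconds` apart."""
    result = {}
    for nic, (rx0, tx0) in before.items():
        rx1, tx1 = after[nic]
        result[nic] = ((rx1 - rx0) * 8 / seconds / 1e6,
                       (tx1 - tx0) * 8 / seconds / 1e6)
    return result
```

In practice you would read `/proc/net/dev` (or the Windows counters), sleep a second, read again, and compare the result per interface against the NIC's rated speed. If a "gigabit" card is moving 80 Mb/s at peak, the NIC isn't your bottleneck.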
But adding another NIC isn't just about speed. A separate connection can be secured independently of the user traffic. You can also monitor the types of traffic with more certainty if you know that Extract, Transform and Load (ETL) jobs or backups run over another channel. And it does a better job of isolating that traffic on the server's bus.
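One way applications actually pin traffic like backups or ETL to a dedicated NIC is to bind the local end of the connection to that card's IP address before connecting. A minimal sketch (the addresses here are hypothetical, standing in for the dedicated backup NIC and the backup target):

```python
import socket

def socket_on_nic(local_ip):
    """Create a TCP socket whose local end is bound to the given
    interface address, forcing traffic out through that NIC.
    (local_ip is whatever address you assigned to the dedicated card.)"""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))  # 0 = let the OS pick an ephemeral port
    return s

# Hypothetical usage: send backup traffic out the dedicated NIC
# s = socket_on_nic("192.168.10.5")       # address of the backup NIC
# s.connect(("192.168.10.20", 445))       # backup file server
```

The same idea shows up at higher levels too: pointing a backup job or ETL connection string at a hostname that resolves to the dedicated NIC's subnet accomplishes the same isolation without code.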
So there are advantages to having another NIC, even when they don't directly affect performance. Like all Best Practices, this is a suggestion that can help your system's speed, stability and security.
And Brian – thanks for responding! Keep those cards and letters coming…