The following write-up was originally an email I sent to Allan White (SQL Server MVP) in the context of a discussion on PowerShell and some of the PowerShell community postings questioning why SQL Server needed PowerShell support at all, given the strength of T-SQL. Allan and others encouraged me to post the response. I realize there are times when the SQL Server product team appears opaque. In general this is not intentional. There are situations when we’re working on very new stuff that cannot be discussed publicly (we operate in a very competitive market), but generally we want to engage with the community and share the reasons we make certain decisions. I hope you find this posting informative and interesting. Note: I edited the original email just a little so that it makes sense without the additional back-and-forth threads.
Within Microsoft we have a concept called Common Engineering Criteria (CEC). The goal of CEC is to establish consistency across IT products (Windows Server, Exchange, SQL Server, BizTalk, System Center, etc.). Some of the CEC requirements include items like virtualization support, security requirements, System Center support, localization requirements, and installation and patching requirements. I’m sure I don’t need to go into detail on why this is a good thing for our customers.
One of the latest CEC requirements is support for PowerShell. The previous requirement was simply generic support for scripting. Imagine a world where every product used a different scripting syntax and had different constructs for error handling and procedural execution. Yuck! Just the cost to develop and maintain this would be enormous. There would be no consistency, and each team would reinvent the wheel. This might all be fine if users never used more than one product, but we know that’s not the case. Lines of responsibility are blurring, and IT professionals have to learn and use multiple products. So do developers.
While DBAs have traditionally been a specialized bunch who typically work solely with SQL Server, we see this changing. Take SharePoint, for example. A number of people came up to me at TechEd this year asking questions about SQL Server from a SharePoint perspective. These people have never written Transact-SQL and have no clue what a foreign key relationship is. They have no interest in learning T-SQL; rather, they have a business need to keep the environment up and running – that’s what they’re measured on. If both products (SharePoint and SQL Server) implement a similar scripting language, this person is instantly more comfortable – there’s automatic orientation. Yes, there are new nouns and verbs to learn, but there isn’t new syntax to learn. The fact that both products leverage the same scripting language places power in the user’s hands. They don’t have to “shell out” of one environment to make something happen in another.
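To make the “automatic orientation” point concrete, here is a minimal sketch. The server path and column names are illustrative, and it assumes the SQL Server PowerShell provider (the SQLSERVER: drive that ships with SQL Server 2008) is loaded – but the verbs and syntax an admin already knows for the file system carry straight over to a SQL Server instance:

```powershell
# Familiar file-system navigation...
Set-Location C:\Backups
Get-ChildItem

# ...and the same verb-noun commands against a SQL Server instance,
# via the SQLSERVER: provider. (Server and instance names are illustrative.)
Set-Location SQLSERVER:\SQL\MYSERVER\DEFAULT\Databases
Get-ChildItem | Select-Object Name, RecoveryModel
```

Nothing new to learn except the nouns – the navigation model, the pipeline, and the error handling are identical in both worlds.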
Transact-SQL is a good scripting language, but it’s not model driven. We will continue to invest in and evolve T-SQL. But we’ll also continue to invest in and evolve the models over DDL and DML. The model over DDL is SMO (SQL Server Management Objects); the model over DML is the EDM (Entity Data Model). We built Policy-Based Management (PBM) over SMO. This gave us instant backward compatibility with SQL2K5 and SQL2K. If we had had to develop our own model over T-SQL, we would have had to build the backward compatibility on our own. Frankly, we wouldn’t have done this, as the cost would’ve been too great. We also expose PBM in PowerShell, which opens up use cases that wouldn’t be possible without extending the T-SQL language (an expensive proposition) or forcing DBAs to write C# or VB to accomplish seemingly simple tasks.
Take a database backup as an example. As part of the backup script I may want to find a volume with more than a minimum amount of free disk space and back up to that drive. How do I do this in T-SQL? Or take a script that needs to be aware of the OS or the processor the instance is running on. How do I get that from T-SQL? I’ve had a number of requests for help building policies which we just can’t satisfy, because either we don’t have a facet to support it or T-SQL can’t get access to the information. But if PBM had PowerShell support we’d be golden. As an aside, I really wish we had added PowerShell support to PBM in the form of an ExecutePowerShellScript() function. We didn’t because of time constraints. Yes, we can always add this stuff to T-SQL, but we’ll be in a perpetual state of catch-up. That’s a frustrating place for us and our customers.
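For what it’s worth, the backup scenario above is only a few lines in PowerShell. This is a sketch, not production code – the instance name, database name, backup path, and 10 GB threshold are all illustrative – but it shows the reach across the OS and the database that T-SQL alone doesn’t have:

```powershell
# Find a local fixed disk (DriveType 3) with at least 10 GB free.
# Threshold, server, and database names below are illustrative.
$minFree = 10GB
$drive = Get-WmiObject Win32_LogicalDisk -Filter "DriveType = 3" |
    Where-Object { $_.FreeSpace -gt $minFree } |
    Sort-Object FreeSpace -Descending |
    Select-Object -First 1

if ($drive -eq $null) { throw "No volume has $minFree bytes free." }

# Back up to the chosen volume using plain T-SQL via Invoke-Sqlcmd.
$file = "$($drive.DeviceID)\Backups\AdventureWorks.bak"
Invoke-Sqlcmd -ServerInstance "MYSERVER" `
    -Query "BACKUP DATABASE AdventureWorks TO DISK = N'$file'"
```

The first half of the script is pure OS territory (WMI); the second half is pure database territory (T-SQL) – and PowerShell is the glue between them.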
Both PowerShell and our implementation of PowerShell support are v1 products. They will evolve and get better. SMO is a v2 product, and it’s evolving and getting better. In SQL11 we’ll make investments in T-SQL. Our goal in SQL Server manageability is to get out of the business of delivering one-off tools and move to a state where everything is model driven. This creates an environment that is far more consistent and predictable. It allows us to build capability once and expose it in many places. For example, we want to get to the point where all we have to do is add a new object to SMO and it’s automatically available in PowerShell, SSMS, PBM, System Center, etc. without making changes to those environments.
APIs aren’t always user friendly. The implementation may be pure from a coding and relationship perspective, but from a usability perspective it may be hard to work with. Take the table object in SMO. It takes a lot of “work” to construct a new table in SMO; it’s far easier in T-SQL. But if I don’t know T-SQL and I do know PowerShell, wouldn’t a create-table cmdlet be valuable to me? The same goes for backing up a database. This is where the power of models and abstractions is realized. After all, when was the last time you wrote assembly code?
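To illustrate what that SMO “work” looks like, here is a sketch of building a two-column table through SMO from PowerShell (the server, database, and table names are illustrative, and it assumes the SMO assemblies are loaded):

```powershell
# Constructing a table through SMO: every column, type, and property
# is built up object by object. (MYSERVER, AdventureWorks, and Widgets
# are illustrative names.)
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$db     = $server.Databases["AdventureWorks"]

$table = New-Object Microsoft.SqlServer.Management.Smo.Table($db, "Widgets")
$id    = New-Object Microsoft.SqlServer.Management.Smo.Column($table, "Id",
             [Microsoft.SqlServer.Management.Smo.DataType]::Int)
$id.Nullable = $false
$name  = New-Object Microsoft.SqlServer.Management.Smo.Column($table, "Name",
             [Microsoft.SqlServer.Management.Smo.DataType]::NVarChar(50))
$table.Columns.Add($id)
$table.Columns.Add($name)
$table.Create()   # versus a single CREATE TABLE statement in T-SQL
```

Compare that to one line of DDL. The object model is correct and complete, but it’s exactly the kind of surface a hypothetical create-table cmdlet would abstract away.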