I attended the SQL Server 2005 General Session this afternoon, led by David Campbell. There were so many new features discussed for SQL Server 2005 it was hard to keep up. I wrote some of them down to help me remember.
Winter Corporation does an annual survey of companies that have implemented large databases, and SQL Server features among several of the largest. At www.wintercorp.com choose the number-of-rows category and OLTP usage.
SQL Best Practices Analyzer (BPA) RTMed today and is available on the web. Tom Rizzo did demos of the BPA. You can scan a single machine or a group of machines. You can have groups of different rules, and there are over 70 rules with various customizations available. The output integrates with SQL Reporting Services, and you can view scan history there too.
The Data Platform is the architectural vision for the product: encompassing the lifecycle of data from birth to retirement, with business value throughout that lifecycle, and covering all forms of data including relational, semi-structured, and object-oriented data. This is supported by the CLR inside SQL Server, amongst other things. Our value proposition is lower total overall cost: no expensive add-ons, the world's best tools, and the lowest cost of deployment with integrated management.
10 terabytes of data are supported. SQL Server first ran on 64-bit in the labs in December 1998, and the SQL labs' largest machine is currently a 64-bit, 64-way box with 1 TB of RAM.
Partitioning is useful when you have current live data on expensive mirrored high-availability drives, but also very old data which is read-only yet must be kept around, sitting on cheap disk (as it's read-only you can always restore it from tape if it gets lost).
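The exact demo syntax wasn't shown, but the partitioning DDL looks roughly like this (the table, column, boundary values, and filegroup names here are my own made-up example):

```sql
-- Partition function: split rows into ranges by year boundary.
CREATE PARTITION FUNCTION pfOrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('2002-01-01', '2003-01-01', '2004-01-01');

-- Partition scheme: old partitions on cheap disk, current data on fast disk.
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear TO (CheapFG, CheapFG, CheapFG, FastFG);

-- Create the table on the scheme so rows land in the right filegroup.
CREATE TABLE Orders (OrderId int, OrderDate datetime)
ON psOrderYear (OrderDate);
```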
The presenter took a question in the middle of this bit about the performance of partitioned data, which he answered by talking about the query optimizer and how it can improve queries that only access one partition. Well handled.
There's a new locking technology introduced so that readers don't block writers and writers don't block readers. I think it was called snapshot isolation (consistent reads via row versioning).
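Assuming I've got the feature right and this is snapshot isolation, enabling it would be something along these lines (the database name is just an example):

```sql
-- Allow transactions to request snapshot isolation: readers see a
-- consistent version of the data instead of blocking on writers.
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session then opts in like this:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```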
There's a new mirroring technology for geographic redundancy where they continuously replay the log on a secondary machine and also continuously run database recovery, so that on failover the second machine can be available within seconds. There's also a separate machine called a witness, which can check which SQL machine is available and cause automatic failover.
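I didn't get the exact commands down, but from what I understand the mirroring setup on each partner looks roughly like this (server and database names are invented):

```sql
-- On the mirror server: point at the principal.
ALTER DATABASE Sales SET PARTNER = 'TCP://principal.corp.example.com:5022';

-- On the principal: point back at the mirror, then nominate the
-- witness that arbitrates automatic failover.
ALTER DATABASE Sales SET PARTNER = 'TCP://mirror.corp.example.com:5022';
ALTER DATABASE Sales SET WITNESS = 'TCP://witness.corp.example.com:5022';
```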
When disks go bad! There's online restore, which can be done for a specific partition without taking the rest of the database offline.
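As a rough sketch (the filegroup and backup names here are mine), restoring just one damaged filegroup while the rest of the database stays online would look something like:

```sql
-- Restore only the damaged filegroup; the rest of the database stays online.
RESTORE DATABASE Sales
    FILEGROUP = 'Archive2002'
    FROM DISK = 'D:\Backups\Sales_Archive2002.bak'
    WITH NORECOVERY;

-- Roll the log forward so the filegroup catches up, then bring it online.
RESTORE LOG Sales FROM DISK = 'D:\Backups\Sales_Log.trn' WITH RECOVERY;
```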
Eek, now he's talking about "Oh oh" moments. For those you can get a Database Snapshot, which is created instantly, is read-only, and doesn't even require a complete copy of the data. Clever: what it does is copy each data page only when it changes in the production database. All of the unchanged pages are shared on disk, which is why creation can be instantaneous.
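The syntax wasn't shown, but creating a snapshot is something like this (database name, logical file name, and path are invented):

```sql
-- The snapshot file starts sparse and nearly empty; pages are copied
-- into it only as the source database changes them (copy-on-write).
CREATE DATABASE Sales_Snap_1200
ON (NAME = Sales_Data,
    FILENAME = 'D:\Snapshots\Sales_Snap_1200.ss')
AS SNAPSHOT OF Sales;
```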
Server transparency is a feature that lets you query the running system without logging in.
Password policy from Windows group policy is replicated to SQL Server 2005. In other security news SQL 2005 has many new features off by default.
Tom Rizzo showed us the data encryption demo again that we saw in the keynote this morning. It's the Woodgrove Financial Services demo showing cell-level encryption; when you log in as an allowed person you can see the decrypted data. He drilled down and showed us the SQL required for this. Here's the encryption line (there was also a little bit of key setup):
```sql
UPDATE CreditCard
SET CreditCardNumber = EncryptByKey(KEY_GUID('PowerKey'), CreditCardNumber),
    CreditCardSecurityCode = EncryptByKey(KEY_GUID('PowerKey'), CreditCardSecurityCode)
```
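He didn't leave the key setup on screen for long, but it would be something like this (the certificate name and algorithm are my guesses; only the EncryptByKey call and 'PowerKey' name are from the actual demo):

```sql
-- Create a symmetric key protected by a certificate, then open it
-- before calling EncryptByKey.
CREATE SYMMETRIC KEY PowerKey
    WITH ALGORITHM = TRIPLE_DES
    ENCRYPTION BY CERTIFICATE PowerCert;
OPEN SYMMETRIC KEY PowerKey DECRYPTION BY CERTIFICATE PowerCert;
-- ...run the UPDATE against CreditCard...
CLOSE SYMMETRIC KEY PowerKey;
```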
CLR integration: there's a hosting layer to coordinate between the CLR, the SQL engine, and the SQL OS. This isn't about throwing out T-SQL and redoing everything in C#. The CLR allows for extension via scalar functions, table-valued functions, types, aggregates (such as a moving average), and procedures.
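Hooking a CLR type into the engine is done from T-SQL; as a sketch (assembly, class, and aggregate names invented), a CLR aggregate like that moving average would be registered something like this:

```sql
-- Register the compiled .NET assembly with the server.
CREATE ASSEMBLY StatsLib FROM 'D:\Assemblies\StatsLib.dll';

-- Expose a user-defined aggregate from it, callable like SUM or AVG.
CREATE AGGREGATE dbo.MovingAvg (@value float) RETURNS float
EXTERNAL NAME StatsLib.MovingAvg;
```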
VS2005 supports a new SQL Server 2005 project model which Tom also showed.
You can have a column of type XML in SQL 2005. You can also mark a group of stored procedures as web services for use within your Service Oriented Database Architecture (SODA). Then there's the SQL Service Broker. Its programming model is in SQL, and it allows you to store a unit of work for SQL to run; the work is queued and run as a separate transaction. You can use one SQL instance to log the work request and a second SQL instance to execute the queued work.
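The Service Broker objects are created in T-SQL too; a minimal sketch (all names are invented, and I've simplified away the message-type and contract setup):

```sql
-- A queue holds the work items; a service is the addressable endpoint.
CREATE QUEUE WorkQueue;
CREATE SERVICE WorkService ON QUEUE WorkQueue ([DEFAULT]);

-- The logging instance opens a conversation and enqueues the work.
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE WorkService TO SERVICE 'WorkService';
SEND ON CONVERSATION @h ('do the work');

-- Elsewhere (possibly a second instance), the work is dequeued and
-- executed in its own transaction.
RECEIVE TOP (1) * FROM WorkQueue;
```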
Query notifications allows you to subscribe to the results of a query.
Tom has a demo of developer enhancements, which are also available as hands-on labs. They plan a whitepaper on when to use T-SQL and when to use managed code. He used Visual Studio .NET to create a C# method and a T-SQL stored proc and timed how long each one took. He used this syntax:
```sql
CREATE ASSEMBLY SQLCLRPerf FROM '...path to the DLL...'
```
The C# one took 1,846 ms and the T-SQL one took 89,000 ms. This performance improvement is dependent on the type of work being done.
Bill Baker came up next to talk about the BI components. The key element of BI is the relational engine, e.g. partitioning. Not all of the audience stayed for this, showing the still slightly lower interest in BI. DTS for ETL has a new pipeline architecture instead of "get from here, put there". It's sort of like a query optimiser, and they say it's two orders of magnitude faster than SQL 2000 DTS.
Analysis Services is adding a Unified Dimensional Model, like a cube on steroids. This allows looking at the data either as a hierarchical model or as an attribute model. Now a cube is really just a storage optimisation, whereas in SQL 2000 it was a separate store. With 'proactive caching' you can query the relational database directly using OLAP; it will collect the data from the relational store (slow) while also starting to cache the aggregates (to be fast).
There are 6 new data mining algorithms added to the original 2 and they are talking about adding 2 text mining algorithms. Sounds interesting.
The OLAP Report now shows MS leadership in OLAP. www.olapreport.com
Reporting Services is having the crank turned again for SQL Server 2005. Bill talked about the purchase of ActiveViews for client-side reporting for end users, which will be included as part of Reporting Services. End users will be able to report on data without creating a report, and they can create reports from the ad hoc querying tool.
Everything for working with SQL Server works in Visual Studio .NET. SQL will come with the Visual Studio shell if Visual Studio isn't already installed.
At the end, here's Bill's new idea for Business Intelligence. Traditional BI is used to make a better decision as a feature of an application; it could, however, also be used to make a better application. His example is data validation: an application where a person enters their age and the number of years of education they have. You can use a segmentation algorithm to check this instead of just a simple rule, so running a clustering algorithm allows rapid checking that entered data looks regular.
There's also manageability stuff that was so fast I didn't get it down.
This posting is provided "AS IS" with no warranties, and confers no rights.