Top 5 OLTP performance improvements you would like to see in the next version of SQL Server


Hello Everyone,


We are currently looking at top performance issues for various workloads and how we can improve them in the next version of SQL Server. We are collecting feedback from various customer sources, and I would like to extend an invitation to readers here as well: tell us the top 5 OLTP performance improvements you would like to see in SQL Server.


Please bear the following in mind:


1. Briefly describe the OLTP workload, scenario, or application. The more details you can provide, the better.


2. Prioritize the list of performance improvements in order of importance or impact to your workload. You can also just highlight the top 5 performance problems you hit when running your OLTP workload on SQL Server.


3. Feel free to suggest any feature(s) that would help improve the performance of the particular OLTP workload, scenario, or application.


I will consolidate the feedback and post a summary version of it as a link in this post.


Thanks
Umachandar

Comments (31)

  1. Partitioning and load balancing (I’ve written to Lubor Kollar about it)

  2. Are you a Datamart Builder with a few bones to pick? Are your cubes too slow? Ok, so I can’t do anything to…

  3. Floyd says:

    To compete in the grid computing space with Oracle (RAC)

  4. sqletips says:

    Thanks for your comments Stefan. I will ping Lubor and get your feedback.



    Umachandar

  5. Ivan Arjentinski says:

    Better support for those of us trying to use static filtering:

    WHERE (col1 = @col1 OR @col1 IS NULL)

    AND (col2 LIKE @col2 OR @col2 IS NULL)

    AND (col3 > @col3 OR @col3 IS NULL)

    Another scenario is when the filters are stored in a filter table (this is because the filters actually arrive as XML to SS2005):

    WHERE (col1 = ISNULL((SELECT Filter FROM Filters WHERE colname = 'col1'), col1))

    AND ….
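
    A common workaround for such catch-all predicates today is either to recompile per execution, so the optimizer sees the actual parameter values, or to build only the needed predicates dynamically. This is a sketch only; the table, column, and parameter names are illustrative, not taken from the comment's schema:

    ```sql
    -- Option 1: fresh plan per call via the OPTION (RECOMPILE) query hint.
    SELECT OrderID, CustomerID
    FROM dbo.Orders
    WHERE (CustomerID = @CustomerID OR @CustomerID IS NULL)
      AND (OrderDate >= @FromDate OR @FromDate IS NULL)
    OPTION (RECOMPILE);

    -- Option 2: build only the predicates that apply, so each filter
    -- combination gets its own cached plan via sp_executesql.
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT OrderID, CustomerID FROM dbo.Orders WHERE 1 = 1';
    IF @CustomerID IS NOT NULL SET @sql = @sql + N' AND CustomerID = @CustomerID';
    IF @FromDate IS NOT NULL SET @sql = @sql + N' AND OrderDate >= @FromDate';
    EXEC sp_executesql @sql,
        N'@CustomerID int, @FromDate datetime',
        @CustomerID = @CustomerID, @FromDate = @FromDate;
    ```

    Neither option is free (recompile cost per call, or plan-cache growth), which is why better built-in support is being requested here.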

  6. If you use SQL Server and have a great suggestion for performance improvements to the engine then post…


  8. Dean says:

    Would it be possible to skip (unnecessary) checking of a DRI constraint in an UPDATE statement if the FK column’s value has not changed?

  9. This may seem a very odd request at first so I will give you some background to start with.

    We have a build server that first builds all our C# code and then runs all our tests; so far so good.   Part of the build process is to run all the SQL scripts that create database tables etc., then run a lot of unit tests that talk to the database.   Each of our tests runs within a database transaction, and we abort the transaction after the test so as not to change the state of the database.

    Therefore I would like a type of transaction that is VERY fast to run and abort. For example, can I tell SQL Server at the start that I will never commit the transaction, so that it can run faster?   Likewise, since I don’t care about anything in the test database, can I tell SQL Server to run faster and not worry about being able to recover after a reboot, etc.?

    (After starting the transaction, we delete all data from all tables before putting in the data that the given test needs.)

    Ian Ringrose

    http://www.ringrose.name

    email address on website
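
    For what it's worth, the closest existing knobs are the SIMPLE recovery model (to minimize log-management overhead for a throwaway database) plus the explicit begin/rollback pattern described above. A minimal sketch, with an illustrative database and table name:

    ```sql
    -- One-time setup: TestDb is an illustrative name for the throwaway database.
    ALTER DATABASE TestDb SET RECOVERY SIMPLE;

    -- Per-test pattern: everything inside the transaction is undone by the rollback.
    BEGIN TRANSACTION;
        DELETE FROM dbo.SomeTable;                      -- reset state
        INSERT INTO dbo.SomeTable (Col1) VALUES (1);    -- load fixture data
        -- ... run the queries under test here ...
    ROLLBACK TRANSACTION;                               -- database is unchanged
    ```

    Note this does not remove logging: SQL Server still writes undo information so the rollback can happen, which is exactly the overhead being asked about here.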

  10. EHO says:

    please fix the SQL 2000 problem 469591

    ("When using sp_cursoropen with an Order By Clause we Generate an Inefficient plan ")

    if fixing this breaks existing apps, please consider a new cursor type/option or API function

  11. Vitaliy says:

    I have a problem in SQL Server 2005. In some cases SQL Server produces an execution plan for a complex query (8 joins of views, some of which contain a couple of joins themselves) that does not include a worktable creation in tempdb. As a result, the query execution time increases from about 5 seconds to about 4 minutes. All necessary indexes are created. Please add a hint that can force worktable creation. Thanks.

  12. Kristen says:

    I’d like to speed up developing and debugging – so slightly off topic, sorry!

    I would like to see some sort of “lint” warnings, e.g.

    If an implicit cast is used report it as a warning – e.g. by setting an optional higher warning level (like a C compiler has)

    If NO ORDER BY is specified, return the data in random order, so that missing ORDER BY clauses get picked up in testing (or provide a warning as above)

    Would separate DATE and TIME data types be considered a performance improvement? Fewer bytes in the record, of course, and less need for trimming the time off the date, and so on.

    I suppose the much heralded UpSert statement is out of the question?

  13. Leonardo says:

    SQL Server 2005 unique indexes should allow multiple NULL values, enforcing uniqueness only for non-NULL values.

    create table a ( a int null)

    create unique index ix_a on a(a)

    insert into a values (0)

    insert into a values (null)

    insert into a values (null)
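
    For reference, a filtered unique index (a feature added in SQL Server 2008) can express exactly this behavior; a sketch using the table above:

    ```sql
    CREATE TABLE dbo.a (a int NULL);

    -- Uniqueness is enforced only over non-NULL values; NULL rows are
    -- excluded from the index and can therefore repeat.
    CREATE UNIQUE INDEX ix_a ON dbo.a(a) WHERE a IS NOT NULL;

    INSERT INTO dbo.a VALUES (0);
    INSERT INTO dbo.a VALUES (NULL);
    INSERT INTO dbo.a VALUES (NULL);   -- succeeds
    ```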

  14. susie dba says:

    1) STABILITY AND TESTING

    2) the DESCRIBE keyword, like in Oracle.

    3) and of course, the LIMIT keyword, like in MySQL.

    4) ability to consume OLAP data in the relational world more easily

    5) simple data entry forms and reports, like Access Data Projects, but tested and bug-free: fast, stable, dependable

  15. aaron kempf says:

    a) better, bug-free GUI design tools

    b) optimization tools for SQL statements

    c) better wizards for building tables; think ‘table wizard’ from MDB

    d) auto-configure disk space

    e) easier, zero-configuration email sending; allow it to go through Hotmail or Live Mail, or something external, so that your local Exchange admin CAN’T lock you out.

  16. mr sql authentication says:

    fix sql authentication; it’s still a security concern

    make it secure by default

    two-thirds of the companies i’ve been at over the past decade use SQL authentication for SOMETHING.

    make it easier to conceptualize AD security tokens: make something easily visible in a GUI that would allow you to say ‘I am user TOKEN#1221445453; let me KEEP this token in a table, so that I can then use it to query this table a week later.’

  17. harlan grove says:

    better and more statistical functions available right out of the box

  18. dbahooker@hotmail.com says:

    i wish that every database in the world had a table called C that listed characters in ANSI and Unicode format, so it would be easy to display the letters A, B, C, D, F without writing our own table; so that it would be easy to filter our phone lists for people whose names start with the letter S; so that we could filter a list of cities for cities that start with the letter M.

    I wish that every database had a couple of date (dimension) tables, and that they were called the same thing in every database in the world, for translating between a date and an integer and a month and a year, etc., and for displaying a list of every Tuesday between now and Christmas.

    I wish that every database in the world had a table called N where I could easily say ‘between 1 and 37’. Literally just a table called N that lists numbers from, say, one to 32,000, so that if I ever need to display a list of numbers in my query, it’s easy to say ‘between 1 and 1700’; I would just write a simple SQL statement and be done already.

    seriously though.

    I wish that I could ‘subscribe’ to an RSS table that would update once a month or so, for ZIP code updates, rather than having a DTS package run once a month; make it built into SQL Server.

    right-click LINK to RSS source and set a schedule.

    EVERYBODY DOES THESE THINGS DIFFERENTLY

    EVERYBODY DOES THESE THINGS DIFFERENTLY

    having a standardized set of tools to do these sorts of things should make things easier for developers throughout the world.
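
    The numbers table N described above can be built once per database; a sketch (table and column names illustrative):

    ```sql
    -- One-time setup of a numbers table with values 1..32000.
    CREATE TABLE dbo.N (n int NOT NULL PRIMARY KEY);

    INSERT INTO dbo.N (n)
    SELECT TOP (32000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;

    -- Usage: a list of numbers between 1 and 1700.
    SELECT n FROM dbo.N WHERE n BETWEEN 1 AND 1700;
    ```

    The cross join of two catalog views is just a convenient row source; any table with enough rows works.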

  19. aaron kempf says:

    select * {like '%id'} from myTable

    this would return all of the columns from my table that end with the letters ‘id’

    select * {like '%id' abc} from myTable

    this would return all of the columns from my table that end with the letters ‘id’, and it would put the columns in alphabetical order

  20. Renis Cerga says:

    Improve user-defined aggregates by allowing a scan order to be defined. I have stored procedures and functions using cursors that can’t be changed to aggregates because there is no scan-order guarantee in aggregates.

  21. Louis Nguyen says:

    One is to have something similar to Oracle RAC, where multiple servers can work on the same data files.

    Two is to leapfrog Oracle by developing "two-tier" replication as described by Jim Gray.  http://research.microsoft.com/~gray/replicas.doc

  22. joeydj says:

    i need a data flow destination in SSIS that will send unnecessary data off into space; i mean a destination that eats the data, and it’s gone


  24. sqletips says:

    General note. Please do not post general feature requests. We are looking specifically for performance issues only.

    And if you think that a specific feature will help improve performance then please comment briefly about that scenario.

    Thanks

    Umachandar

  25. bouchenafa says:

    Umachandar,

    Sometimes one needs to bulk insert, update, or delete a huge amount of data and doesn’t necessarily want to waste time logging all these operations.

    It would be nice to have an option to do it like the one we have with SELECT INTO, BULK INSERT, or the BCP tool.

    Med

  26. bouchenafa says:

    Umachandar,

    Something that would make my life easier is a command that automatically INSERTs or UPDATEs rows in a table.

    Med

  27. imtu_174 says:

    I would like to see parallel processing when the query uses user-defined functions.

    For example

    We have the following query that we were trying to execute, and it performs very poorly:

    SELECT ITEM_ID,

    PART_NO,

    PART_TITLE,

    dbo.SFDB_NVL_VARCHAR(DRAWING_NO_DISP, 'N/A') AS DRAWING_NO,

    dbo.SFDB_NVL_VARCHAR(DRAWING_CHG_DISP, 'N/A') AS DRAWING_CHG,

    DRAWING_NO_DISP,

    DRAWING_CHG_DISP,

    REV_SOURCE,

    ITEM_ID AS ASSY_ITEM_ID,

    dbo.SFDB_NVL_VARCHAR(DRAWING_NO_DISP, 'N/A') AS ASSY_DRAWING_NO,

    SFMFG.SFDB_NVL_VARCHAR(DRAWING_CHG_DISP, 'N/A') AS ASSY_DRAWING_CHG

    FROM SFSQA_CHAD_PART_DSPTCH_DSP_SEL

    ORDER BY PART_NO

    The function SFDB_NVL_VARCHAR is nothing but a replica of Oracle’s NVL function.

    CREATE

    FUNCTION [dbo].[SFDB_NVL_VARCHAR]

    (

    @VI_SOURCE VARCHAR(4000),

    @VI_VALU_IF_NULL VARCHAR(4000)

    )

    RETURNS VARCHAR(4000)

    AS

    BEGIN

    IF ISNULL(@VI_SOURCE, '') = ''

    RETURN @VI_VALU_IF_NULL

    RETURN @VI_SOURCE

    END

    Now, if the same query is rewritten to remove the function and use a CASE expression instead, the performance improves significantly:

    SELECT ITEM_ID,

    PART_NO,

    PART_TITLE,

    CASE WHEN DRAWING_NO_DISP IS NULL THEN 'N/A'

    WHEN DRAWING_NO_DISP = '' THEN 'N/A'

    ELSE DRAWING_NO_DISP

    END DRAWING_NO,

    CASE WHEN DRAWING_CHG_DISP IS NULL THEN 'N/A'

    WHEN DRAWING_CHG_DISP = '' THEN 'N/A'

    ELSE DRAWING_CHG_DISP

    END DRAWING_CHG,

    DRAWING_NO_DISP,

    DRAWING_CHG_DISP,

    REV_SOURCE,

    ITEM_ID AS ASSY_ITEM_ID,

    CASE WHEN DRAWING_NO_DISP IS NULL THEN 'N/A'

    WHEN DRAWING_NO_DISP = '' THEN 'N/A'

    ELSE DRAWING_NO_DISP

    END ASSY_DRAWING_NO,

    CASE WHEN DRAWING_CHG_DISP IS NULL THEN 'N/A'

    WHEN DRAWING_CHG_DISP = '' THEN 'N/A'

    ELSE DRAWING_CHG_DISP

    END ASSY_DRAWING_CHG

    FROM SFSQA_CHAD_PART_DSPTCH_DSP_SEL

    ORDER BY PART_NO

    Now, the execution plan in both cases shows that the cost of executing the function is 0%, which is weird. How can a function as simple as the one I mentioned perform so badly?

    Regards

    Imtiaz

  28. binarymechanic says:

    Stop flushing the cache server-wide for events that are specific to a database.  

    For example, detaching or dropping a production database (and other unexciting events) currently causes the cache-drop event seen in a trace as SP:CacheRemove with TextData "2 – Proc Cache Flush". Dropping a database in the middle of the day should not have the critical performance impact it currently does.

    Thanks,

    Robert Towne

  29. baxterg says:

    An unqualified DELETE (a delete without a WHERE clause) should be processed the same as a TRUNCATE TABLE when the table has no foreign keys referring to it and no delete triggers.

    Why bother going through the row-by-row delete, with all its logging overhead, when it could just deallocate the pages like TRUNCATE does?

  30. kevindockerty says:

    I would like to be able to delete large chunks of data without having it logged from time to time, thereby speeding up the process. Presently SQL Server will log every row that is deleted. I know TRUNCATE is an option, and that it won’t log individual rows, but sometimes I don’t want to delete everything in the table.

    Perhaps a hint of some kind ie..

    DELETE FROM tblData WITH (NOLOG)

    WHERE datatype = 'bad data'
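
    Until such a hint exists, a common mitigation (a sketch reusing the names from the comment above) is to delete in batches, so each transaction stays small and, under the SIMPLE recovery model, log space can be reused between batches:

    ```sql
    -- Delete in chunks of 10,000 rows instead of one huge logged transaction.
    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000) FROM tblData
        WHERE datatype = 'bad data';

        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
    END
    ```

    Every row is still logged, but the log never has to hold the whole delete at once.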

  31. BobChauvin says:

    I would love to see a nolog hint for delete, as Kevin Dockerty suggested.
