We Are Right! Or are we? Two design issues come up for debate (posted by Paul)

Two random debates that have come up on our team lately...


The Great Index Debate


While fixing some bugs, a dev on our team found that some pages of the app were loading slowly. Digging deeper, he noticed that the SQL calls were using a non-clustered index on a particular table. Since the primary key was queried far less often than certain other columns, he moved the clustered index from the primary key to those frequently queried columns. This yielded some nice performance improvements on those pages.

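To make that concrete, here's a rough T-SQL sketch of the change, with made-up table, column, and index names. A table can have only one clustered index, so the PK has to give it up first (this also assumes no foreign keys reference the PK):

    -- Hypothetical names throughout. Rebuild the PK as non-clustered...
    ALTER TABLE Orders DROP CONSTRAINT PK_Orders;
    ALTER TABLE Orders ADD CONSTRAINT PK_Orders
        PRIMARY KEY NONCLUSTERED (OrderID);

    -- ...then put the clustered index on the columns the slow pages actually filter and sort by.
    CREATE CLUSTERED INDEX IX_Orders_Customer_Date
        ON Orders (CustomerID, OrderDate);

The rows are now physically ordered by the frequently queried columns, which is where the perf win comes from.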

Sounds like a plan, right? However, this sparked a bit of controversy during code review. While there is no rule stating that the primary key must use a clustered index, in most databases that is the case, so moving the clustered index to something else puts the table in a non-standard configuration.


Arguments against moving the index:

  1. It’s a non-standard DB design that now has to be documented somewhere; otherwise the next developer who works on this app may waste time asking why it’s set up that way, or may inadvertently move the index back to the PK, thinking the move was a previous dev’s error.
  2. It’s possible that another page or sproc created in the future will frequently query the PK, forcing us to move the clustered index back to the PK and slowing these pages down yet again.
  3. Moving the clustered index off the PK should be a last resort; instead, we should revisit the pages in question and figure out how they, or the sprocs they call for data retrieval, can be reengineered for better performance using standard DB indexes.


Arguments for moving the index:

  1. Dev confusion should not be an issue. If you properly document your database code and have good change management, then there is no risk whatsoever in having a “non-standard” clustered index assignment. Yes, it’s tragic that most devs who are otherwise fastidious about documenting and source-controlling their client and middle tiers are amazingly lax about applying the same management to their data tiers, but that’s a dev issue, not a technical issue, and they should be using best practices.
  2. If needs necessitate moving the clustered index back to the PK at a later date, then so be it. Other changes will be needed then anyway, and it’s possible it will never happen in the app’s lifetime. In the meantime, it’s a simple way of getting great perf improvements for minimal dev time.
  3. See points 1 & 2 above!

And so the debate raged on, with people having strong feelings on both sides. Have an opinion? Feel free to add your own comments. :)



Surprising ASHX Developments


Unless you’ve been living under a rock, you’ve heard of the AJAX programming model. I guess history repeats itself: as an intranet application developer who can presume IE only, I remember doing this with RDS back in 1998. Then SQLXML came out, and since RDS had a bunch of security issues, we simply switched to SQLXML for data access from the client.


ASPX ushered in the view that everything should be done via server-side code, and AJAX-style calls and JS client validation fell out of favor. Now we’ve come full circle, and most people are using ASMX for their client-side data calls.


But lo and behold, there is another way to do this that lets you just toss out a URL (like SQLXML) while being more flexible (like ASMX): you can create an ASHX handler. Simply:

  1. Implement the IHttpHandler interface on your class,
  2. Write your logic in the ProcessRequest method, which receives the current HttpContext, and
  3. Set Response.ContentType to "text/xml" (or whatever you’re returning).

And you’re off and running.

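Here’s a minimal sketch of such a handler following the three steps above; the file, class, and XML element names are mine, purely for illustration. Save it as, say, Time.ashx in your site:

    <%@ WebHandler Language="C#" Class="TimeHandler" %>

    using System;
    using System.Web;

    public class TimeHandler : IHttpHandler
    {
        // ASP.NET calls this with the current HttpContext; there is no page lifecycle.
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/xml";
            context.Response.Write("<result><serverTime>" +
                DateTime.Now.ToString("s") + "</serverTime></result>");
        }

        // True lets ASP.NET reuse one instance across requests; fine for a stateless handler.
        public bool IsReusable
        {
            get { return true; }
        }
    }

Requesting /Time.ashx then returns exactly the XML you wrote and nothing else.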

Now you might be saying to yourself: Self, why do I need an ASHX when I can do the exact same thing with an ASPX page? In truth, there isn’t much of a reason. But one thing I do like about it is that there is no .aspx file to worry about keeping around.


Then why not use ASMX like everyone else? With one ASMX you can support multiple methods, and they’re just as easy to program against from server- or client-side code. That’s actually a pretty good argument, but my experience has been that if you have just one thing you want to accomplish and you need it to be really fast (like a combo box where you might have 100,000 possible items, but you’re only retrieving the top 50 or so on a keyup event), an ASHX is a good option to try.


The one big disadvantage I’ve found with ASHX is that the code is not compiled and hidden when you deploy your website. Normally all the code-behind is compiled into a .dll (or, in Whidbey, many .dlls), but the ASHX is not. If you look at the file on your production site, you can read the source code.


So far the only time we’ve implemented ASHX is when we have a combo box whose data is too big to retrieve all at once, e.g. an address book with 100,000 records. On every keyup we send a new request to the database via ASHX to get the top x records (say 50). This makes the lookup very fast, and to the user it seems as if all 100,000+ records are on their local machine.

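Here’s a sketch of that lookup, with everything hypothetical: a Contacts table with a DisplayName column, a connection string named AddressBook, and the typed prefix arriving in a q query-string parameter:

    <%@ WebHandler Language="C#" Class="AddressLookupHandler" %>

    using System.Configuration;
    using System.Data.SqlClient;
    using System.Web;

    public class AddressLookupHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string prefix = context.Request.QueryString["q"] ?? "";
            context.Response.ContentType = "text/xml";
            context.Response.Write("<matches>");

            // Hypothetical schema; the parameter also keeps us safe from SQL injection.
            using (SqlConnection conn = new SqlConnection(
                ConfigurationManager.ConnectionStrings["AddressBook"].ConnectionString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT TOP 50 DisplayName FROM Contacts " +
                "WHERE DisplayName LIKE @prefix + '%' ORDER BY DisplayName", conn))
            {
                cmd.Parameters.AddWithValue("@prefix", prefix);
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        context.Response.Write("<name>" +
                            HttpUtility.HtmlEncode(reader.GetString(0)) + "</name>");
                    }
                }
            }

            context.Response.Write("</matches>");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }

On each keyup the client just requests AddressLookup.ashx?q=<whatever has been typed> and rebinds the combo box from the 50 names that come back.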

But like the index issue, we now have a debate on whether or not ASHX is really useful, whether we should use it, and if so, why.


All this makes me curious: given that in most cases there are other ways to achieve the same effect, what are other people using ASHX for?


Paul