August 2008 Archives

Basic Relational Theory


When I got started with SQL Server (when Microsoft first picked it up), I had worked on several database systems and had even written a couple. However, I had little formal "relational" theory education--not until I took an Extended Relational Analysis (ERA) course did the job of designing efficient databases get a lot easier. Frankly, I don't remember where the course was given or who gave it, but it sure looks like the one being offered by Relational Systems Corporation. I highly recommend this course to anyone building a relational database. Knowing how to normalize a new database or denormalize an existing one is only a small part of the database implementation process. ERA teaches developers how to interview a customer to determine what data needs to be stored and how it should be laid out in a database like SQL Server. The course also addressed the practical reality that denormalization makes sense in selected cases. Based on the number of questions I'm seeing lately, I think more developers (especially those who play the role of DBA) need to take the time to pick up this skill.



The title says it all. I just experimented with this and it seems the Visual Studio (Server Explorer) team has relented and let the SQL Server 2K8 SSMS team use this (IMHO) very useful tool. ;)

I also like the way that when you request to execute a SP, SSMS scripts a routine to call it. Nice touch.

SQL Server Enterprise Puffery


Having been part of a marketing organization within Microsoft and other companies, I can understand why some folks feel it's important to show the Rolls Royce product in its best light. However, when I see how SQL Server Enterprise is being marketed, I'm dismayed. Consider that while Microsoft makes a bundle on SQL Server Enterprise Gold Trim Edition (as much as $25,000), they make quite a bit (if not more) on Workgroup and Standard (about $4000-$5000). And yes, even the free editions (Express and "SQL Server" Compact) bring in a lot of revenue indirectly.

IMHO, it does not serve Microsoft or make the Enterprise Edition Rolls Royce model look any more appealing to slam the Chevrolet models. By showing that SQL Server 2008 Standard Edition has nothing but "partial/limited" support for all of the bullet points on the comparison charts, Microsoft makes it tougher for customers that can only afford Workgroup or Standard (or Express) to choose it as a DBMS solution and go back to the office to defend their decision. It also makes it harder for Microsoft-loyal consultants and mentors to recommend anything but the Enterprise Edition. What these simplistic charts don't show is that the "limited" features in Workgroup, Standard (and Express) are often far more than customers need--but that's hard to discover reading this marketing fluff. The problem is, I know it's fluff and puffery--the customers might not.

When I was in sales (I've worn a lot of hats over the years), one of our guiding principles was to not bash the competition (and not take checks). Apparently, Microsoft has not learned that lesson (yet). They might defend their assessment of Oracle 11G to the death, but any Oracle fan will look at this and see it like an Obama supporter sees one of the McCain attack ads.

I also waded into the "SQL Server" Compact edition marketing material. Nowhere did it say this (really cool) product is not based on the same binaries that all of the other editions share. I did see where it's recommended for use on "Web client" implementations. It leads customers to think this "embedded SQL Server database engine..." is another version of SQL Server like MSDE and SQL Express, but somehow more limited--it's not.

I also noticed that none of these pages permitted feedback or bug reporting. I guess the SQL Server marketing team is open for output only. That's sad as there are a lot of us out here that are passionate about SQL Server and only want it to be as good as it can be and not oversold with hyped-up ads and propaganda.



Paul Randal and Kimberly Tripp came to the Redmond .NET User Group last night and gave an animated talk on indexing--a subject that both of them know cold. I thought I would share a few tidbits that struck me as important. Consider that Kimberly and Paul are SQL Server experts and don't always worry about "developer" issues. When asked what they thought of the Entity Framework, their reaction was... less than enthusiastic. If you ever get a chance to listen to this married couple speak, jump in and take a seat. They are informative and fun.

  • Optimize Row Size: SQL Server 2005 (and later) supports 8K columns. This means a row can be, well, over 8K. Does this make sense? Ah, not usually. Managing row size is as important as ever to those interested in efficiency. The trick to good performance and good use of space is to make sure that when the 8K page is filled with rows, there is little wasted space. This means that if the row size is just over (about) 4K, only one row will fit on a page and (about) 4K of space is wasted on the page. A secondary problem is that the number of index pages also has to increase to address the extra pages.
  • Selectivity: When the query optimizer (QO) studies your SQL, the degree of selectivity determines if an index should be used to perform the operation. By processing the Statistics (or "stats") for an index (and an index can have from one to many (many) sets of stats), the QO can determine the selectivity. Basically, it's weighing the choice of using the index to walk through the selected rows against doing a table scan. The example Kimberly used made it pretty clear how it worked, but we were surprised to learn:

"When the number of rows (selected by the query) is around 1/4 of the number of data pages in the table, the index was not useful and it is more efficient to perform a table scan to fetch the rows selected. This often turns out to be less than 5% of the rows in the table..."
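As a rough illustration of this tipping point, consider the two queries below (the table, column, and values are hypothetical, and which plan the QO actually picks depends on your data distribution and statistics):

```sql
-- Hypothetical: dbo.Customers with a nonclustered index on Region.
-- A highly selective predicate lets the QO seek the index:
SELECT CustomerID, Region
FROM dbo.Customers
WHERE Region = N'YT';   -- few matching rows: index seek (+ lookups)

-- A nonselective predicate crosses the tipping point, so the QO
-- ignores the index and reads the whole table instead:
SELECT CustomerID, Region
FROM dbo.Customers
WHERE Region = N'WA';   -- many matching rows: table/clustered scan
```

You can compare the plans the QO chooses by inspecting the graphical execution plan in SSMS or by turning on SET STATISTICS PROFILE before running each query.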

  • Rebuilding Statistics: SQL Server automatically rebuilds the statistics for you but consider that statistics are invalidated once 20% or more of the data has changed. There are times when it's important to update the statistics manually.
  • Filtered Indexes: New in SQL Server 2008, filtered indexes permit you to add a WHERE clause to an index, thus focusing the index on the most important rows.
  • Choosing the right column to index: Kimberly showed several examples of how GUID or overly long index values could materially affect performance. She suggested use of a simple identity value (incrementing integer) for clustered indexes--just avoid wide keys on clustered indexes.
    • "The best (clustered) index is unique, narrow and static"
  • Using Order By: Even when a table has a clustered index (which stores the data in physical order), SQL Server does not guarantee that rows will be returned in that (or any particular) order unless an ORDER BY clause is used.
  • Avoiding Index and Page Fragmentation: When an index page is filled, a new page must be created and about half of the index entries on that page are copied to the new page. This (necessary) process can be minimized by setting the Fill Factor to an appropriate value when first building the table or rebuilding the index. A Fill Factor of 100 is appropriate for RO data, but 50 is a better choice for R/W data. Note that the default is "0" (equivalent to 100), which leaves no room for expansion.
  • Page Locks: There are any number of reasons pages can be locked while performing queries or updates. Paul reminded us that if you use the NOLOCK hint in TSQL, your data queries might return the same row(s) more than once. He suggested finding the root cause of the locking problem before resorting to different isolation levels or lock hints.
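Several of the recommendations above can be sketched in a few lines of T-SQL. This is just a hedged illustration--the table and index names are made up, and the right fill factor for your data is something you have to measure:

```sql
-- A unique, narrow, static clustered key: an incrementing identity integer.
CREATE TABLE dbo.Orders (
    OrderID    int IDENTITY(1,1) NOT NULL,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL,
    IsOpen     bit      NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);

-- A filtered index (new in SQL Server 2008) that covers only the rows
-- most queries care about, with a fill factor that leaves room for
-- inserts on read/write data:
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID, OrderDate)
WHERE IsOpen = 1
WITH (FILLFACTOR = 50);

-- Statistics can be refreshed by hand when auto-update lags behind
-- (remember: about 20% of the data has to change before they rebuild):
UPDATE STATISTICS dbo.Orders IX_Orders_Open;

-- And even with a clustered index, row order is only guaranteed
-- when you ask for it:
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE IsOpen = 1
ORDER BY OrderID;
```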

I asked a number of questions (as usual):

Question: Does the QO know which data or index pages are already in the cache?
Answer: No. The QO does assume that root-level index pages are cached, but it does not know whether leaf-level or data pages are cached.

Question: Since SQL Server Developer Edition implements Enterprise Edition functionality, is there a way to tell it (the SQL Server engine) to pretend to implement the Workgroup, Standard or even Express edition subset functionality? This way developers could make sure their applications were tuned for the targeted server.
Answer: No--but this is a good suggestion. (But no one from the Microsoft product team was there to hear it).

Question: Since it takes time and resources to sort a returned rowset, does it make sense to do so when the target display element is a DataGrid that sorts the data again? Consider that best practices dictate that the number of rows returned be 500 or fewer.
Answer: Probably not.

Kimberly and Paul mentioned the DTA (Database Engine Tuning Advisor) as a useful tool to determine the health of your SQL Server database.


References:,category,Indexes.aspx (compression details)

I've just discovered the Rosetta Stone for the AdventureWorks sample databases that you can install to demo or test SQL Server. It was created by Pikauba Software Solutions for SQL Server 2005, but I expect that many of the details are going to be similar for 2008. See this.

I was wrong. Actually, I was misled and believed what I was told about Visual Studio 2008 SP1 and SQL Server 2008 (Katmai) RTM. Apparently, the Katmai report processor was not incorporated into the ReportViewer (or as they now call it, the "MicrosoftReportViewer") control during SP1. While it might look like Microsoft had upgraded the ReportViewer (it has the new "Tablix" control and other Reporting Services 2008 features), the RDL generated by the newly released Katmai RDL code generators is NOT compatible with the Visual Studio 2008 SP1 version of the ReportViewer control. This means you can't take that set of reports you've generated with the new Katmai BI tools and import them into VS 2008 like you could take Yukon reports and import them into the VS 2005 BI tools. They tell me that it's going to be another 6 months before the updated ReportViewer control hits the streets. By then we should have plenty of Katmai reports to import!

Sure, you can still run Katmai reports in the ReportViewer Server mode, but you can't expect to import them into your Visual Studio 2008 applications until they are finished with the conversion to the new reporting engine. I also expect this to be implemented as a Visual Studio 2008 service pack which might further delay its introduction.

Frankly folks, this is kinda irritating. If you just bought a new car and got home only to find that it had a hole where the glove compartment was supposed to be, wouldn't you be mad? While it's only a minor feature in the bigger scheme of things, how is one to know what features are not shipping with a new product? One would normally assume that if a feature was implemented in the old version, it would be in the new--perhaps improved, but included nonetheless. I think Microsoft should have a list of unsupported or deprecated functionality for these products. That's so when we get home, we have somewhere to put our maps and proof of insurance. Well actually, they do. It's here, but it does not include information about Visual Studio tools like the ReportViewer.

Be sure to start SQL Server 2008 Reporting Services Configuration Manager and check out the new default virtual directory name. Notice that the team has finally changed the name to be IIS standard notation. No longer will you see MyServer$MyInstance as the VD name. Now it's MyServer_MyInstance.


More tips to follow...

When you open an existing RDL report, you'll find that Visual Studio's BI tools convert it automatically to the new "Tablix" control. Unfortunately, each cell (which used to show the first part of the expression being rendered) now shows "<<expr>>". This, IMHO, is a step back. Now you have to open a property window and drill in two layers to inspect the setting.


I needed to install the latest AdventureWorks database(s) on my systems here and I found the instructions less than helpful. I finally got them installed and wrote a white paper on how to do it--screen shots, figures, tables and all. See it posted on Developer.Com


CLR Executables Fail to Deploy


After having installed Visual Studio SP1 and SQL Server 2008, I wanted to make sure my existing CLR demos worked. I have a number of unique examples (that are included in my latest book) that illustrate basic to complex functions, stored procedures, aggregates and user-defined types--all implemented in CLR (Visual Basic.NET) languages.

So, I attached my existing Biblio example database to the new SQL Server 2008 instance and pulled up an existing (and working) example in Visual Studio 2008 (SP1). Since I wanted to be able to use step-through debugging, I switched to the x86 configuration (this is a 64-bit system, and the Framework does not support 64-bit step-through debugging). This was my downfall. Once I pressed F5 to compile, deploy and debug the example, it failed with:

An error occurred in the Microsoft .NET Framework while trying to load assembly id 65737. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error:
System.IO.FileLoadException: Could not load file or assembly 'prjtablefromdelimitedlist, Version=, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The given assembly name or codebase was invalid. (Exception from HRESULT: 0x80131047)
Sure, I checked Google and found an MS article on the issue--or so it said. Unfortunately, it did not help. I tried the other suggestions. I then created a new test database and tried to deploy the CLR function there. It worked. What was different? Well, in the second case, I had left the configuration on "Any CPU".
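For anyone who hits the same error message, the usual trust-related checks look something like the sketch below. The database, assembly name, and path are hypothetical--and note that in my case none of this was the actual fix (rebuilding with the "Any CPU" configuration was):

```sql
-- See the permission set SQL Server thinks each assembly has:
SELECT name, permission_set_desc
FROM sys.assemblies;

-- EXTERNAL_ACCESS or UNSAFE assemblies require either a trusted
-- database owner/database...
ALTER DATABASE Biblio SET TRUSTWORTHY ON;

-- ...or (better) a signed assembly registered with the matching
-- permission set:
CREATE ASSEMBLY prjTableFromDelimitedList
FROM 'C:\Demos\prjTableFromDelimitedList.dll'
WITH PERMISSION_SET = SAFE;
```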

I filed a Connect bug. If you think this is an issue, vote for it.


Powered by Movable Type 4.21-en

About this Archive

This page is an archive of entries from August 2008 listed from newest to oldest.

April 2008 is the previous archive.

September 2008 is the next archive.
