Monday, March 31, 2008

xSQL Bundle receives highest marks and praise from SSWUG

Just realized that we never published a link to this review of our database comparison tools, xSQL Object and xSQL Data Compare. The review was conducted by Stephen Wynkoop, Microsoft SQL Server MVP and SSWUG founder, who was duly impressed with our products and gave them the highest rating possible in every aspect of the review: overall score, installation, usage, and real-world usefulness. You can read Stephen's review here:
http://www.xsqlsoftware.com/reviews/sswug_xsql_compare_review.pdf

Thursday, March 27, 2008

User an idiot no more!

Not more than 10 years ago it was not unusual to hear programmers vent their frustration about the users, who were all “idiots”, and to see users sheepishly admit to having clicked the “wrong button”. During those 10 years things have changed dramatically on both sides. Users are increasingly savvy and their expectations have risen significantly, while programmers, no longer shielded by the mystique of the profession, dread hearing a user proclaim “who is the moron that wrote this code?!” Nowadays it is common to see puzzled users tell their programmers “what do you mean that can’t be done…?”; nowadays there is no such thing as “the wrong button”; nowadays, if something goes wrong with the application, it is not the user who is the idiot but the programmer who is the inept moron.

Not trying to make any point here – just an observation on one of the aspects of the evolution of the profession. Do you agree with this observation?

SQL Server Resource Index

Here is a great page, http://www.sqlserverindex.com, with links to SQL Server resources: official sites; global user groups; learning resources; events; books; third-party SQL Server tools; local user groups; and local SQL Server consultants in the major US metropolitan areas.

Wednesday, March 12, 2008

SQL Server Profiler – a lesson learned the hard way

Background: I was recently at the site of a client who was concerned that CPU utilization had shot up in the last few months without any apparent reason. Since this was a node on an Active/Active cluster and was supposed to be the failover node for their more important node, they had moved quickly and upgraded from a 64-bit dual processor to a 64-bit quad. The CPU utilization had dropped from over 70% to somewhere in the 30-35% range, but the concern was still there – they wanted to find out what had caused the trouble in the first place.

Trouble starts: After spending a bit of time understanding the environment, looking at the trends and documenting the current status, I decided to start a trace on the affected node and save a few minutes of data to a SQL Server table. I was feeling pretty good about it until I clicked “start” and the trace fired away. I looked at the CPU utilization and noticed, to my utter surprise, that it had dropped significantly and was staying down – I did not like that and stopped the trace immediately (it must have been running for about 65 seconds).

There, now I had another puzzle on my hands – was it a simple coincidence that, as soon as I started the trace, the “demand” on the server dropped (maybe a great number of users had just logged off and gone to lunch), or was it something else?! Soon after, a very concerned user came by and told the resident IT person in charge that a third-party application had gone haywire – it had started assigning case numbers from the beginning of time, creating duplicate entries in the system, etc.

Now, I am not going to go into how totally unacceptable it is for a professional software application to behave that way… Regardless, it was my trace that had somehow caused this trouble, which ended up taking hours of manual labor to clean up.

So, now I am stuck: I am looking at all kinds of reports, but without current trace data I can’t see what’s going on, and after what happened I don’t dare start another trace.

Solution: After I ran out of alternative options I decided to give it another try – this time I was not going to run a trace from SQL Server Profiler on my remote client machine, but a server-side trace instead. Furthermore, instead of storing the trace data in SQL Server, I directed it to a file on another drive. I got the script ready and, keeping my fingers crossed, executed it while watching the CPU utilization. This time I was pleasantly surprised to see that nothing seemed to change – the trace was not affecting the CPU utilization at all. I also had someone monitoring the third-party application to make sure it wasn’t going haywire again – everything seemed good.

I let the trace run for a whole hour, then downloaded the trace files (a couple of gigabytes of data) and loaded all the data into a SQL Server database on my local machine. Now things were a lot easier – it did not take more than a few minutes to identify the query that was causing all the trouble: a trigger from that third-party application had failed, and a couple of tables had ballooned to hundreds of thousands of rows when they were supposed to hold no more than something like 50 rows! The trigger was fixed, CPU utilization went down to 10%, and it is staying at that level.
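For anyone wanting to do the same kind of analysis, here is a sketch of loading trace files into a table and finding the heaviest queries using the documented fn_trace_gettable function. The file path and table name below are placeholders, not the ones from this engagement:

```sql
-- Load the trace files into a local table.
-- N'C:\traces\servertrace.trc' is a placeholder path; 'default' tells
-- fn_trace_gettable to read the whole rollover file set.
SELECT * INTO dbo.TraceData
FROM fn_trace_gettable(N'C:\traces\servertrace.trc', default);

-- Aggregate CPU by statement text to surface the worst offenders.
-- TextData comes back as ntext, so convert it before grouping.
SELECT TOP 10
       CONVERT(nvarchar(max), TextData) AS QueryText,
       COUNT(*)   AS Executions,
       SUM(CPU)   AS TotalCPU,
       SUM(Reads) AS TotalReads
FROM dbo.TraceData
WHERE TextData IS NOT NULL
GROUP BY CONVERT(nvarchar(max), TextData)
ORDER BY SUM(CPU) DESC;
```

A runaway trigger like the one described here typically shows up at the very top of a CPU-ordered list like this.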

Lesson: Overall it was a great success, but not without some bruising along the way. The lesson learned: never run a client-side trace on a sensitive production system – it can bring the server to its knees.

Related Info: In case you don’t have much experience with it and are wondering how to run a server-side trace, here is a quick guide:

  • Start SQL Server Profiler on your local machine;
  • Click “New Trace” and connect to a non-sensitive server – if you have an instance of SQL Server on your local machine, all the better: just connect to that one;
  • Define your trace;
  • Start the trace;
  • Stop the trace;
  • Go to File / Export / Script Trace Definition / For SQL Server [2005 / 2000];
  • Open the saved script and change the path/file where you want your real trace data to go. Also make sure all the other parameters are what you want them to be;
  • When you are ready to start the trace on the “real” server, simply execute the script against the target server;
  • Note down the trace id so that you can stop and delete the trace when you are done.
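The script Profiler exports is built on the documented sp_trace_create / sp_trace_setevent / sp_trace_setstatus procedures. As a rough, illustrative sketch of what it looks like (a real exported script will set many more events and columns, and the file path here is a placeholder):

```sql
-- Minimal server-side trace sketch (illustrative only).
DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 1024;  -- max MB per trace file
SET @on = 1;

-- Create the trace; option 2 = TRACE_FILE_ROLLOVER.
-- SQL Server appends .trc to the path; pick a drive with plenty of room.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'E:\traces\servertrace',
     @maxfilesize, NULL;

-- Capture SQL:BatchCompleted (event 12) with a few useful columns.
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- Duration
EXEC sp_trace_setevent @TraceID, 12, 16, @on;  -- Reads
EXEC sp_trace_setevent @TraceID, 12, 18, @on;  -- CPU

-- Start the trace and note the id for later.
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;

-- When you are done: stop (status 0), then close and delete (status 2).
-- EXEC sp_trace_setstatus @TraceID, 0;
-- EXEC sp_trace_setstatus @TraceID, 2;
```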

Lastly, check out our products – many of them are free; see the left panel here.

Wednesday, March 5, 2008

xSQL Documenter released

We are very excited to announce a new addition to our suite of cool SQL Server tools: xSQL Documenter, a professional-grade tool for documenting your databases. You can read more about xSQL Documenter on our website at http://www.xsql.com/products/database_documenter/ but here is a quick list of what this new tool brings:
  • Supports all objects in all major DBMSes, including SQL Server, Oracle, DB2, and MySQL
  • Generates HTML and compiled CHM output
  • Generates dependency and primary/foreign key graphs
  • Shows DDL/XMLA code for all documented objects
  • You can run it from the command line
  • You can easily brand the documentation
You can download an evaluation copy of xSQL Documenter from http://www.xsql.com/download/database_documenter/. Please do post your comments and suggestions regarding this product, or any of our other products, at http://www.xsql.com/suggestfeature.aspx.