Friday 29 July 2011

Life of a DBA in a Nutshell: SQL SERVER - Generate Database Script for SQL Azure

"When talking about SQL Azure, the common complaint is that the script generated from a stand-alone SQL Server database is not compatible with S..."

Wednesday 20 July 2011

Microsoft SQL Azure (Cloud Server)

Microsoft has started working on the cloud and has now released a few products based on cloud computing. If you have heard the buzz about SQL Azure, you are probably wondering: what is Microsoft SQL Azure?
SQL Azure is a cloud-based technology for people who want to work with the RDBMS model in the cloud.
Think of one scenario: you work in an organization where you have to manage databases, which means the software to manage them and, very likely, a dedicated server; after a few months you will probably be asking for patches to fix database-related issues. And that is not the end: you also have to manage high availability and disaster recovery, which can increase the overall cost if your plan is to scale up or scale out.

Other cloud databases can be used for similar functionality.

With a cloud technology, on the other hand, you get a hosted database that is managed for you, which will probably decrease your overall cost. Be aware, though, that cloud technology comes with its own scaling constraints.

Why SQL Azure in Cloud?

SQL Azure is an environment for managing RDBMS databases in the cloud; the databases are already distributed across different nodes.
Unlike an on-premise environment, where you deploy your databases on different machines for different purposes, SQL Azure is already deployed across multiple SQL instances.
You do not have to worry about patching or physical maintenance; you simply do all your work in the cloud with all the RDBMS features, and you can pay to scale your RDBMS environment up. This means you save the physical architecture costs of an RDBMS environment when you need to scale out.
SQL Azure provides scalability, availability, and reliability out of the box. Azure is an environment of clustered SQL Servers hosted in Microsoft data centers.

SQL statements not supported for SQL Azure

When working with SQL Azure, you primarily work with Transact-SQL (T-SQL) rather than a graphical user interface. There is some limited functionality that gives you a basic user interface for working with SQL Azure, so you need a good grasp of Transact-SQL.

Having said that, you will find that quite a few SQL statements are simply not supported in SQL Azure cloud databases. Some of these omissions make sense, as the statements interact with the operating system or the underlying hardware; remember that SQL Azure lives in a cloud environment, so an end user has little need for hardware- or OS-related commands. Below is a list of commands you may have used in your on-premise SQL Server 2008 that simply will not work in SQL Azure, along with a few practices that do apply in the SQL Azure environment (a short T-SQL sketch follows the list).

Some facts about Microsoft SQL Azure:

a) Backup functionality in SQL Azure is currently limited to copying objects (database copy).
b) Full-text indexing is not currently supported; Microsoft may add it in the future.
c) Each table must have a clustered index; tables without a clustered index are not supported.
d) Each connection can use a single database; multiple databases in a single transaction are not supported.
e) 'USE <database>' cannot be used to switch databases in Azure.
f) Global temp tables (and other global temp objects) are not supported.
g) As there is no concept of cross-database connections, linked servers do not exist in Azure at this moment.
h) SQL Azure is a shared environment, and because of that there is no concept of Windows logins.
i) Always drop tempdb objects as soon as you are done with them, as they put pressure on tempdb.
j) During bulk inserts, use the batch size option to limit the number of rows inserted at a time; this limits transaction log usage.
k) Avoid unnecessary grouping or sorting (ORDER BY) operations, as they lead to high memory usage.
l) Microsoft may support rich Reporting Services in the future.
m) Distributed transactions are not supported in the SQL Azure cloud.
n) The SELECT INTO statement is not supported in SQL Azure.
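
To make a few of these points concrete, here is a minimal T-SQL sketch (all table and database names are hypothetical) showing the clustered-index requirement, an INSERT ... SELECT workaround for the missing SELECT INTO, and the database-copy approach to backups:

  -- Every SQL Azure table needs a clustered index; a clustered primary key satisfies this.
  CREATE TABLE dbo.Orders
  (
      OrderId    INT      NOT NULL,
      CustomerId INT      NOT NULL,
      OrderDate  DATETIME NOT NULL,
      CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
  );

  -- SELECT INTO is not supported, so create the target table first
  -- and populate it with INSERT ... SELECT instead.
  CREATE TABLE dbo.Orders_Archive
  (
      OrderId    INT      NOT NULL,
      CustomerId INT      NOT NULL,
      OrderDate  DATETIME NOT NULL,
      CONSTRAINT PK_Orders_Archive PRIMARY KEY CLUSTERED (OrderId)
  );

  INSERT INTO dbo.Orders_Archive (OrderId, CustomerId, OrderDate)
  SELECT OrderId, CustomerId, OrderDate
  FROM dbo.Orders
  WHERE OrderDate < '20110101';

  -- "Backup" via database copy (run from the master database of the server).
  CREATE DATABASE MyDb_Copy AS COPY OF MyDb;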

Now you are ready to take off: install SSMS 2008 R2 to create your own database in the cloud; a user database can be 1 GB to 5 GB on the Web edition. SQL Azure databases support standard APIs like ODBC and ADO.NET, the two most popular technologies for manipulating data.
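
As a rough sketch, an ADO.NET connection string for SQL Azure looks something like the following (server name, database, and credentials are hypothetical placeholders; note the user@server form of the login):

  Server=tcp:myserver.database.windows.net;Database=MyDb;
  User ID=myuser@myserver;Password=myPassword;Encrypt=True;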

Tuesday 19 July 2011

DBCC CHECKDB (Transact-SQL) for Very Large Databases


CHECKDB Consistency-Checking Options for a VLDB (Very Large Database)

This is a question that comes up a lot: how do you run consistency checks on a VLDB?
Nowadays databases of hundreds of GBs, 1 TB, or more are common on SQL Server 2000 and 2005. Any experienced DBA knows the value of running consistency checks, even when the system is behaving perfectly and the hardware is rock-solid. The two problems people have with running a full CHECKDB on their VLDB are:
  • It takes a long time to run.
  • It uses lots of resources – memory, CPU, IO bandwidth, tempdb space.
Even with a decent-sized maintenance window, the CHECKDB may run over into normal operations. There's also the case of a system that's already pegged in one or more resource dimensions. Whatever the case, there are a number of options:
  • Don't run consistency checks
  • Run CHECKDB using the WITH PHYSICAL_ONLY option
  • Use SQL Server 2005's partitioning feature and devise a consistency checking plan around that
  • Figure out your own scheme to divide up the consistency checking work over several days
  • Offload the consistency checks to a separate system
Use WITH PHYSICAL_ONLY
A full CHECKDB does a lot of stuff - see previous posts in this series for more details. You can vastly reduce the run-time and resource usage of CHECKDB by using the WITH PHYSICAL_ONLY option. With this option, CHECKDB will:
  • Run the equivalent of DBCC CHECKALLOC (i.e. check all the allocation structures)
  • Read and audit every allocated page in the database
So it skips all the logical checks, inter-page checks, and things like DBCC CHECKCATALOG. Because every allocated page is read, if page checksums are enabled in SQL Server 2005, any corruption caused by the IO subsystem will be discovered, as the page checksum is verified when the page is read into the buffer pool.
So there's a trade-off of consistency-checking depth against run-time and resource usage - but this option will pick up problems caused by the IO subsystem as long as page checksums are enabled and present.
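A minimal example (the database name is a placeholder):

  -- Physical-only check: allocation checks plus a read/audit of every allocated page
  DBCC CHECKDB (N'MyVLDB') WITH PHYSICAL_ONLY;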
Use the SQL Server 2005 partitioning feature
If you're using the partitioning feature in SQL Server 2005, then you're already set up for this. Given that you've hopefully got your partitions stored on separate filegroups, you can use the DBCC CHECKFILEGROUP command.
It makes sense that you don't need to check the read-only filegroups as often as the current month's filegroup, so an example consistency-checking scheme could be:
  • Run a DBCC CHECKFILEGROUP on each read-only filegroup every week or two
  • Run a DBCC CHECKFILEGROUP on the read-write filegroup every day or two (depending on the stability of the hardware, the criticality of the data, and the frequency and comprehensiveness of your backup strategy).
I know of several companies who've made the decision to move to SQL Server 2005 in part because of this capability to easily divide up the consistency checking.
Beware that until SP2 of SQL Server 2005, DBCC CHECKFILEGROUP would not check a table at all if it was split over multiple filegroups. This is now fixed, and DBCC CHECKFILEGROUP will check partitions on the specified filegroup even if the table is not completely contained on that filegroup.
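
A sketch of that weekly/daily scheme (the filegroup names are hypothetical; run these in the context of the database being checked):

  -- Every week or two, against a read-only filegroup holding older partitions
  DBCC CHECKFILEGROUP (N'FG_Archive') WITH NO_INFOMSGS;

  -- Every day or two, against the current read-write filegroup
  DBCC CHECKFILEGROUP (N'FG_Current') WITH NO_INFOMSGS;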
Figure out your own way to partition the checks
If you're on SQL Server 2000, or you just haven't partitioned your database on SQL Server 2005, then there are ways you can split up the consistency checking workload so that it fits within a maintenance window. Here's one scheme that I've recommended to several customers:
  • Figure out your largest tables (by number of pages) and split them into 7 buckets, such that there are roughly equal numbers of database pages in each bucket.
  • Take all the remaining tables in the database and divide them equally among the 7 buckets (again by number of pages)
  • On Sunday:
    • Run a DBCC CHECKALLOC
    • Run a DBCC CHECKCATALOG
    • Run a DBCC CHECKTABLE on each table in the first bucket
  • On Monday, Tuesday, Wednesday:
    • Run a DBCC CHECKTABLE on each table in the 2nd, 3rd, 4th buckets, respectively
  • On Thursday:
    • Run a DBCC CHECKALLOC
    • Run a DBCC CHECKTABLE on each table in the 5th bucket
  • On Friday and Saturday:
    • Run a DBCC CHECKTABLE on each table in the 6th and 7th buckets, respectively
In pre-RTM builds of SQL Server 2005, DBCC CHECKTABLE could not be run against the critical system tables (just as T-SQL cannot bind to them) - but that was fixed, so you can cover all system tables in SQL Server 2000 and 2005 using the method above.
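Here is a rough sketch of how you might build the buckets on SQL Server 2005 (NTILE splits by table count ordered by size, so it only approximates equal-page buckets; adjust by hand afterwards):

  -- Rank user tables by used pages and assign each to one of 7 buckets.
  SELECT  s.name AS schema_name,
          t.name AS table_name,
          SUM(ps.used_page_count) AS used_pages,
          NTILE(7) OVER (ORDER BY SUM(ps.used_page_count) DESC) AS bucket
  FROM sys.dm_db_partition_stats AS ps
  JOIN sys.tables  AS t ON t.object_id = ps.object_id
  JOIN sys.schemas AS s ON s.schema_id = t.schema_id
  GROUP BY s.name, t.name
  ORDER BY used_pages DESC;

  -- Then, on each day, check every table in that day's bucket, e.g.:
  DBCC CHECKTABLE (N'dbo.MyBigTable') WITH NO_INFOMSGS;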
There's one drawback to this method - a new internal database snapshot is created each time you start a new DBCC command, even for a DBCC CHECKTABLE. If the update workload on the database is significant, then there could be a lot of transaction log to recover each time the database snapshot is created - leading to a long total run-time.
Use a separate system
This alternative is relatively simple - restore your backup (you are taking regular backups, right?) on another system and run a full CHECKDB on the restored database. This offloads the consistency-checking burden from the production system and also allows you to check that your backups are valid. There are a couple of drawbacks, though:
  • If the production database is several TB, you need the same several TB on the spare box. This equates to a non-trivial amount of money - initial capital investment plus ongoing storage management costs. (Hopefully a future release will alleviate this – while at Microsoft I invented and patented a mechanism for consistency checking a database in a backup without restoring it.)
  • If the consistency checks find an error, you don't know for sure that the database is corrupt on the production system. The only way to know for sure is to run a consistency check on the production system. This is a small price to pay though, because most of the time the consistency checks on the spare system will be ok, so you know the production database was clean at the time the backup was taken.
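A minimal sketch of the restore-and-check approach on the spare server (all names and file paths are hypothetical):

  -- Restore the latest full backup of production under a different name...
  RESTORE DATABASE ProdDb_Check
  FROM DISK = N'D:\Backups\ProdDb_Full.bak'
  WITH MOVE N'ProdDb_Data' TO N'E:\Data\ProdDb_Check.mdf',
       MOVE N'ProdDb_Log'  TO N'F:\Log\ProdDb_Check.ldf',
       REPLACE;

  -- ...then run the full, deep consistency check against the copy.
  DBCC CHECKDB (N'ProdDb_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;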
Summary
You have many choices for running consistency checks, so there's really no excuse for not knowing when something's gone wrong with your database... Cya next time...

Wednesday 13 July 2011

Genesis of SQL Server from 1989 (creation) till date…

Hi friends, I'm back with a technical post; I hope you all like it. Given my profession, I thought I'd write something related to SQL Server, and keeping in mind that this is my first technical blog post, let's pen down the history of SQL Server from its creation till date...

Microsoft SQL training is important for IT professionals interested in knowing how to work with the product, and a history of Microsoft SQL Server is also very useful. Basically, the code for MS SQL came from Sybase SQL Server, which was Microsoft's first attempt at a database; it competed against Sybase, IBM, and Oracle. Sybase, Microsoft, and Ashton-Tate worked together to create the first version of SQL Server, which ended up being pretty much the same as the third edition of Sybase SQL Server. Microsoft SQL Server 4.2 became available in 1992, and the 4.21 version shipped at the same time as Windows NT 3.1. The first version that did not include any assistance from Sybase was Microsoft SQL Server 6.0.

When Windows NT made its appearance, Sybase and Microsoft moved on to pursue their own interests. This allowed Microsoft to negotiate exclusive rights to the versions of SQL Server written for Microsoft systems, and Sybase changed its server's name to Adaptive Server Enterprise to keep it from being confused with the Microsoft version. Many revisions have been made without assistance from Sybase since the two parted ways; the first GUI-based database server was a complete departure from the Sybase code.
In the five years between Microsoft's previous SQL Server product (SQL Server 2000) and SQL Server 2005, advancements were made in performance, the client IDE tools, and several complementary systems that are packaged with SQL Server 2005. Some of the new systems included are Analysis Services, ETL (Integration Services), and messaging technologies like Notification Services and Service Broker.
SQL Server 2005 (codenamed Yukon), released in October 2005, is the successor to SQL Server 2000. It included native support for managing XML data in addition to relational data. For relational data, T-SQL was augmented with error-handling features (TRY/CATCH) and support for recursive queries with CTEs (Common Table Expressions). SQL Server 2005 was also enhanced with new indexing algorithms, syntax, and better error-recovery systems.
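
For example, both features in a minimal sketch (runnable on SQL Server 2005 and later):

  -- A recursive CTE generating the numbers 1 through 5
  WITH Numbers (n) AS
  (
      SELECT 1
      UNION ALL
      SELECT n + 1 FROM Numbers WHERE n < 5
  )
  SELECT n FROM Numbers;

  -- TRY/CATCH error handling, new in SQL Server 2005
  BEGIN TRY
      SELECT 1 / 0;  -- raises a divide-by-zero error
  END TRY
  BEGIN CATCH
      SELECT ERROR_NUMBER() AS error_number, ERROR_MESSAGE() AS error_message;
  END CATCH;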

SQL Server 2005 introduced "MARS" (Multiple Active Result Sets), a method of allowing a single database connection to have more than one active request and result set at a time.
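
In ADO.NET, for instance, MARS is switched on through a connection-string flag (server and database names are placeholders):

  Server=myServer;Database=MyDb;Integrated Security=True;
  MultipleActiveResultSets=True;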

SQL Server 2005 introduced DMVs (Dynamic Management Views), which are specialized views and functions that return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
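
For example, two commonly used DMVs (a minimal sketch; both ship with SQL Server 2005 and later):

  -- Requests currently executing on the instance
  SELECT session_id, status, command, wait_type
  FROM sys.dm_exec_requests;

  -- Cumulative wait statistics, a classic starting point for performance tuning
  SELECT TOP (10) wait_type, wait_time_ms
  FROM sys.dm_os_wait_stats
  ORDER BY wait_time_ms DESC;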

SQL Server 2005 introduced Database Mirroring, but it was not fully supported until the first Service Pack release (SP1).

About five years after the release of SQL Server 2005, the great SQL Server 2008 R2 (formerly codenamed SQL Server "Kilimanjaro") was announced at TechEd 2009 and released to manufacturing on April 21, 2010. SQL Server 2008 R2 adds certain features to SQL Server 2008, including a master data management system branded as Master Data Services, a central management of master data entities and hierarchies. It also adds Multi-Server Management, a centralized console to manage multiple SQL Server 2008 instances and services, including relational databases, Reporting Services, Analysis Services, and Integration Services.
SQL Server 2008 R2 includes a number of new services, including PowerPivot for Excel and SharePoint, Master Data Services, StreamInsight, Report Builder 3.0, the Reporting Services Add-in for SharePoint, a data-tier function in Visual Studio that enables packaging of tiered databases as part of an application, and a SQL Server Utility named UCP (Utility Control Point), part of AMSM (Application and Multi-Server Management), which is used to manage multiple SQL Servers.
The next release of SQL Server, codenamed Denali, is right around the corner. This version is my favorite of all the releases, arriving with huge improvements in its CTP releases. As it's my favorite, I will get deep into its detailed features in my next post.

Overall summary of SQL Server evolution:


In 1989, Microsoft released its first version of SQL Server. It was developed jointly by Microsoft and Sybase for the OS/2 platform.

  • 1993 – SQL Server 4.21 for Windows NT
  • 1995 – SQL Server 6.0, codenamed SQL95
  • 1996 – SQL Server 6.5, codenamed Hydra
  • 1999 – SQL Server 7.0, codenamed Sphinx
  • 1999 – SQL Server 7.0 OLAP, codenamed Plato
  • 2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
  • 2003 – SQL Server 2000 64-bit, codenamed Liberty
  • 2005 – SQL Server 2005, codenamed Yukon (version 9.0)
  • 2008 – SQL Server 2008, codenamed Katmai (version 10.0)
  • 2010 – SQL Server 2008 R2, codenamed Kilimanjaro (aka KJ)
  • Next – SQL Server 2011, codenamed Denali
Please share your suggestions on the above. Finally, thanks to all readers for reading my first technical post, and feel free to comment with your opinions and suggestions…

Tuesday 12 July 2011

My 1st Ever Blog Post......

Well, here it is, my first ever blog post on my own personal blog. I feel like I'm sitting in a new car, looking around and going, "Wow, this is mine." First and foremost, I have to thank my sister and all my friends who helped me start this blog. I am truly grateful, and we will certainly learn more about databases and data warehousing down the road.

All the usual rules apply:
become a follower... post about almost everything on your blog... and leave a comment.

So, PLEASE, offer your comments, suggestions, and (most importantly) involvement in this blog.
Take care, and thank you in advance for your support. To everyone who has left me comments on the post: I really appreciate every one of them.