Monday 28 November 2011

Control your Android Phone remotely from your Computer, Download AirDroid Android app

Developed by SAND STUDIO, AirDroid is an excellent app for wirelessly controlling an Android smartphone from a web browser on a computer over a Wi-Fi network. With a beautiful interface, the app lets you get various tasks done without even touching the phone. Just download and install AirDroid, open it, then go to the web address it shows from your desktop or another device, and you will be able to work with:
  • Files - Transfer files between Android devices and computers. Cut, copy, paste, search, rename or delete files on the SD card.
  • SMS - Read, send, forward or delete SMS messages.
  • Applications - Install, uninstall, backup, search apps and do batch processing.
  • Photos - Preview, delete, import, export, set photos as wallpaper and run photo slideshows from the desktop.
  • Contacts - Group, search, create contacts, check and delete call logs.
  • Ringtones - Search, preview, import, export, delete and customize ringtones for phone calls, notifications and alarms.
  • Music - Play, search, import, export, delete, or set as phone call, notification and alarm ringtones.
The Web Desktop should be compatible with most modern web browsers, including Chrome 12 or later, Firefox 3.6 or later, and Safari 5.0 or later (for best performance, IE is not recommended).
The app itself on the phone also has some good features, such as a Device Status view: real-time ROM, SD card, battery, CPU and RAM status reports and monitoring, with charts showing available/used/total resources, plus one-tap memory boost. It works as a task manager to kill or uninstall running apps, with batch operations supported. The Apps Manager lets you uninstall, share or check details of user and system apps, and the file controls let you cut, copy, rename, send/share, delete, sort and create visible or hidden folders and files. Watch the video walkthrough of AirDroid's features below:

Latest Demo video of App Player running Android Apps on BlackBerry Playbook


The who, what and where of winning with business process improvement

This is the second in a two-part series on business process improvement strategies and trends. In the first part, IT executives and experts discussed the growing trend of involving more front-line employees and even customers in their business process management (BPM) programs. Here, IT executives discuss how they got their BPM programs off the ground and the challenges of keeping them on track.
Call them the three W's of BPM. Where to begin a business process improvement project is just as important as whom to involve and what technology and methodologies to use to position a BPM program for success.
Some BPM experts opt for greenfield business process territory with no existing policies. Others tackle well-worn areas where business processes touch the most people and need the most help. John Verburgt, director of BPM at the Chicago Mercantile Exchange (CME) Group Inc., went after both, beginning with new products and project management in 2010.

Business process management where once there was none

The CME Group's business processes for new-product and project launches suffered from a "chronic illness" that Verburgt said most enterprises encounter: email overload.
"If I could eradicate a quarter-million emails that we were using to manage the business on a quarterly or annual basis, that time adds up," Verburgt said. "It is an explicit shift of labor from non-value-add, because the administrative and grunt work is just automated."
Verburgt solved this problem by introducing Appian Corp.'s BPM product to automate processes. At the same time, he added structure around the new-product and project-launch processes by building a "very transparent pipeline so that all the people involved could see all the activities and where they stood in the process."
The BPM technology helped by presenting the new-product design process to stakeholders visually, akin to the way Microsoft Visio does, Verburgt said. "What you actually design is what actually happens, so gone are the days where you generate requirements in a waterfall fashion. You can turn requirements around in an Agile manner using BPM tools, because it takes the abstract and puts it in a tangible context."
This same approach was introduced for onboarding employees, a process footprint that "thoroughly amazed" him, Verburgt said, in terms of the number of steps his team needed to build to make sure new employees were ready to work from Day 1.
The next project was virgin territory: a customer portal that would allow CME Group customers to initiate tasks and business processes. Because no process existed, Verburgt was able to introduce new BPM tools and such techniques as the Six Sigma methodology. "But when you add the latest tools and techniques, things change," he said. "The culture changes, and the way people get their jobs done changes."

Business process change management done right

A year later, Verburgt has a beautiful problem: Once he introduced business process improvements across new-product launches, employee onboarding and a customer portal, the business lined up for more.
"You need to plan for success, not failure," Verburgt said. That meant that getting a handle on change management from Day 1 was "hands down" the biggest factor in getting BPM buy-in, he said.
Continuous, rapid, real-time improvement, which is any BPM director's ultimate goal, calls for governing change management through BPM tools and techniques; but above all, it requires buy-in from the people involved in making the changes happen. "People love to see business and process improvements. It's the same as eating right and exercising. It's good for you -- but whether or not you can get people to execute that for you is a whole different story."
Verburgt's advice: Don't build everything out up front. Although he introduced a BPM program in three areas, he did so at three-month intervals. The business process changes were not introduced all at once, but in lightweight cycles and with all stakeholders well aware of what changes were coming, he said.

BPM for the people

When it comes to change management, Todd Coffee, senior director of enterprise process solutions at Tenet Healthcare Corp. in Dallas, recommends taking a user-controlled approach.
"The art of BPM is in creating a highly adaptable and even amorphous solution that can allow for ad hoc, on-the-fly, user-controlled changes to a process. Without the latter, change management will always be a risk for successful BPM initiatives," Coffee said.
The use of Pegasystems Inc.'s BPM product lets Coffee's team bring applications to production much faster than they could before. "And our ability to make ongoing tweaks to the system has improved by orders of magnitude," he said. The technology lends agility to Tenet Healthcare's business process improvement efforts, but that agility in turn also results in rapid changes for users.
Having people on board is critical because it is not possible to automate all functions within a process. "Certainly not those that are the most valuable to the company," Coffee said. "It is essential, as you bring application solutions to individuals whose expertise is instrumental to the outcome of the process, to allow their expertise to be fully leveraged, not thwarted in the name of automation."
Coffee's approach to business process improvement began almost nine years ago with such low-value-added processes as one for contract review. Although such processes do not affect customers and may be low-value-add, they still must be fluid and flexible because stakeholders need to make ad hoc changes, and contract terms and conditions often change, he said.
Today Coffee's focus is on business process improvement in critical business areas including compliance and telemedicine. Tenet Healthcare's pilot telemedicine practice offers remote diagnosis and treatment to stroke patients from a network of national experts.
A combination of focusing on the people behind the process and the use of BPM tools has paid off, Coffee said. Project initiation-to-production delivery times dropped from about four months to as little as six weeks.
And once a BPM program takes root, stakeholder demand will shift naturally to continuous improvement, he said. He strongly recommends a BPM center of excellence that includes a feedback loop for ongoing improvements. "The quality of the feedback loop depends on accurate and extensive reporting capabilities and close customer interaction. Plan to provide detail as to process volumes and process timing, including averages, as well as ranges; and have a mechanism to routinely engage stakeholders for their perspective on the applications."

Wednesday 20 July 2011

Microsoft SQL Azure (Cloud Server)

Microsoft has started working on the cloud and has already released a few products based on cloud computing. If you have heard the buzz about SQL Azure, you are probably wondering what Microsoft SQL Azure actually is.
SQL Azure is a cloud-based technology for people who want to work with the RDBMS model in the cloud.
Think of a typical scenario: you work in an organization where you have to manage databases, which means management software and, of course, a dedicated server, and after a few months you will probably be asking for patches to fix database-related issues. That is not the end of it; you also have to manage high availability and disaster recovery, which can increase overall cost if your plan is to scale up or scale out.

Other cloud databases may be used for such functionality.

With cloud technology you can manage a hosted database, which will probably decrease your overall cost. But with generic cloud technology you will still have to face scaling issues.

Why SQL Azure in the Cloud?

SQL Azure is an environment for managing RDBMS databases in the cloud; the databases are already distributed across different nodes.
Unlike an environment where you have to deploy your databases on different machines for different purposes, SQL Azure is already deployed across different SQL instances.
You do not have to worry about patching or managing it physically; you just do all your work in the cloud with full RDBMS features, and you can pay an additional cost to scale up your RDBMS environment. This means you save the physical architecture costs of an RDBMS environment when you need to scale out.
SQL Azure provides scalability, availability and reliability out of the box. It is an environment of clustered SQL Servers hosted in the cloud in Microsoft data centers.

SQL statements not supported for SQL Azure

When working with SQL Azure, you primarily have to work with Transact-SQL (T-SQL) rather than a graphical user interface. There is limited functionality available that gives you a basic user interface for working with SQL Azure, so you need a good grasp of Transact-SQL.

Having said that, you will find that quite a few SQL statements are simply not supported for SQL Azure cloud databases. Some of these make sense, as they interact with the operating system or the underlying hardware; remember that SQL Azure is a cloud environment, so an end user has limited need for hardware/OS-related features. Regardless, there are a number of SQL commands that you may have used in your on-premises SQL Server 2008 that will simply not work in SQL Azure, while a few others are supported only in a restricted form, as the sketch below shows.
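
As a rough illustration (the table and column names here are hypothetical, not from any real system), this is how two of the commonly missed statements play out in SQL Azure: SELECT INTO fails because it creates a heap and every SQL Azure table needs a clustered index, and USE cannot switch databases on an existing connection.

    -- SELECT * INTO dbo.Orders_Archive FROM dbo.Orders;   -- not supported in SQL Azure

    -- Supported alternative: create the target table with a clustered index,
    -- then copy the rows with INSERT ... SELECT.
    CREATE TABLE dbo.Orders_Archive
    (
        OrderID   int      NOT NULL,
        OrderDate datetime NOT NULL,
        Amount    money    NOT NULL,
        CONSTRAINT PK_Orders_Archive PRIMARY KEY CLUSTERED (OrderID)
    );

    INSERT INTO dbo.Orders_Archive (OrderID, OrderDate, Amount)
    SELECT OrderID, OrderDate, Amount FROM dbo.Orders;

    -- USE OtherDb;   -- also not supported; open a new connection to the other database instead.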

Some facts about Microsoft SQL Azure:

a) Microsoft has released backup functionality in SQL Azure only in the form of a database copy (see the sketch after this list).
b) Microsoft may support full-text indexing in SQL Azure in the future.
c) Each table must have a clustered index; tables without a clustered index are not supported.
d) Each connection can use a single database; multiple databases in a single transaction are not supported.
e) 'USE <database>' cannot be used to switch databases in SQL Azure.
f) Global temp tables (and other global temp objects) are not supported.
g) As there is no concept of cross-database connections, linked servers are not available in SQL Azure at this time.
h) SQL Azure is a shared environment, so there is no concept of Windows logins.
i) Always drop tempdb objects as soon as they are no longer needed, as they create pressure on tempdb.
j) During bulk inserts, use the batch size option to limit the number of rows inserted per batch; this limits transaction log usage (see the sketch after this list).
k) Avoid unnecessary grouping or blocking ORDER BY operations, as they lead to high memory usage.
l) Microsoft may support rich Reporting Services in the future.
m) Distributed transactions are not supported in SQL Azure.
n) The SELECT INTO statement is not supported in SQL Azure.
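
To make points (a) and (j) concrete, here is a small sketch; the database, table and server names are made-up placeholders, not real objects.

    -- (a) "Backup" in SQL Azure today means creating a database copy:
    CREATE DATABASE SalesDb_Copy AS COPY OF SalesDb;

    -- (j) Keep bulk loads batched so the transaction log is not hammered.
    -- With the bcp client utility, the -b switch sets the batch size:
    --   bcp SalesDb.dbo.Orders in orders.dat -c -b 1000 -S tcp:<server>.database.windows.net -U <user>@<server> -P <password>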

Now you are ready to take off: install SSMS 2008 R2 and create your own database in the cloud; a user database can be 1 GB to 5 GB on the Web edition, as shown below. SQL Azure databases support standard APIs like ODBC and ADO.NET, two very popular technologies for manipulating data.
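
For example, a minimal sketch of creating a Web edition database from SSMS, connected to the SQL Azure master database (the database name is hypothetical):

    CREATE DATABASE SalesDb
    (
        EDITION = 'web',
        MAXSIZE = 5 GB   -- Web edition allows 1 GB or 5 GB
    );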

Tuesday 19 July 2011

DBCC CHECKDB (Transact-SQL) for a Very Large Database


CHECKDB consistency checking options for a VLDB (Very Large Database): this is a question that comes up a lot - how do you run consistency checks on a VLDB?
Nowadays, databases of hundreds of GB, 1 TB or more are common on SQL Server 2000 and 2005. Any experienced DBA knows the value of running consistency checks, even when the system is behaving perfectly and the hardware is rock solid. The two problems people have with running a full CHECKDB on their VLDB are:
  • It takes a long time to run.
  • It uses lots of resources – memory, CPU, IO bandwidth, tempdb space.
Even with a decent-sized maintenance window, the CHECKDB may run over into normal operations. There's also the case of a system that's already pegged in one or more resource dimensions. Whatever the case, there are a number of options:
  • Don't run consistency checks
  • Run CHECKDB using the WITH PHYSICAL_ONLY option
  • Use SQL Server 2005's partitioning feature and devise a consistency checking plan around that
  • Figure out your own scheme to divide up the consistency checking work over several days
  • Offload the consistency checks to a separate system
Use WITH PHYSICAL_ONLY
A full CHECKDB does a lot of stuff - see previous posts in this series for more details. You can vastly reduce the run-time and resource usage of CHECKDB by using the WITH PHYSICAL_ONLY option. With this option, CHECKDB will:
  • Run the equivalent of DBCC CHECKALLOC (i.e. check all the allocation structures)
  • Read and audit every allocated page in the database
So it skips all the logical checks, inter-page checks, and things like DBCC CHECKCATALOG. The fact that all allocated pages are read means that:
  • If page checksums are enabled in SQL Server 2005, any corruptions caused by the IO subsystem will be discovered as the page checksum will be checked as part of reading the page into the buffer pool
So there's a trade-off of consistency checking depth against run-time and resource usage - but this option will pick up problems caused by the IO subsystem as long as page checksums are enabled and present.
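For example, a minimal sketch of the lighter-weight check (the database name is hypothetical):

    DBCC CHECKDB (N'SalesDb') WITH PHYSICAL_ONLY, NO_INFOMSGS;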
Use the SQL Server 2005 partitioning feature
If you're using the partitioning feature in SQL Server 2005 then you're already set up for this. Given that you've hopefully got your partitions stored on separate filegroups, you can use the DBCC CHECKFILEGROUP command.
It makes sense that you don't need to check the read-only filegroups as often as the current month's filegroup so an example consistency checking scheme could be:
  • Run a DBCC CHECKFILEGROUP on each read-only filegroup every week or two
  • Run a DBCC CHECKFILEGROUP on the read-write filegroup every day or two (depending on the stability of the hardware, the criticality of the data, and the frequency and comprehensiveness of your backup strategy).
I know of several companies who've made the decision to move to SQL Server 2005 in part because of this capability to easily divide up the consistency checking.
Beware that until SP2 of SQL Server 2005, DBCC CHECKFILEGROUP would not check a table at all if it was split over multiple filegroups. This is now fixed, and DBCC CHECKFILEGROUP will check the partitions on the specified filegroup even if the table is not completely contained on that filegroup.
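A minimal sketch of such a scheme, assuming hypothetical database and filegroup names:

    USE SalesDb;
    GO
    -- Every day or two: the current, read-write filegroup
    DBCC CHECKFILEGROUP (N'FG_CurrentMonth') WITH NO_INFOMSGS;
    GO
    -- Every week or two: each older, read-only filegroup
    DBCC CHECKFILEGROUP (N'FG_2011_06') WITH NO_INFOMSGS;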
Figure out your own way to partition the checks
If you're on SQL Server 2000, or you just haven't partitioned your database on SQL Server 2005, then there are ways you can split up the consistency checking workload so that it fits within a maintenance window. Here's one scheme that I've recommended to several customers:
  • Figure out your largest tables (by number of pages) and split them into 7 buckets, such that there is a roughly equal number of database pages in each bucket.
  • Take all the remaining tables in the database and divide them equally between the 7 buckets (using number of pages again)
  • On Sunday:
    • Run a DBCC CHECKALLOC
    • Run a DBCC CHECKCATALOG
    • Run a DBCC CHECKTABLE on each table in the first bucket
  • On Monday, Tuesday, Wednesday:
    • Run a DBCC CHECKTABLE on each table in the 2nd, 3rd, 4th buckets, respectively
  • On Thursday:
    • Run a DBCC CHECKALLOC
    • Run a DBCC CHECKTABLE on each table in the 5th bucket
  • On Friday and Saturday:
    • Run a DBCC CHECKTABLE on each table in the 6th and 7th buckets, respectively
In pre-RTM builds of SQL Server 2005, DBCC CHECKTABLE could not bind to the critical system tables, just like with T-SQL - but that's fixed so you can cover all system tables in SQL Server 2000 and 2005 using the method above.
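A minimal sketch of what the Sunday job might look like under this scheme (the database and table names are hypothetical placeholders for "bucket 1"):

    USE SalesDb;
    GO
    DBCC CHECKALLOC (N'SalesDb') WITH NO_INFOMSGS;
    DBCC CHECKCATALOG (N'SalesDb') WITH NO_INFOMSGS;
    DBCC CHECKTABLE (N'dbo.Orders') WITH NO_INFOMSGS;
    DBCC CHECKTABLE (N'dbo.OrderLines') WITH NO_INFOMSGS;
    -- ...one CHECKTABLE per remaining table in bucket 1.
    -- The Monday-Saturday jobs run CHECKTABLE over buckets 2-7,
    -- with another CHECKALLOC on Thursday.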
There's one drawback to this method - a new internal database snapshot is created each time you start a new DBCC command, even for a DBCC CHECKTABLE. If the update workload on the database is significant, then there could be a lot of transaction log to recover each time the database snapshot is created - leading to a long total run-time.
Use a separate system
This alternative is relatively simple - restore your backup (you are taking regular backups, right?) on another system and run a full CHECKDB on the restored database. This offloads the consistency checking burden from the production system and also allows you to check that your backups are valid. There are a couple of drawbacks, however:
  • If the production database is several TB, you need the same several TB on the spare box. This equates to a non-trivial amount of money - initial capital investment plus ongoing storage management costs. (Hopefully a future release will alleviate this – while at Microsoft I invented and patented a mechanism for consistency checking a database in a backup without restoring it.)
  • If the consistency checks find an error, you don't know for sure that the database is corrupt on the production system. The only way to know for sure is to run a consistency check on the production system. This is a small price to pay though, because most of the time the consistency checks on the spare system will be ok, so you know the production database was clean at the time the backup was taken.
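Mechanically, the offload itself is simple. A minimal sketch on the spare server, assuming hypothetical file paths and logical file names:

    RESTORE DATABASE SalesDb_Check
    FROM DISK = N'D:\Backups\SalesDb_Full.bak'
    WITH MOVE N'SalesDb_Data' TO N'E:\Data\SalesDb_Check.mdf',
         MOVE N'SalesDb_Log'  TO N'F:\Log\SalesDb_Check.ldf',
         REPLACE;
    GO
    DBCC CHECKDB (N'SalesDb_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;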
Summary
You have many choices for running consistency checks, so there's really no excuse for not knowing that something's gone wrong with your database... See you next time...

Wednesday 13 July 2011

Genesis of SQL Server from 1989 (creation) till date...

Hi friends, I'm back with a technical post; I hope you all like it. Given my profession, I thought I'd write something related to SQL Server, and since this is my first technical blog post, let's pen down the history of SQL Server from its creation till date...

Microsoft SQL training is important to IT professionals interested in knowing how to work with the product, and a little history of Microsoft SQL Server helps too. Basically, the code for MS SQL Server came from Sybase SQL Server, Microsoft's first attempt at a database product; it competed against Sybase, IBM and Oracle. Sybase, Microsoft and Ashton-Tate worked together to create the first version of SQL Server, which ended up being pretty much the same as the third edition of Sybase SQL Server. Microsoft SQL Server 4.2 then shipped in 1992, and version 4.21 arrived at the same time as Windows NT 3.1. The first version of SQL Server that did not include any assistance from Sybase was Microsoft SQL Server 6.0.

When Windows NT appeared, Sybase and Microsoft moved on to pursue their own interests, which allowed Microsoft to negotiate exclusive rights to the versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its server to Adaptive Server Enterprise to avoid confusion with the Microsoft version. Many revisions have been made without assistance from Sybase since the two parted ways, and the first fully GUI-based database server release was a complete departure from the old Sybase code.
In the five years since the release of Microsoft's previous SQL Server product (SQL Server 2000), advancements were made in performance, in the client IDE tools, and in several complementary systems packaged with SQL Server 2005, including Analysis Services, ETL (Integration Services), and messaging technologies such as Notification Services and Service Broker.
SQL Server 2005 (codename Yukon), released in October 2005, is the successor to SQL Server 2000. It included native support for managing XML data, in addition to relational data. For relational data, T-SQL has been augmented with error handling features (try/catch) and support for recursive queries with CTEs (Common Table Expressions). SQL Server 2005 has also been enhanced with new indexing algorithms, syntax and better error recovery systems.

SQL Server 2005 introduced "MARS" (Multiple Active Results Sets), a method of allowing usage of database connections for multiple purposes.

SQL Server 2005 introduced DMVs (Dynamic Management Views), which are specialized views and functions that return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
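
For instance, a quick sketch of a DMV query that shows what is currently executing on an instance:

    SELECT session_id, status, command, wait_type, total_elapsed_time
    FROM sys.dm_exec_requests
    WHERE session_id > 50;   -- filter out most system sessions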

SQL Server 2005 introduced Database Mirroring, but it was not fully supported until the first Service Pack release (SP1).

Approximately five years after the release of SQL Server 2005, SQL Server 2008 R2 (formerly codenamed SQL Server "Kilimanjaro") was announced at TechEd 2009 and released to manufacturing on April 21, 2010. SQL Server 2008 R2 adds certain features to SQL Server 2008, including a master data management system branded as Master Data Services, which provides central management of master data entities and hierarchies, and Multi Server Management, a centralized console to manage multiple SQL Server 2008 instances and services, including relational databases, Reporting Services, Analysis Services and Integration Services.
SQL Server 2008 R2 also includes a number of new services, including PowerPivot for Excel and SharePoint, Master Data Services, StreamInsight, Report Builder 3.0, the Reporting Services Add-in for SharePoint, a data-tier function in Visual Studio that enables packaging of tiered databases as part of an application, and a SQL Server Utility named UCP (Utility Control Point), part of Application and Multi-Server Management (AMSM), which is used to manage multiple SQL Servers.
The next release of SQL Server, code-named Denali, is right around the corner. This version is one of my favorites of all the releases, bringing huge improvements with its early CTP releases. As it's my favorite, I will dig into its features in detail in my next post.

Overall summary of SQL Server evolution:


In 1989, Microsoft released its first version of SQL Server. It was developed jointly by Microsoft and Sybase for the OS/2 platform.

  • 1993 – SQL Server 4.21 for Windows NT
  • 1995 – SQL Server 6.0, codenamed SQL95
  • 1996 – SQL Server 6.5, codenamed Hydra
  • 1999 – SQL Server 7.0, codenamed Sphinx
  • 1999 – SQL Server 7.0 OLAP, codenamed Plato
  • 2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
  • 2003 – SQL Server 2000 64-bit, codenamed Liberty
  • 2005 – SQL Server 2005, codenamed Yukon (version 9.0)
  • 2008 – SQL Server 2008, codenamed Katmai (version 10.0)
  • 2010 – SQL Server 2008 R2, Codenamed Kilimanjaro (aka KJ)
  • Next – SQL Server 2011, Codenamed Denali
Please share your suggestions. Finally, thanks to all readers for reading my first technical post, and feel free to comment with your opinions and suggestions...






Tuesday 12 July 2011

My 1st Ever Blog Post......

Well, here it is, my first ever blog post on my own personal blog. I feel like I am sitting in a new car looking around going, "Wow, this is mine." First and foremost, I have to thank my sis and all my friends who helped me get started with this blog. I am truly grateful, and we will certainly learn more about databases and data warehousing down the road.

All the usual rules apply.....
become a follower... post about almost everything on your blog... and leave a comment

So, PLEASE, offer your comments, suggestions and (most importantly) your involvement in this blog.
Take care, and thank you in advance for your support. To everyone who has left me comments on my posts: I REALLY appreciate every one of them.