
Hindsight: What Would You Do Differently If Your SQL Server Database Was Damaged?
If you are responsible for database administration on Microsoft SQL Server, it’s important to make sure your systems are protected against data loss. Sometimes, though, you may find that the backup is damaged or simply doesn’t exist. This article discusses what can be done in such a case. First let’s take a look at how some common mistakes can damage your database. Then I’ll show how you can recover from these problems quickly without recreating your entire database…
A Few Common Mistakes That Can Damage Your Database Files and How to Avoid Them!
Consider the following scenarios:
- You decide to run DBCC CHECKDB or another maintenance task against a 30 GB production database using an account with basic permissions only.
- You forget to drop a temporary table after use. Or the same temporary object exists in different databases, created by different users with different permissions.
- You enable auto shrink on your user database. The shrink operation then runs automatically whenever a file has more than roughly a quarter of its space free, which can hurt performance badly and leaves indexes heavily fragmented.
- After adding a new index in production, you decide that it is not necessary and drop it without first confirming that it really is unused (the script after this list shows how to check both the index usage and the auto-shrink setting).
- You make changes to a read-only database, such as applying SQL Server patches or adding new users. There may also be periods during the day when the DBA is away and the read-only flag gets removed by another person responsible for SQL Server (i.e. a server administrator).
- You enable full-text catalogs on a user database using an account with basic permissions only.
- You run DBCC SHRINKDATABASE without first shrinking the individual database files manually. This may cause severe index fragmentation, depending on how much data was in the files before shrinking.
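Two of the mistakes above are easy to catch before they bite. The following sketch, which assumes the testdb database and dbo.TestTable table used later in this article, checks whether AUTO_SHRINK is enabled and whether an index has actually been used since the last service restart:

USE testdb;
GO
-- 1. Is auto shrink switched on for this database?
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = 'testdb';
GO
-- 2. Has each index on the table actually been used?
SELECT i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id   = i.object_id
      AND s.index_id    = i.index_id
      AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.TestTable');
GO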
Now that you have seen several scenarios that can damage your SQL Server databases, let’s take a look at what you should do after these events to restore your system as easily as possible…
What to Do In The Case Of Database File Damage?
There are two options:
1) You have a damaged database and you also have backups. You can restore the last full backup and then apply the transaction log backups on top of it. This way your data will be consistent, with no lost transactions.
2) You don’t have a backup, but you at least know which file is damaged or which files can still be read. You’ll need to salvage what you can from those files (using bcp or disk cloning tools, for example), load it into a freshly created database, and then back that database up and treat it as your new baseline.
Applying transaction log backups (option 1) is only possible when the database uses the FULL (or BULK_LOGGED) recovery model; under SIMPLE, log backups cannot be taken at all, so only the most recent full or differential backup can be restored. So after a file is damaged, check the recovery model of your database before deciding which route to take.
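As a rough sketch of option 1, assuming the database uses the FULL recovery model and that a full backup plus a later log backup exist on disk (the file names below are purely illustrative):

-- Restore the last full backup without recovering, then roll forward the log backup.
RESTORE DATABASE testdb
    FROM DISK = N'D:\Backups\testdb_full.bak'
    WITH NORECOVERY, REPLACE;
GO
RESTORE LOG testdb
    FROM DISK = N'D:\Backups\testdb_log.trn'
    WITH RECOVERY;
GO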
When To Use Options 1 And 2?
- Option 1 is great for simple damage when you have a backup. In this case nothing advanced is required at the file-system level, and you can do it easily on your own by running a short Transact-SQL script or using another tool.
- Option 2 is useful in more complicated scenarios where damaged files first need to be assembled or repaired before they can be used to build a new full backup (a bcp sketch follows this list). This type of scenario requires deeper knowledge of Windows internals and data-recovery tools such as disk cloning utilities or special programs that can read and write raw disk sectors (R-Studio, for example).
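As a minimal illustration of the bcp route, assuming the damaged file has been made readable again and a fresh database (called testdb_new here purely for illustration, with the target table already created in it) is ready to receive the data:

rem Export the surviving data from the repaired copy, then import it into the new database.
bcp testdb.dbo.TestTable out C:\Recovery\TestTable.dat -S MYSERVER -T -n
bcp testdb_new.dbo.TestTable in C:\Recovery\TestTable.dat -S MYSERVER -T -n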
For the following scenarios we’ll use a database called “testdb” with three filegroups: “data”, “indexes” and “backup”. The database uses the SIMPLE recovery model.
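For reference, here is a minimal sketch of how such a database could be created; the file paths are illustrative and no explicit sizes are given:

CREATE DATABASE testdb
ON PRIMARY
    (NAME = testdb_primary,  FILENAME = N'D:\Data\testdb.mdf'),
FILEGROUP [data]
    (NAME = testdb_data,     FILENAME = N'D:\Data\testdb_data.ndf'),
FILEGROUP [indexes]
    (NAME = testdb_indexes,  FILENAME = N'D:\Data\testdb_indexes.ndf'),
FILEGROUP [backup]
    (NAME = testdb_backup,   FILENAME = N'D:\Data\testdb_backup.ndf')
LOG ON
    (NAME = testdb_log,      FILENAME = N'D:\Data\testdb_log.ldf');
GO
ALTER DATABASE testdb SET RECOVERY SIMPLE;
GO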
1) A table is accidentally dropped, for example by an account that should only have db_datareader permissions (note that, unlike some other database engines, SQL Server’s DROP TABLE has no CASCADE option):
USE master;
GO
-- Make sure testdb uses the SIMPLE recovery model for this example.
ALTER DATABASE testdb SET RECOVERY SIMPLE;
GO
USE testdb;
GO
DROP TABLE dbo.TestTable;
GO
-- sys.database_files has no free_space column; derive it from FILEPROPERTY instead.
SELECT name,
       type_desc,
       state_desc,
       size,
       size - FILEPROPERTY(name, 'SpaceUsed') AS free_pages
FROM sys.database_files;
GO
USE master;
GO
-- sp_changedbowner takes a single login name; 'sa' is used here purely as an example owner.
-- (The procedure is deprecated; ALTER AUTHORIZATION ON DATABASE is the modern equivalent.)
EXEC sp_changedbowner 'sa';
GO
2) Shut down the SQL Server service, run chkdsk on the drive where the database data file is located, and copy the file to another location:
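Assuming the default SQL Server instance and that the data file lives on drive D: (both are illustrative), the sequence from an elevated command prompt looks roughly like this:

rem Stop the default instance so the data files are closed.
net stop MSSQLSERVER
rem Check and repair the file system on the drive holding the data file.
chkdsk D: /f
rem Copy the data file to a safe location before doing anything else.
copy D:\Data\testdb_data.ndf E:\Recovery\testdb_data.ndf
rem Bring SQL Server back online.
net start MSSQLSERVER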
3) Enable the AUTO_SHRINK option on a user database, either in SSMS Object Explorer under Databases -> testdb -> Properties -> Options (set Auto Shrink to True if it is not already) or with the script below, and then restart the SQL Server service:
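The same option can be toggled from a query window instead of the SSMS dialog:

ALTER DATABASE testdb SET AUTO_SHRINK ON;
GO
-- And to turn it back off once the test is done:
ALTER DATABASE testdb SET AUTO_SHRINK OFF;
GO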
4) Add a new nonclustered index to a user database without first checking whether it is really needed:

USE testdb;
GO
-- Assumes dbo.TestTable exists with a column named col2.
CREATE NONCLUSTERED INDEX idx_nc1 ON dbo.TestTable (col2);
GO
SELECT name,
       type_desc,
       state_desc,
       size,
       size - FILEPROPERTY(name, 'SpaceUsed') AS free_pages
FROM sys.database_files;
GO
Conclusion:
In the first two cases you could restore your data from a backup, because they are straightforward file-damage scenarios. The auto-shrink and index cases are a little more advanced and are best handled by an experienced DBA, since they involve issues such as repairing fragmented indexes.
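If you do end up with a heavily fragmented index, the usual starting point is to measure the fragmentation and then reorganize or rebuild the affected index. A minimal sketch, again assuming testdb, dbo.TestTable and the idx_nc1 index from the example above:

USE testdb;
GO
-- Measure fragmentation for every index on the table.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TestTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;
GO
-- Reorganize for light fragmentation; rebuild for heavy fragmentation.
ALTER INDEX idx_nc1 ON dbo.TestTable REORGANIZE;
-- ALTER INDEX idx_nc1 ON dbo.TestTable REBUILD;
GO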