TOP 7 TFS Storage Problems – Part 1

Posted by Ryan Riehle - August 01, 2012


Running out of space on your TFS system?

I'm constantly working with clients to implement and troubleshoot their TFS systems, and I see common configuration problems and practices that can cause servers to perform poorly or run out of disk space. I started writing a “quick” post about this, but it turned out to be a bigger topic than I expected, so I'm making it a multi-part series covering the top storage configuration practices you can use to address the problems we most often see in the field and around the Internet:

  1. SQL Server files are taking up a lot (if not all) of the space on the server.
  2. The Team Project Collection database is growing quickly because of the testing features (TFS 2010 and later only).
  3. Large binary files under version control.
  4. Tracing wasn’t turned off after troubleshooting TFS.
  5. Poor planning and stingy server admins.
  6. Unmanaged output from automated builds.
  7. SSAS (Tfs_Analysis) cube size not managed.

#1 Problem - SQL Server files taking up a lot (if not all) of the space on the server.

Not every TFS administrator is knowledgeable about SQL Server, so dealing with SQL Server Recovery Models and Transaction Logs can be confusing, especially for the “accidental” TFS admin. When SQL Server backups are not handled properly, the server can quickly run out of disk space; it's actually a common problem. Hopefully, the tips here can help you quickly resolve the problem if you are finding this during an out-of-space emergency, and help you avoid such an emergency in the future.

Please keep in mind that this is not the only reason why your server may be out of disk space - there are other possible causes and resolutions that I will cover soon in future posts, but this is certainly the most common cause.

If your server is failing because it's already out of disk space, correct this immediately by backing up your transaction logs. If you are only backing up your databases once a day or less, you should also make sure the Recovery Model in the properties of each database is set to SIMPLE (more on this later).
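
If you aren't sure which recovery model each database is using, a quick query like this will tell you (a minimal sketch; it just lists every database on the instance along with its recovery model):

-- List each database and its current recovery model (SIMPLE, FULL, or BULK_LOGGED)
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name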

Next, you can use the following SQL query to see how full the transaction log is for each of your databases:

DBCC SQLPERF(logspace)

To shrink your transaction log(s), run the following SQL statements against each database that is hogging all that disk space. However, keep in mind that your transaction log grew to its current size because it needed the space, and the log file takes a performance hit every time it has to grow; so don't shrink your transaction log too much:

Method 1:

-- Set database to SIMPLE Recovery Mode
ALTER DATABASE Your_Database_Name SET RECOVERY SIMPLE
GO
-- Shrink the log file to its default/initial size
DBCC SHRINKFILE(Your_Database_Name_log)
GO
-- Show updated information about your database
EXEC sp_helpdb Your_Database_Name
GO
-- Set your database back to FULL Recovery Mode. This is optional, but recommended.
-- If you are only performing database backups once a day or less, you should skip this step.
ALTER DATABASE Your_Database_Name SET RECOVERY FULL
GO
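
Note that DBCC SHRINKFILE takes the logical name of the file you want to shrink. For a TFS database that name is usually Your_Database_Name_log, but it isn't guaranteed to be; if you're not sure, a quick query like the following (run in the context of the database in question) will list the logical file names and their current sizes:

USE Your_Database_Name
GO
-- List the logical name, type, and current size (in 8 KB pages) of each file in the database
SELECT name, type_desc, size
FROM sys.database_files
GO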

Method 2:

-- Please Note: You may need to repeat these steps depending on the condition
-- of your database; if the transaction log didn't shrink to where you expect it
-- to after running these steps, then run it again.

-- Backup your transaction log. This will leave the file the same size on disk
-- but will free up space inside of it:
BACKUP LOG Your_Database_Name TO DISK = N'File_Path_Where_to_Store_the_Backup'
GO
USE Your_Database_Name
GO
-- This will shrink your transaction log file to its initial/default size.
DBCC SHRINKFILE(Your_Database_Name_log)
GO
-- Now perform a full backup of your database
BACKUP DATABASE Your_Database_Name TO DISK = N'File_Path_Where_to_Store_the_Backup'

For more information, check out the following articles:
http://support.microsoft.com/kb/907511
http://msdn.microsoft.com/en-us/library/ms189493.aspx
http://msdn.microsoft.com/en-us/library/ms178037(v=sql.105).aspx

Oh – and don't get any bright ideas about shrinking your DATA files, as this can severely fragment your database, making it really slow.

Make sure you regularly back up your database and familiarize yourself with the different recovery models (FULL vs. SIMPLE) (http://msdn.microsoft.com/en-us/library/ms175987(SQL.105).aspx). If your databases are in the FULL recovery model (the default), this implies that you plan to take transaction log backups after you back up your database; a regular FULL database backup does not back up the transaction log. If you only need to restore your database to the moment of your last full backup, then it is better to set your recovery model to SIMPLE. If you leave it set to FULL, taking transaction log backups frequently will keep the log files under control and enable you to restore your database to points in time after the last FULL database backup. For more information on managing the size of your transaction logs, see the following article: http://msdn.microsoft.com/en-us/library/ms365418(v=sql.105).aspx
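
To illustrate why those transaction log backups are worth taking, here is a rough sketch of a point-in-time restore. The file paths and the STOPAT timestamp are placeholders, and in a real recovery you would restore every log backup taken since the full backup, in order, not just one:

-- Restore the most recent FULL backup, leaving the database ready to accept log backups
RESTORE DATABASE Your_Database_Name
FROM DISK = N'File_Path_of_Your_Full_Backup'
WITH NORECOVERY
GO
-- Roll the database forward to a specific point in time from a log backup
RESTORE LOG Your_Database_Name
FROM DISK = N'File_Path_of_Your_Log_Backup'
WITH STOPAT = N'2012-08-01 14:30:00', RECOVERY
GO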

If you have TFS 2010 or later, you can install the TFS Power Tools on your TFS server. They include a backup tool that integrates with the TFS 2010 Administration Console and can set up full backups for you according to best practices. However, we find that unless you have hundreds of TFS users or a really busy server, taking daily FULL backups with frequent transaction log backups (every 15-30 minutes) will serve you fine in case there is a disaster.

TFS Power Tools for TFS 2010: http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f

Don’t Store Your Backups on the SQL Server

I also see people run out of space because they store SQL Server backups on the local drive of the SQL Server that hosts their TFS data. Avoid this; it's bad practice because it not only takes up space on that machine, it also increases your risk tremendously: if the drive on the SQL Server fails, you'll have a hard time using your backups to recover it. Instead, back up your databases to a network share or, even better, to tape.
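
As a rough example (the UNC path below is a placeholder for whatever share your backups actually live on), pointing a backup at a network share is no different than backing up locally:

-- Back up the database to a network share rather than the SQL Server's local drive.
-- WITH INIT overwrites any existing backup sets in the target file.
BACKUP DATABASE Your_Database_Name
TO DISK = N'\\Backup_Server\TFS_Backups\Your_Database_Name_Full.bak'
WITH INIT
GO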
