SQL Server database file size

Keeping track of database size and growth is a critical part of a DBA’s job, which I wrote about in an earlier blog post. Since then I have improved my scripts that report the status of all my database files (I say ‘files’ because each SQL Server database is made up of at least two files: a data file and a log file).
Many of my db’s are set to autogrow, but I also set maximum sizes to which I allow them to grow. This matters because I want to catch out-of-control database growth rather than just let it fill up my storage, so I need to check periodically where my db’s stand. The 25% threshold used below is arbitrary; pick any number that works for you. Also, if you have HUGE db’s you may need to widen the decimal precision in the output.

Here’s my script:

--all db files over 25% of their maximum size, with sizes too
select db.name as [db_name], mf.name,
'dbSizeinMB' =
cast (cast (mf.size*1.0/128 as decimal(9,2)) as nvarchar(30)), --size is in 8 KB pages
'MaxSizeinMB' =
case mf.max_size
when 0 then 'no growth is allowed.'
when -1 then 'autogrowth is on.'
when 268435456
then 'log file will grow to a maximum size of 2 tb.'
else cast (cast (mf.max_size*1.0/128 as decimal(9,2)) as nvarchar(30))
end,
'PercentageSize' =
case mf.max_size
when 0 then 'no growth is allowed.'
when -1 then 'autogrowth is on.'
else cast (cast(((mf.size*1.0)/(mf.max_size*1.0))*100 as decimal(9,2)) as nvarchar(30))
end
from sys.master_files mf
inner join sys.databases db on mf.database_id = db.database_id
where mf.max_size > 0 --skip no-growth (0) and unlimited (-1) files; also avoids divide-by-zero
and ((mf.size*1.0)/(mf.max_size*1.0)*100) > 25.0
order by db.name
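For very large databases, the decimal(9,2) casts above run out of headroom near 10 TB per file; a sketch of the same size expression with wider precision and converted to GB (this is an illustrative variant, not part of the report script):

```sql
-- size is stored in 8 KB pages: /128 gives MB, /131072 gives GB
select name,
cast(size * 1.0 / 131072 as decimal(12,2)) as dbSizeinGB
from sys.master_files;
```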

Of course you don’t always have time to run a script to check up on this, so I have a maintenance plan set up with the following content, which e-mails me the report every day at 6 AM:

--all db files over 25% full with sizes too
DECLARE @tableHTML NVARCHAR(MAX) ;

SET @tableHTML =
N'<H1>db file Percentage Size Check</H1>' +
N'<table border="1">' +
N'<tr><th>db_name</th><th>name</th>' +
N'<th>dbSizeinMB</th><th>MaxSizeinMB</th><th>PercentageSize</th></tr>' +
CAST ( ( SELECT td = db.name, '',
td = mf.name, '',
td = cast (cast (mf.size*1.0/128 as decimal(9,2)) as nvarchar(30)), '',
td = case mf.max_size
when 0 then 'no growth is allowed.'
when -1 then 'autogrowth is on.'
when 268435456
then 'log file will grow to a maximum size of 2 tb.'
else cast (cast (mf.max_size*1.0/128 as decimal(9,2)) as nvarchar(30))
end, '',
td = case mf.max_size
when 0 then 'no growth is allowed.'
when -1 then 'autogrowth is on.'
else cast (cast(((mf.size*1.0)/(mf.max_size*1.0))*100 as decimal(9,2)) as nvarchar(30))
end
FROM sys.master_files mf
inner join sys.databases db on mf.database_id = db.database_id
where mf.max_size > 0 --skip no-growth (0) and unlimited (-1) files; also avoids divide-by-zero
and ((mf.size*1.0)/(mf.max_size*1.0)*100) > 25.0
order by db.name
FOR XML PATH('tr'), TYPE
) AS NVARCHAR(MAX) ) +
N'</table>' ;

EXEC msdb.dbo.sp_send_dbmail
@profile_name ='My Profile',
@recipients=N'me@provider.com;you@provider.com',
@subject = 'MyServer db files over 25% max size',
@body = @tableHTML,
@body_format = 'HTML' ;
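Note that sp_send_dbmail only works once Database Mail is enabled on the instance and a mail profile exists (the 'My Profile' name above is a placeholder for whatever you created in the Database Mail configuration). A minimal sketch of turning the feature on:

```sql
-- enable the Database Mail extended stored procedures (off by default)
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'Database Mail XPs', 1;
reconfigure;
```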

This applies to SQL Server 2005 and newer.
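If you prefer a plain SQL Agent job over a maintenance plan, the daily 6 AM schedule can be sketched like this (the job, step, and schedule names are my own placeholders, and the step command would hold the report script above):

```sql
use msdb;
exec dbo.sp_add_job @job_name = N'db file size check';
exec dbo.sp_add_jobstep @job_name = N'db file size check',
@step_name = N'e-mail report',
@subsystem = N'TSQL',
@command = N'/* the sp_send_dbmail report script goes here */';
exec dbo.sp_add_jobschedule @job_name = N'db file size check',
@name = N'daily at 6 AM',
@freq_type = 4, -- daily
@freq_interval = 1, -- every day
@active_start_time = 060000; -- HHMMSS
exec dbo.sp_add_jobserver @job_name = N'db file size check';
```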

Identifying High Row Count Tables in SQL Server – New and Improved

A big part of good database maintenance is having a fair idea of where your tables stand in terms of row counts and primary keys. A few years back I took a stab at writing a generic script to return all tables with their row counts, with or without a primary key (PK). I revisited that script and made some improvements. One feature I added is a middle select that identifies tables with high turnover, defined (admittedly arbitrarily) as those whose row count is less than 10% of the maximum value of their identity column. This alerts me to tables whose identity I might want to reseed so I don’t reach the 2,147,483,647 maximum for integer identities. It unerringly points to my ‘scratch’ tables, which are generally wiped out after each use.


declare @tbltmpStats table (
Identity_Tables int identity (1, 1) not null ,
Table_Name nvarchar(200),
Rows int,
Identity_Max int)
declare @objectid int;
declare @objectname nvarchar(200);
declare @sql1 nvarchar(4000)
declare @objectcursor cursor

set @objectcursor=cursor for select o.object_id from sys.objects o
where type='u'
and objectproperty(o.object_id, N'TableHasIdentity')=1
and o.schema_id = 1
order by o.name
open @objectcursor
fetch next from @objectcursor into @objectid
while (@@fetch_status=0) begin
select @objectname=name from sys.objects where object_id=@objectid
set @sql1 = 'select ''' + @objectname + ''', count(*), max(identitycol) from ' + quotename(@objectname) --quotename guards against odd table names
insert into @tbltmpstats (Table_Name, Rows, Identity_Max)
exec (@sql1)
fetch next from @objectcursor into @objectid
end
close @objectcursor
update @tbltmpstats set Identity_Max = 0 where Rows = 0

select Table_Name, Rows, Identity_Max,(case (Identity_Max) when 0 then Null else (cast(Rows as decimal (10,2)) / cast(Identity_Max as decimal (10,2))) end) as [Rows/Identity_Max]
from @tbltmpstats
order by Rows desc,Identity_Max desc

--(Rows / Identity_Max) less than 10%
select 'Candidate for reseeding identity: ' as Watch, Table_Name from @tbltmpstats where Identity_Max > 0 and (cast(Rows as decimal (10,2)) / cast(Identity_Max as decimal (10,2)))<0.1

declare @tbltmpStats2 table (
NonIdentity_Tables int identity (1, 1) not null ,
Table_Name nvarchar(200),
Rows int
)
set @objectcursor=cursor for select o.object_id from sys.objects o
where type='u'
and objectproperty(o.object_id, N'TableHasIdentity')=0
and o.schema_id = 1
order by o.name
open @objectcursor
fetch next from @objectcursor into @objectid
while (@@fetch_status=0) begin
select @objectname=name from sys.objects where object_id=@objectid
set @sql1 = 'select ''' + @objectname + ''', count(*) from ' + quotename(@objectname) --quotename guards against odd table names
insert into @tbltmpstats2 (Table_Name, Rows)
exec (@sql1)
fetch next from @objectcursor into @objectid
end
close @objectcursor
select Table_Name, Rows from @tbltmpstats2 order by Rows desc
deallocate @objectcursor
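Once a table is flagged by that middle select, the reseed itself is a one-liner. A sketch, assuming a scratch table named dbo.MyScratchTable (a placeholder) that has just been emptied:

```sql
-- restart the identity so the next insert gets 1
-- only safe when the table is empty or you know the new seed won't collide
dbcc checkident (N'dbo.MyScratchTable', reseed, 0);
```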

That, of course, raises the question of why I’m not using temporary tables. An experienced DBA will tell you that there are individual answers to that question. I could say something like “Because!” or “It depends!”, but if that’s still not clear, the comic strip at Dilbert.com might give you better insight into the proper DBA attitude.