In this article we will describe the auditing potential of SQL Server’s default trace, how the trace is implemented and what it contains. Then we will show you how to directly access data from the trace file for auditing and troubleshooting. Finally we will outline a more complete auditing solution built on a custom trace.
About the default trace
SQL Server’s default trace was introduced in SQL Server 2005 to supply data for the performance and auditing reports available through the Management Studio Object Explorer.
For example, the Schema Changes History report is populated from default trace data. You can access this report by right-clicking a database in Object Explorer, selecting Reports from the menu, then selecting Schema Changes History.
The default trace can be disabled by setting the configuration option Default Trace Enabled to zero, but that is the only control you have over it. You cannot add or remove events or data columns as you can with a normal trace.
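As a quick sketch, you can check the current setting with sp_configure. Default Trace Enabled is an advanced option, so advanced options must be visible first:

-- Default Trace Enabled is an advanced option, so show advanced options
-- must be on before you can view or change it.
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'default trace enabled';        -- run_value 1 = enabled (the default)
-- exec sp_configure 'default trace enabled', 0;  -- uncomment to disable
-- reconfigure;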
The default trace is composed of five 20 megabyte trace files that are used in round-robin fashion. When the first file is full, the trace moves to the next file in the sequence. When the fifth file is full, the first file is deleted and a new file is created.
The trace files are located in the log subdirectory of SQL Server’s install directory. If you installed to the default directory, the path will look something like this:
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log_xxx.trc
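If you are not sure where the files live on a given instance, the full path of the currently active default trace file, including the current rollover number in the file name, can be read from sys.traces:

-- Returns the path of the active default trace file.
select path from sys.traces where is_default = 1;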
Using the default trace for auditing and troubleshooting
Even though you cannot change the trace to capture different data, you are able to query the trace file and specify just the data you want to see, filtering out the rest. This is made possible by the function fn_trace_gettable. This function allows you to query one or more trace files as if they were a single table. The function can query both active and inactive trace files.
fn_trace_gettable takes two parameters, the path to the trace file and an integer value representing the number of rollover files to include. When you are querying the default trace, the number of files is always 5.
The trace does not store the name of the events monitored, only the event id. In order to get easily readable output you need to join fn_trace_gettable to sys.trace_events to get the name of the event. Here is a sample function call that will return the default trace data from my instance of SQL Server 2008. You will have to adjust the path and the trace file name to suit your environment:
select e.name, t.*
from fn_trace_gettable( 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log_716.trc', 5 ) as t
join sys.trace_events e on e.trace_event_id = t.EventClass
This query returns all the rows in the trace with all the column information. This is not particularly useful because of the quantity of data returned. However fn_trace_gettable can be used any way that a table can be used. You can filter results with a WHERE clause. You can limit the data returned by specifying only the columns you want to see. If you want, you can periodically select data into a permanent table to create a history of the events of interest to you.
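For example, here is a minimal sketch of archiving trace rows into a permanent table. The database and table names (AuditDB, dbo.TraceHistory) are hypothetical; adjust them and the trace path for your environment:

-- Create an empty archive table with the same schema as the trace output.
-- AuditDB and dbo.TraceHistory are assumed names.
if object_id('AuditDB.dbo.TraceHistory') is null
    select t.*
    into AuditDB.dbo.TraceHistory
    from fn_trace_gettable( 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log_716.trc', 5 ) as t
    where 1 = 0;

-- Append only rows newer than what has already been archived.
-- Note: rows with a null StartTime are not picked up by this simple approach.
insert into AuditDB.dbo.TraceHistory
select t.*
from fn_trace_gettable( 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log_716.trc', 5 ) as t
where t.StartTime > ( select isnull(max(StartTime), '1900-01-01') from AuditDB.dbo.TraceHistory );

Scheduled as a SQL Agent job, a query like this keeps a history beyond the 100 megabytes retained by the rollover files.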
Let’s look at a simple use case: shortly after 3 PM, the CPU usage on your SQL Server jumps suddenly to 98%. No scheduled jobs are running and you can’t find anything out of the ordinary going on. You suspect that a developer may have changed a stored procedure in the application database. You run this query to find out whether any objects have been altered since 3 PM:
select e.name
, t.StartTime
, t.DatabaseName
, t.ObjectName
, t.LoginName
, t.HostName
, t.ApplicationName
, t.SPID
, t.EventClass
from fn_trace_gettable( 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log_716.trc', 5 ) as t
join sys.trace_events e on e.trace_event_id = t.EventClass
where t.ObjectName is not null
and t.EventClass = 164 -- Object:Altered
and t.StartTime > '2010-06-15 15:00:00.000'
order by t.StartTime
You find that a stored procedure was altered in the database just after 3 PM. You have the developer roll back the procedure to its previous version. The CPU usage drops to a normal range.
As you can see, querying the default trace is very useful for troubleshooting, and there are special cases where it can be used for ongoing auditing of specific events. However, you will run into countless frustrations if you attempt to use the default trace for comprehensive auditing.
The events in the default trace were selected to provide data for specific purposes. Important auditing events were left out, and other events not particularly useful for auditing were included.
Another drawback of the default trace is that only the most recent 100 megabytes of data are available to you. In a busy environment, 100 MB of data can be generated in a very short time. Unless you set up a method of archiving the trace data, you may not have an audit trail going back far enough to be useful.
Auditing with a custom trace
The ability to selectively pull data out of any trace file using fn_trace_gettable opens up the possibility of doing more serious auditing with a custom trace.
You should be aware of the limitations of auditing via a trace. The most important limitation is the load put on the SQL Server by a second trace running while the default trace is also running. However, if your custom trace is created in TSQL code rather than through the Profiler GUI, that will reduce the load on the server significantly. We don’t have space in this article to cover tracing with TSQL in depth, but a minimal sketch follows the next paragraph.
Choosing a minimal set of events and columns carefully will further reduce the load as well as the disk space requirements. You might also consider filtering on the IsSystem column, keeping only rows where IsSystem = 0. This filters out many rows produced by internal system operations. These are often not relevant to a security audit, but you should make your own decision about that.
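Here is a minimal sketch of such a trace, created with the sp_trace_* system procedures rather than the Profiler GUI. The output path C:\AuditTraces\audit_trace is an assumption (the directory must exist and be writable by the SQL Server service account), and a real audit trace would register more events than the single Object:Altered event shown here:

-- A sketch of a server-side trace created in TSQL. Path and event choices
-- are illustrative assumptions; adjust them for your environment.
declare @traceid int, @maxfilesize bigint, @on bit, @intfilter int;
set @maxfilesize = 20;  -- megabytes per file
set @on = 1;
set @intfilter = 0;

-- Option 2 = TRACE_FILE_ROLLOVER; keep 5 rollover files, like the default trace.
-- SQL Server appends the .trc extension to the file name automatically.
exec sp_trace_create @traceid output, 2, N'C:\AuditTraces\audit_trace', @maxfilesize, null, 5;

-- Capture Object:Altered (event 164) with a few useful columns:
-- 11 = LoginName, 14 = StartTime, 34 = ObjectName, 35 = DatabaseName.
-- A real audit trace would register more events and columns the same way.
exec sp_trace_setevent @traceid, 164, 11, @on;
exec sp_trace_setevent @traceid, 164, 14, @on;
exec sp_trace_setevent @traceid, 164, 34, @on;
exec sp_trace_setevent @traceid, 164, 35, @on;

-- Filter: IsSystem (column 60) = 0; logical operator 0 = AND,
-- comparison operator 0 = equal.
exec sp_trace_setfilter @traceid, 60, 0, 0, @intfilter;

-- Status 1 starts the trace; 0 stops it; 2 closes and deletes the definition.
exec sp_trace_setstatus @traceid, 1;

The numeric event and column ids used above can be looked up by name in sys.trace_events and sys.trace_columns.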
Selecting Events
Depending on your specific requirements, you might be able to use just the events in the Objects group and the Security Audit group. Although login data can be useful on occasion, it is sometimes too expensive to collect; some applications generate hundreds of logins per second. Audit Login Failed events are normally far less frequent, and from a security auditing perspective they are of more interest than successful logins.
Audit Schema Object Access also produces a lot of rows, but it is very useful for security auditing. If it is possible to include it without affecting performance, I would do so. This event fires each time a permission is exercised on a database object. For example, it will record when a specific user accessed the CreditCard table. It will identify the type of access (e.g. select, update, delete), give you the text of the code that was executed, and tell you whether the attempt to access the object was successful. This means that you may not need to audit TSQL and stored procedure executions if you include Audit Schema Object Access, which will significantly reduce the load the trace puts on the server.
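Assuming your custom trace includes this event and writes to C:\AuditTraces\audit_trace.trc (a hypothetical path), a query along these lines pulls out just those rows. Joining on the event name avoids hard-coding the event id:

-- Passing default as the second parameter reads all rollover files.
select t.StartTime, t.LoginName, t.DatabaseName, t.ObjectName
, t.TextData   -- the code that was executed
, t.Success    -- 1 = access succeeded, 0 = access failed
from fn_trace_gettable( N'C:\AuditTraces\audit_trace.trc', default ) as t
join sys.trace_events e on e.trace_event_id = t.EventClass
where e.name = 'Audit Schema Object Access Event'
order by t.StartTime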
Putting it all together
A successful auditing policy is composed of two elements. The first is to capture the necessary data. The second element is to understand what the collected data means and transform it into useful information.
Of the two, the second is the more difficult. It is complicated in this case by the fact that different events in a trace use the data columns differently. Every event populates data only into a subset of the available columns of the trace. The rest of the columns in that row are null. So, to query data intelligently you must know which columns are relevant for a given event and what the data in them means.
If you search the web for a trace event name, you will find a Microsoft documentation page that gives you a detailed description of both the event and the column data associated with it.
However, each page of documentation has the details for only one event. There are 4,304 valid combinations of events and columns that you can include in (or exclude from) a trace. The query below will give you a list of the populated columns for any event or events you select. If you remove the WHERE clause, you will get all 4,304 permissible combinations.
SELECT c.name as 'Category', e.name as 'Event', col.name as 'Column'
FROM sys.trace_events e
JOIN sys.trace_event_bindings b ON e.trace_event_id = b.trace_event_id
JOIN sys.trace_columns col ON b.trace_column_id = col.trace_column_id
JOIN sys.trace_categories c ON e.category_id = c.category_id
WHERE e.name in ('Audit Schema Object Access Event')
ORDER BY c.name, e.name, col.name;
This is faster than searching the web for documentation when you are writing queries to extract data from a trace file.
In Review
Knowing how to use fn_trace_gettable to extract information from a trace file gives you access to a great deal of information in the default trace for troubleshooting problems or auditing changes. However, the default trace falls short of being a complete security auditing tool. If you want to run a more comprehensive security audit, you must create a custom trace that includes all the relevant events. Sometimes an acceptable audit can be done using only the events in the Objects and Security Audit event groups.
When auditing you must consider the load your trace is putting on the server. Try to minimize that load as much as possible:
- Create and run the trace using TSQL instead of the SQL Profiler GUI.
- Use only the events and columns necessary to capture the information of interest to you. Even if an event is of use, it may not be necessary to include all of the available columns for that event.
- Consider filtering out system events using a filter on the IsSystem column.
- Consider other filters that will minimize the number of rows collected.