Turning on the Teradata Locking Logger
by Vasudeva Rao
First, we enable the Locking Logger via the DBS console window or the cnsterm subsystem, if it is not already enabled. The setting does not take effect until the database is restarted. The following is the dbscontrol help output for the flag that controls the Locking Logger:
9. LockLogger – This Field defines the system default for the locking logger. This allows the DBA to log the delays caused by database locks to help in identifying lock conflicts. To enable this feature set the field to TRUE. To disable the feature set the field to FALSE.
If LockLogger is already set to TRUE in DBScontrol, then this step is not necessary and “dumplocklog” can be executed as explained further below.
The following is how to use the cnsterm subsystem from the UNIX command prompt on the PDN node (root privilege is required):
# cnsterm 6
> start dbscontrol
# cnsterm 1
> display general <=== if flag 9 (LockLogger) is already set to true, you do not need to modify it.
> modify general 9=true
If the LockLogger flag has been changed from ‘false’ to ‘true’, a database restart is required for the change to take effect. There is no harm in leaving the flag set to true while waiting for a good time to restart the database. The following shows how to issue a TPARESET through the cnsterm subsystem from the UNIX command line.
# cnsterm 6
> query state
TPA is in the state: Logons are enabled – The system is quiescent <=== verify there are no requests in progress which will require a long rollback
> restart tpa dump=no coldwait enable locking logger
# rallsh -sv /usr/pde/bin/pdestate
<=== verify each node reports TPA state
After a database restart with the LockLogger flag set to true, the Locking Logger begins to accumulate lock information in a 64KB circular memory buffer.
Depending on how frequently the system encounters lock contention, this buffer will eventually wrap, but it usually spans several days of activity. Following a period of lock contention, run the dumplocklog utility to analyze the lock activity; it moves the data from the memory buffer to a database table where it can be queried.
Using “dumplocklog” to retrieve the information recorded by the Locking Logger:
As with dbscontrol, dumplocklog is available at the DBS console and cnsterm. We show the procedure for cnsterm:
# cnsterm 6
> start dumplocklog
# cnsterm 1
Enter your logon string: <username,password> (Q/q to quit)
> systemfe, service
Do you want this utility to run continuously? (Y/N)
>n <=== see the note below regarding the possible impact of continuous operation.
Enter number of sessions (? For Help, Q/q to Quit):
>2 <=== see the note below regarding sessions
2 sessions will be logged on.
Enter the Character Set (? For Help, Q/q to Quit):
…You have chosen character set <ASCII >
Enter the table name where the lock log entries will be written (? For Help, Q/q to Quit):
Do you want this utility to create table “SYSTEMFE”.”LOCKLOG”? (Y/N)
Table “SYSTEMFE”.”LOCKLOG” has been created.
Enter the time constraint to control selection of lock log entries that were generated AT or LATER than this time YYYYMMDD HHMMSS
(? For Help, Q/q to Quit):
> 20001116 120000
Enter the time constraint to control the selection of lock log entries that were generated AT or PRIOR TO this time YYYYMMDD HHMMSS
(? For Help, Q/q to Quit):
> 20001117 235959
Enter the object constraint for selection of lock log entries (? For Help, Q/q to Quit):
> * <=== * selects all objects; <database>.<tablename> selects a specific object. Type ? for help on syntax options.
> [return] Writing lock log entries to table “SYSTEMFE”.”LOCKLOG”.
Press <F2> any time to stop the utility.
At this point it may take some time before a message like the following appears:
> 2,616 rows have been inserted to table “SYSTEMFE”.”LOCKLOG”.
*** DumpLockLog is terminated ***
The data now resides in the database table you specified and can be queried with the full flexibility of any table. Please refer to the manual for the complete table definition. Two plain-vanilla starting points are:
/* Generate a report based on the log output listing the users in lock conflict */
/* To find the number of outstanding locks for a given 10-minute period */
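The two reports described above might be sketched as follows. Note that the column names used here (LockTS, BlockedUser, BlockingUser, DatabaseName, TableName) are illustrative assumptions, not the actual LOCKLOG column names; consult the table definition in the manual and substitute the real columns before running these.

```sql
/* Assumed columns (adjust to the real LOCKLOG definition from the manual):
   LockTS       - timestamp of the lock event
   BlockedUser  - user whose request was delayed
   BlockingUser - user holding the conflicting lock
   DatabaseName, TableName - the object involved                          */

/* 1. Users in lock conflict: who was delayed, by whom, and on what */
SELECT BlockedUser, BlockingUser, DatabaseName, TableName,
       COUNT(*) AS Conflicts
FROM   SYSTEMFE.LOCKLOG
GROUP  BY 1, 2, 3, 4
ORDER  BY Conflicts DESC;

/* 2. Number of lock entries per 10-minute period */
SELECT CAST(LockTS AS DATE)             AS LockDate,
       EXTRACT(HOUR   FROM LockTS)      AS LockHour,
       EXTRACT(MINUTE FROM LockTS) / 10 AS TenMinuteSlot,
       COUNT(*)                         AS Locks
FROM   SYSTEMFE.LOCKLOG
GROUP  BY 1, 2, 3
ORDER  BY 1, 2, 3;
```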
For more complex analysis, refer to the Teradata Utilities Manual: Chapter 2 – Locking Logger Utility, Producing a Lock Log Report.
A customer wants to activate the Locking Logger only during certain periods when they perceive potential locking issues. As an example, they want the Locking Logger active and logging on Tuesday mornings from 7:00 A.M. to noon.
Outside of this period they want the Locking Logger available but inactive (not logging). From their reading, they believe they have to do a TPA reset every time, which is not acceptable to them. What is the recommended procedure?
The recommended procedure is to enable the background facility once (which requires a tpareset). Once this is done, locking activity is written to a 64KB circular buffer holding about 1,500 entries per AMP.
We recommend simply letting it run, because the performance impact is negligible; it is not worthwhile to try to stop the internal logging process. A large number of customers run permanently in this fashion with no observed degradation in response time or throughput.
Using the output of the circular buffer, generate lock log reports either periodically or continuously. We do not recommend continuous operation: in terms of session overhead and table space, continuous mode can have a significant impact (see also the related question below). The recommended approach is to invoke the report generator immediately after the analysis period and to specify a conservative number of sessions (start with one per node and adjust).
When continuous mode is chosen, the field that controls the number of sessions is grayed out and cannot be changed. The value in that field is 1, so it is unclear why the Locking Logger in continuous mode uses one session per vproc. Does Teradata know of a way to force the Locking Logger to use only one session in total in continuous mode?
We disabled the choice of the number of sessions in CONTINUOUS mode because, by choosing CONTINUOUS, the user essentially lets the LOCKING LOGGER take complete control.
The CONTINUOUS mode was originally designed as a kind of background execution of the LOCKING LOGGER. The Locking Logger’s primary function is to take data from the lock buffer in memory and insert it into a table on disk. Because these inserts can be done in parallel, we allow multiple sessions to perform the INSERTs.
However, these are not just any regular sessions; they are the so-called CONSOLE sessions, and there is a limit of six console sessions per PE vproc. Depending on what is running at the time, there are sometimes not enough of these sessions available (they get used up by other utilities).
Because the LOCKING LOGGER is just a utility, we cannot risk taking console sessions away from other, more important jobs. And because we cannot adjust the number of sessions dynamically once it is set, we have to choose a number based on the situation at the time the Locking Logger is activated. This number is chosen so that it leaves enough sessions for other utilities while still allowing sufficient parallelism for the Locking Logger’s insertion of rows.
We already know that this number will not make every user happy, and so we have advised users NOT to use the CONTINUOUS mode if they can help it. Remember that the LOCKING LOGGER is a HISTORICAL recording tool: the data it records may cover six months’ worth or two seconds’ worth of activity, and that does not require running in CONTINUOUS mode.