Turning on the Teradata Locking Logger
by Vasudeva Rao

First, we enable the Locking Logger via the DBS console window or the cnsterm subsystem, if it is not already enabled. The setting does not take effect until the database is restarted. The following is the dbscontrol help text for the flag that controls the Locking Logger:

9. LockLogger – This Field defines the system default for the locking logger. This allows the DBA to log the delays caused by database locks to help in identifying lock conflicts. To enable this feature set the field to TRUE. To disable the feature set the field to FALSE.

If LockLogger is already set to TRUE in dbscontrol, this step is not necessary, and dumplocklog can be executed as explained further below.

The following is how to use the cnsterm subsystem from the UNIX command prompt on the PDN node (root privilege is required):

# cnsterm 6
> start dbscontrol

[break] <=== use the key defined as intr (interrupt, a.k.a. break) in the output of stty -a (usually Ctrl+C)

# cnsterm 1
> display general <=== if flag 9 (LockLogger) is already set to true, you do not need to modify it.

[ output follows]

> modify general 9=true
> write
> quit;


If the LockLogger flag has been altered from ‘false’ to ‘true’, a database restart must occur for the change to take effect. There are no ill effects to having the flag set to true while waiting for a good time to restart the database. The following shows how to issue a TPARESET command from the UNIX command line and the cnsterm subsystem.

# cnsterm 6

> query state

TPA is in the state: Logons are enabled – The system is quiescent <=== verify there are no requests in progress which will require a long rollback

> restart tpa dump=no coldwait enable locking logger

# rallsh -sv /usr/pde/bin/pdestate
<=== verify that each node reports that TPA is up

After a database restart with the LockLogger flag set to true, the Locking Logger will begin to accumulate lock information into a circular memory buffer of 64KB.

Depending on how frequently the system encounters lock contention, this buffer will eventually wrap, but it usually spans several days of activity. Following a period of lock contention, run the dumplocklog utility to analyze the lock activity; it moves the data from the memory buffer to a database table where it can be queried.
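To get a feel for how long the buffer lasts before it wraps, a rough back-of-envelope estimate can be scripted. This is only a sketch: the per-entry size is an assumption derived from the figures quoted later in this article (a 64 KB buffer holding about 1,500 entries per AMP), and the contention rate is a hypothetical workload parameter.

```python
# Rough estimate of how long the Locking Logger's circular buffer lasts
# before wrapping. The entry size is an assumption derived from the
# 64 KB / ~1500-entries-per-AMP figures quoted in this article; the
# contention rate is a hypothetical workload parameter.

BUFFER_BYTES = 64 * 1024                         # per-AMP circular buffer size
ENTRIES_PER_AMP = 1500                           # approximate capacity quoted in the article
ENTRY_BYTES = BUFFER_BYTES // ENTRIES_PER_AMP    # ~43 bytes per entry

def hours_until_wrap(lock_delays_per_hour: int) -> float:
    """Hours of history the buffer can hold at a given contention rate."""
    return ENTRIES_PER_AMP / lock_delays_per_hour

# At 20 logged lock delays per hour on an AMP, the buffer spans
# 1500 / 20 = 75 hours, i.e. roughly three days -- consistent with
# the "several days" observation above.
print(ENTRY_BYTES)
print(hours_until_wrap(20))
```

At higher contention rates the window shrinks proportionally, which is why the article recommends running dumplocklog promptly after a contention episode.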

Using “dumplocklog” to retrieve the information recorded by the Locking Logger:

As with dbscontrol, dumplocklog is available from the DBS console or cnsterm. The following shows the cnsterm procedure:

# cnsterm 6
> start dumplocklog

# cnsterm 1
Enter your logon string: <username,password> (Q/q to quit)
> systemfe, service

Do you want this utility to run continuously? (Y/N)

>n <=== see the note below regarding the possible impact of continuous operation.

Enter number of sessions (? For Help, Q/q to Quit):
>2 <=== see the note below regarding sessions
2 sessions will be logged on.

Enter the Character Set (? For Help, Q/q to Quit):
> 127

…You have chosen character set <ASCII >
Enter the table name where the lock log entries will be written (? For Help, Q/q to Quit):
> locklog

Do you want this utility to create table "SYSTEMFE"."LOCKLOG"? (Y/N)
> y

Table "SYSTEMFE"."LOCKLOG" has been created.

Enter the time constraint to control selection of lock log entries that were generated AT or LATER than this time YYYYMMDD HHMMSS

(? For Help, Q/q to Quit):
> 20001116 120000

Enter the time constraint to control the selection of lock log entries that were generated AT or before this time YYYYMMDD HHMMSS

(? For Help, Q/q to Quit):
> 20001117 235959

Enter the object constraint for selecting lock log entries (? For Help, Q/q to Quit):

> * <=== * for all objects, or <database>.<tablename> for a specific object; type ? for help on syntax options

> [return]

Writing lock log entries to table "SYSTEMFE"."LOCKLOG".

Press <F2> any time to stop the utility.

At this point, it may take some time before a message like the one below appears.

> 2,616 rows have been inserted into table "SYSTEMFE"."LOCKLOG".

*** DumpLockLog is terminated ***
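The time constraints above must be entered in "YYYYMMDD HHMMSS" form. When scripting dumplocklog runs, it can help to generate or validate these strings programmatically. A minimal sketch, assuming only the prompt format shown above (the helper names are illustrative):

```python
# Build and validate dumplocklog time constraints in "YYYYMMDD HHMMSS"
# form, the format the utility prompts for above.
from datetime import datetime, timedelta

FMT = "%Y%m%d %H%M%S"

def constraint(ts: datetime) -> str:
    """Render a datetime as a dumplocklog time constraint string."""
    return ts.strftime(FMT)

def parse_constraint(s: str) -> datetime:
    """Parse (and thereby validate) a constraint string."""
    return datetime.strptime(s, FMT)

# A window covering the 24 hours leading up to the example end time:
end = datetime(2000, 11, 17, 23, 59, 59)
start = end - timedelta(days=1)
print(constraint(start))   # lower bound for the run
print(constraint(end))     # upper bound for the run

# Round-trip check against the first constraint used in the transcript:
assert parse_constraint("20001116 120000") == datetime(2000, 11, 16, 12, 0, 0)
```

Validating the strings up front avoids retyping them at the utility prompt after a rejected entry.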

The data now resides in the database table you specified and offers all the query flexibility of any table. Please refer to the manual for the complete table definition. As a starting point, we offer the following plain-vanilla BTEQ report against the table:

/* Generate a report based on the log output listing the users in lock conflict */

.set width 65531
.export report file=locklog.out
select begdate (format 'mm/dd') (named "Date")
,begtime (format '99:99:99') (named "Time")
,delay (format '999:99:99.9') (named "Delay")
,a.username (format 'X(18)') (named "BlockingUser")
,trim(dbase.databasename)||'.'||trim(tvm.tvmname) (named "DBandTable")
,stmttype (named "StatementType")
,processor
,blkingsessno (format 'z(9)999') (named "BlockingSession")
,blkinglevel (named "BlockingLevel")
,blkingmode (named "BlockingMode")
,b.username (format 'X(18)') (named "BlockedUser")
,blkdsessno (format 'z(9)999') (named "BlockedSession")
,blkdlevel (named "BlockedLevel")
,blkdmode (named "BlockedMode")
,multipleblocker (named "MultipleBlocker")
/* change the FROM clause so it points to your lock log table */
from systemfe.locklog
left outer join dbc.tvm on tid = tvm.tvmid
left outer join dbc.dbase on dbid = dbase.databaseid
left outer join dbc.eventlog a
  on (blkingsessno = a.sessionno and blkingloghost = a.logicalhostid
      and begdate <= a.datefld and begdate >= a.logondate and a.event = 'logoff')
left outer join dbc.eventlog b
  on (blkdsessno = b.sessionno and blkdloghost = b.logicalhostid
      and begdate <= b.datefld and begdate >= b.logondate and b.event = 'logoff')
order by 1,2,3;
.export reset
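Once the report has been exported, it can also be post-processed outside the database. The sketch below tallies total delay per blocking user; the record layout is hypothetical (rows already parsed into dicts with illustrative field names), since the exact exported column widths depend on your BTEQ formats.

```python
# Tally total lock delay per blocking user from report rows. The row
# layout (dicts with "BlockingUser" and a delay in seconds) is a
# hypothetical stand-in for parsed locklog.out records; adapt the
# parsing step to your exported format.
from collections import defaultdict

rows = [  # sample rows standing in for parsed locklog.out records
    {"BlockingUser": "ETLUSER", "DelaySeconds": 42.5},
    {"BlockingUser": "ADHOC1",  "DelaySeconds": 3.0},
    {"BlockingUser": "ETLUSER", "DelaySeconds": 17.5},
]

totals = defaultdict(float)
for row in rows:
    totals[row["BlockingUser"]] += row["DelaySeconds"]

# Worst blockers first:
for user, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(user, total)
```

The same aggregation can of course be done in SQL against the lock log table itself; this is merely convenient when the table has already been dropped and only the exported file remains.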

/* To find the number of outstanding locks in a given 10-minute period */

SELECT begdate (TITLE 'Date')
,((begtime / 1000) (INTEGER)) * 1000 (FORMAT '99:99:99') (TITLE 'Time')
,COUNT(*) (TITLE '# locks')
FROM locklog
GROUP BY 1,2
ORDER BY 1,2;
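The bucketing arithmetic in that query relies on begtime being stored as an HHMMSS-encoded number: integer-dividing by 1000 discards the seconds and the units digit of the minutes, so multiplying back by 1000 snaps the time to a 10-minute boundary. The same arithmetic as a small Python sketch:

```python
# Snap an HHMMSS-encoded time to its 10-minute bucket, mirroring the
# ((begtime / 1000) (INTEGER)) * 1000 expression in the SQL above.
def ten_minute_bucket(hhmmss: int) -> int:
    return (hhmmss // 1000) * 1000

print(ten_minute_bucket(123456))  # 12:34:56 falls in the 12:30 bucket
print(ten_minute_bucket(70959))   # 07:09:59 falls in the 07:00 bucket
```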

For more complex analysis, refer to the Teradata Utilities Manual: Chapter 2 – Locking Logger Utility, Producing a Lock Log Report.

Additional Info/Comments:


A customer wants to activate the Locking Logger during certain periods when they perceive potential locking issues. For example, they want the Locking Logger active and logging on Tuesday mornings from 7:00 AM to noon.

Outside of this period, they want the Locking Logger to be available but inactive (no logging). From their reading of the documentation, they believe they have to do a TPA reset every time, which is not acceptable. What is the recommended procedure?


The recommended procedure is to enable the background facility once (which will require a tpareset). Once this is done, locking activity is written to a 64 KB circular buffer of about 1,500 entries per AMP.

We recommend leaving this enabled, because the performance impact is negligible; it is not worthwhile to attempt to stop the internal logging process. Many customers run permanently in this fashion without any noticeable degradation in response time or throughput.

Using the output of the circular buffer, generate lock log reports either periodically or continuously. I do not recommend continuous operation: continuous mode can have a significant impact in terms of session overhead and table space (see the related question below). The recommended approach is to invoke the report generator immediately after the analysis period and to specify a conservative number of sessions (start with one per node and adjust from there).


When continuous mode is chosen, the field that controls the number of sessions is grayed out and cannot be changed. The value shown in that field is 1, so I am unclear why the Locking Logger in continuous mode uses one session per vproc. Is there a way to force the Locking Logger to use only one session in total in continuous mode?


We disabled the choice of the number of sessions in CONTINUOUS mode because, by choosing CONTINUOUS, the user lets the LOCKING LOGGER take complete control.

CONTINUOUS mode was originally designed as, in effect, a background execution of the LOCKING LOGGER. The Locking Logger's primary function is to take data from the lock buffer in memory and insert it into a disk table. Because these inserts can be done in parallel, we allow multiple sessions to perform them.

However, these are not regular sessions; they are so-called CONSOLE sessions. There is a limit of six console sessions per PE vproc, and depending on what else is running, sometimes there are not enough of them available (they get used up by utilities).

Because the LOCKING LOGGER is just a utility, we cannot risk taking console sessions away from more important jobs. The number of sessions also cannot be adjusted dynamically once it is set, so we have to choose a number based on the situation at the time the Locking Logger is activated. That number is chosen to leave enough sessions for other utilities while still allowing sufficient parallelism for the Locking Logger's inserts.

We know that this number might not make all users happy, which is why we advise users NOT to use CONTINUOUS mode. Remember that the LOCKING LOGGER is a HISTORICAL recording tool: the data recorded could represent six months' worth of activity or two seconds' worth. That rarely justifies running in CONTINUOUS mode.
