How to get SQL Query Stats in Teradata

By Nitin Srivastava

April 8, 2014

ampcputime, dbqlogtbl, spoolspace, sql

Several parameters can help us understand SQL query performance in Teradata.
I consider AMPCPUTime, TotalIOCount and SpoolUsage the three main parameters for determining SQL query performance.
Say you are executing multiple queries in Teradata sequentially. You might assume that the query which took the most time is the weakest, but this is not true in all cases. If you instead refer to the three parameters mentioned above to pick out the worst query, you will be correct in most cases.
There are two tables in DBC which give us this required information: DBQLOGTBL and DBQLSQLTBL.
To get SQL query stats, you can use the below query:

SELECT TB1.SessionID, TB1.AMPCPUTime, TB1.TotalIOCount, TB1.SpoolUsage,
       SUBSTR(TB2.SqlTextInfo,1,1000) AS SqlTextInfo
FROM DBC.DBQLOGTBL TB1
INNER JOIN DBC.DBQLSQLTBL TB2
  ON TB1.QueryID = TB2.QueryID
 AND TB1.ProcID = TB2.ProcID;

You can add or remove columns per your requirement. However, AMPCPUTime, TotalIOCount and SpoolUsage are the important parameters for determining query performance in Teradata.
If AMPCPUTime is high, you have to tune your query to make sure it performs well.

Three points to consider while running the query mentioned above:

a) You may not see results immediately after running your SQL queries. There is a delay of a few minutes before query information reaches the DBQL tables.

b) The query mentioned above may take some time to give output. The reason is the 'not so helpful' index columns on these two tables. When we check the PRIMARY INDEX columns for both tables, we observe that the PI is the same: both tables have (ProcID, CollectTimeStamp) as the PI. However, the value of CollectTimeStamp can differ for the same query between the two tables, so joining on the second column is not advisable. Since the join cannot fully leverage the PI, the query may take some time to return results.
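As a quick check, you can confirm the primary index of both tables yourself with Teradata's HELP INDEX statement (the exact output layout varies by release):

```sql
-- Show the index definitions for the two DBQL tables;
-- both should report (ProcID, CollectTimeStamp) as the non-unique primary index.
HELP INDEX DBC.DBQLogTbl;
HELP INDEX DBC.DBQLSqlTbl;
```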

c) To get the SessionID, just run SEL SESSION; command in the same session in which you are running your queries.
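For example, assuming your session number came back as 123456 (an illustrative value, not from the original post), you could restrict the stats to your own session:

```sql
SELECT SESSION;  -- returns your current session number

SELECT TB1.QueryID, TB1.AMPCPUTime, TB1.TotalIOCount, TB1.SpoolUsage
FROM DBC.DBQLOGTBL TB1
WHERE TB1.SessionID = 123456;  -- replace with your SELECT SESSION result
```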

So from now on, never say that the query which took the maximum time is the worst. Fetch the DBQL stats for the queries and identify the worst one yourself.

Nitin Srivastava

Nitin Srivastava holds an engineering degree in Computer Science. He has 5+ years of experience in Teradata SQL development and query optimization. He has worked for telecom, health care and banking clients across the globe.


  • Query logging does indeed have to be turned on, but as far as I know, the cache is automatically flushed regularly, or am I wrong on this?


    • From the docs:

      Before Teradata Database writes DBQL data to Data Dictionary tables, it holds the data in cache until either:
      1. The cache is full.
      2. You end query logging.
      3. The number of seconds has elapsed that you define in the DBS Control utility field DBQLFlushRate.
      Note: You can select a rate from 1 to 3,600 seconds. However, Teradata recommends a flush rate of at least 10 minutes (600 seconds), which is the default. Less than 10 minutes can impact performance.
      4. You flush the DBQL cache manually.


  • You will need to start query logging and then end it to flush the cache, or you will get nothing from your query.
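A sketch of what that comment describes, assuming you hold the privileges needed for query logging (the ON ALL scope is illustrative; you can log a single user instead):

```sql
-- Start logging SQL text (illustrative scope: all users)
BEGIN QUERY LOGGING WITH SQL ON ALL;

-- ... run the queries you want to measure ...

-- Ending logging flushes the DBQL cache to the dictionary tables
END QUERY LOGGING ON ALL;
```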

