Teradata DBQL Performance Tuning – Part 1- Setup


The Teradata DBQL Tables

This is the introductory post of a series about SQL tuning with the DBQL tables.

The DBQL tables are an important tool for the Teradata SQL tuner. The DBQL environment consists of several tables located in database DBC, each giving specific insight into the performance characteristics of the queries executed on the system.

Here is a short overview of the main tables used in performance tuning:

DBQLOGTBL

DBQLOGTBL contains, for example, the following useful information for each query:

  • Query Start Time and Query End Time
  • Parsing Time of the Query
  • Error Codes
  • Total CPU Usage
  • Total IO Usage
  • Workload Delay Times
  • The Query Text (truncated)

Logging into DBQLOGTBL is usually activated on every Teradata system (although filter criteria may be applied; we will talk about this later).
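
A quick way to get a feeling for the resource usage of today's workload is a query against the view DBC.QryLogV, which exposes DBQLOGTBL. The following is only a sketch; the column names used here (AMPCPUTime, TotalIOCount, etc.) are the common ones, but verify them against your Teradata release:

-- Today's 20 most CPU-intensive queries (sketch)
SELECT TOP 20
       QueryID,
       UserName,
       StartTime,
       FirstRespTime,
       AMPCPUTime,    -- total CPU seconds summed over all AMPs
       TotalIOCount,  -- total logical I/Os
       ErrorCode,
       QueryText      -- truncated query text
FROM   DBC.QryLogV
WHERE  CAST(StartTime AS DATE) = CURRENT_DATE
ORDER  BY AMPCPUTime DESC;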

DBQLSQLTBL

If logging is activated, this DBQL table contains the full query text of each query. Sometimes it is necessary to identify our workload by parsing the query text; in this case, DBQLSQLTBL can be used.
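
Here is a minimal sketch of how the full text can be retrieved through the view DBC.QryLogSQLV; long statements are split over several rows ordered by SqlRowNo, and the LIKE pattern is only a placeholder:

-- Retrieve the complete text of queries touching a certain table (sketch)
SELECT l.QueryID,
       s.SqlRowNo,
       s.SqlTextInfo                       -- full query text, split across rows
FROM   DBC.QryLogV    AS l
JOIN   DBC.QryLogSQLV AS s
  ON   l.QueryID = s.QueryID
WHERE  l.QueryText LIKE '%MyTargetTable%'  -- placeholder search pattern
ORDER  BY l.QueryID, s.SqlRowNo;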

DBQLOBJTBL

If activated, this table tracks the usage of all database objects, such as databases, tables, columns, and indexes. We can, for example, use it to identify unused secondary indexes or to check the skew of frequently used tables.
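
As a sketch, the object log can be aggregated per object through the view DBC.QryLogObjectsV; an index that never shows up over a representative logging period is a candidate for removal. The column names used here (ObjectDatabaseName, ObjectTableName, ObjectType, FreqofUse) are the usual ones, but please verify them on your release:

-- How often has each logged object been used? (sketch)
SELECT ObjectDatabaseName,
       ObjectTableName,
       ObjectType,                  -- e.g. table, index, or column
       SUM(FreqofUse) AS UsageCount
FROM   DBC.QryLogObjectsV
GROUP  BY 1, 2, 3
ORDER  BY UsageCount DESC;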

DBQLSTEPTBL

This DBQL table logs each query step (as seen in the Explain statement). This includes the following information:

  • Query Step Timings (Start and End Time of each step)
  • Row Counts per Step
  • Activity of each Step (Join, Redistribution, etc.)
  • CPU and IO usage for each step

This information gives valuable insight into each query and is very important for performance tuning. Unfortunately, this logging is often deactivated, as Teradata administrators fear the resource overhead created by logging into DBQLSTEPTBL.
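
Where step logging is active, a sketch like the following against the view DBC.QryLogStepsV shows where a single query spends its CPU; the QueryID is a placeholder, and the column names may differ slightly between releases:

-- CPU and IO per step of one query (sketch)
SELECT StepLev1Num,
       StepLev2Num,
       StepName,                           -- step activity, e.g. join or retrieve
       RowCount,                           -- rows produced by the step
       CPUTime,
       IOCount
FROM   DBC.QryLogStepsV
WHERE  QueryID = 123456789012345678        -- placeholder QueryID
ORDER  BY StepLev1Num, StepLev2Num;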

DBQLSUMMARYTBL

This table is populated if summary or threshold logging is activated. It contains aggregated response times, CPU time, and IO usage, and is therefore not that useful for optimizing single SQL queries.

DBQLEXPLAINTBL

This table holds the complete Explain plan of each query.
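
Retrieving the plan of a particular query is then a matter of reading the view DBC.QryLogExplainV, ordered by its row number (a sketch with a placeholder QueryID):

-- Read the logged Explain plan of one query (sketch)
SELECT ExpRowNo,
       ExplainText
FROM   DBC.QryLogExplainV
WHERE  QueryID = 123456789012345678        -- placeholder QueryID
ORDER  BY ExpRowNo;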

Setting Up Query Logging

In my experience, query logging should be activated for all users by default. As a minimum recommendation, I would activate DBQLOGTBL (the main table), DBQLOBJTBL (the object log), and DBQLSQLTBL without any thresholds or filtering:

BEGIN QUERY LOGGING WITH SQL, OBJECTS LIMIT SQLTEXT=0 ON ALL;

Teradata logs this information in a very efficient way and the impact on system performance is negligible.
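
The logging rules that are currently in effect can be checked at any time through the view DBC.DBQLRulesV, which lists one row per rule:

-- Show the active query logging rules
SELECT *
FROM   DBC.DBQLRulesV;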

For users, accounts, and applications that should not be logged in the default way described above, the logging rules can be changed on demand. Here are two examples:

BEGIN QUERY LOGGING LIMIT THRESHOLD=100 CPUTIME AND SQLTEXT=5000 ON ALL ACCOUNT = 'Account';

BEGIN QUERY LOGGING LIMIT THRESHOLD=100 CPUTIME AND SQLTEXT=5000 ON APPLNAME = 'AppName';
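
Keep in mind that an existing rule for the same user, account, or application usually has to be ended before a different rule can be applied; a sketch (with 'AppName' again as a placeholder):

END QUERY LOGGING ON APPLNAME = 'AppName';
BEGIN QUERY LOGGING LIMIT THRESHOLD=100 CPUTIME AND SQLTEXT=5000 ON APPLNAME = 'AppName';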

I would not recommend approaching this topic the other way around, i.e., turning on query logging only on demand. All Teradata developers should have access to the useful information available in the DBQL tables. How else should they write well-performing statements?

In our next article, we will show you how performance indicators of the DBQL tables can be used to identify product joins, missing indexes, etc.

Further, we will show you some SQL statements you can use to classify your queries into “good” and “bad” ones.

Questions?
If you have any questions about all this, please ask in the comments! I’ll be paying close attention and answering as many as I can. Thank you for reading. Whatever this blog has become, I owe it all to you.
Written by Roland Wenzlofsky on August 13, 2017

2 COMMENTS

  1. Hi Roland,

    I’d like to add a couple of observations to this article.

    1) I also used to recommend “SQLTEXT=0” as a ‘standard’ logging option, but a conversation with one of our customers changed my mind on that. This customer uses “SQLTEXT=500” as a standard logging option. The actual value for SQLTEXT is irrelevant, the point being that it is > 0.

    My customer’s reasoning was that with some SQL recorded in the main log table he could display data from one query log table and get an idea of what the SQL was actually doing. No, the SQL is not complete, but he said (and I’ve noticed this as well on many customer sites) that once you get used to the SQL that is running on a system, then usually you only need to see the first ‘part’ (and it is the size of the first ‘part’ – SQLTEXT – that will vary) to tell what the SQL is doing.

    2) In your article above you make no distinction between the logging requirements or suggestions for a development/test system and those for a production system.

    I normally tell customers to ‘log everything’ on a development/test system and then to follow ideas similar to yours above for a production system. I agree with you that DBQL logging has minimal overhead; to my mind the only significant overhead is disk space, and that can (should!) be managed using something like PDCR.

    3) I agree with you that the approach should be ‘turn it on by default and manage exceptions as required’. One of the ‘exceptions’ that I always recommend to customers is the username used for the Viewpoint Data Collection queries. For these I recommend using Summary logging. I think you need to know about their CPU and IO usage for capacity planning, accounting, etc., but given that there are ‘lots’ of queries per day, most of them are repeated many times, and the customer can’t change them, there is no point to me in having details logged for them. Just summarise them.
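
    (For example, something along these lines, where ‘viewpointuser’ is only a placeholder and the three values are the summary interval boundaries in elapsed seconds:)

    BEGIN QUERY LOGGING LIMIT SUMMARY = 5, 60, 600 ON viewpointuser;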

    Cheers,
    Dave
