Teradata SQL Tuning Evergreens


  1. Ensure Completeness and Correctness of Teradata Statistics

This is what I always teach in my Teradata SQL tuning workshops, and it has never disappointed my clients:

The most important optimization task is to provide the optimizer with complete and correct statistics about tables and indexes (secondary indexes, join indexes, hash indexes, etc.).

Detecting missing statistics is easy. In SQL Assistant, execute the following statement:

DIAGNOSTIC HELPSTATS ON FOR SESSION;
The above statement turns on the optimizer's statistics hints and adds information about recommended statistics to the explain output.

At the end of the output, a list of missing statistics is shown. These are the statistics the optimizer could use when building the execution plan. Add statistics one by one, and re-check the execution plan after each change.

Several methods exist to detect stale statistics. The easiest is to split the SQL statement and test each part separately, comparing the estimated row count (as shown in the explain output) with the actual number of records returned by the query.

This approach is particularly suitable if the complete SQL statement cannot be executed in a reasonable time. Here is one example:

SELECT t01.*
FROM table_1 t01
INNER JOIN table_2 t02
    ON t01.PK = t02.PK
WHERE t01.column_1 = 1
  AND t02.column_a = 2;

The above query can be split into two parts for testing:

SELECT * FROM table_1 WHERE column_1 = 1;


SELECT * FROM table_2 WHERE column_a = 2;

Execute and EXPLAIN both queries, and compare the number of rows returned with the estimated row count shown in the explain output.

If there is a big difference, it may be caused by stale statistics or similar problems with data distribution.
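If stale statistics turn out to be the culprit, refreshing them is straightforward. A minimal sketch, using the tables and columns from the example above:

```sql
-- refresh the statistics on the filter columns of both tables
COLLECT STATISTICS COLUMN (column_1) ON table_1;
COLLECT STATISTICS COLUMN (column_a) ON table_2;
```

Afterwards, re-run the EXPLAIN and check whether the estimates have moved closer to the real row counts.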

  2. The Primary Index (PI) Choice

Whenever possible, design your queries so that the Primary Index can be used in the joins. Joining on the complete set of Primary Index columns is the cheapest way of joining (apart from the nested join). The join condition can include other columns that are not part of the Primary Index (they are applied as residual conditions), but missing Primary Index columns prevent the use of a row hash match scan.

If you can't change the Primary Indexes for whatever reason, or you would need a different Primary Index only for certain SQL statements, use a volatile table or a global temporary table with the same structure and content as the original table but with a different Primary Index.
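A minimal sketch of this approach, assuming we want to join table_1 on a column join_col instead of its original Primary Index (both names are hypothetical):

```sql
-- copy of table_1 with a Primary Index matching the join columns
CREATE VOLATILE TABLE table_1_tmp AS
(
    SELECT * FROM table_1
)
WITH DATA
PRIMARY INDEX (join_col)
ON COMMIT PRESERVE ROWS;
```

The join against table_1_tmp on join_col can then use a row hash match scan; the volatile table disappears automatically at the end of the session.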

  3. Teradata Indexing Techniques

The usage of indexes is another option to improve query performance. Secondary indexes are useful in highly selective WHERE conditions.

Although usually not considered indexing in the narrow sense, you could also try join indexes or work with partitioning. The difference between an index and partitioning is that indexes are physical sub-tables, while partitioning is just another way of structuring the base table.
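As an illustration (table and column names are hypothetical), a secondary index creates a sub-table, while partitioning restructures the base table itself:

```sql
-- non-unique secondary index: a physical sub-table maintained by Teradata
CREATE INDEX (status_cd) ON sales;

-- row partitioning: the base table itself is organized by month
CREATE TABLE sales_p
(
    sale_id   INTEGER NOT NULL,
    sale_dt   DATE NOT NULL,
    status_cd CHAR(2)
)
PRIMARY INDEX (sale_id)
PARTITION BY RANGE_N(sale_dt BETWEEN DATE '2020-01-01'
                                 AND DATE '2024-12-31'
                     EACH INTERVAL '1' MONTH);
```

The secondary index costs space and maintenance on every load; the partitioning costs nothing extra but only helps queries that restrict on the partitioning column.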

Whenever working with indexing techniques, you have to keep the overall data warehouse architecture in mind and decide if your solution fits into it.

If potentially useful indexes are not used by the optimizer, and they are not helpful in the overall PDM design, drop them immediately. They will only waste space and resources.


  4. Query Rewriting

Often, query performance can be improved by rewriting the query.

Examples that come to mind include choosing DISTINCT instead of GROUP BY on columns with many distinct values.
UNION statements can also be used to break up a large SQL statement into several smaller pieces, which are executed in parallel.
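A sketch of the first rewrite (the table name is hypothetical). Both statements return the same result but can produce different plans: DISTINCT redistributes the rows by hash first, while GROUP BY pre-aggregates locally on each AMP before redistribution:

```sql
-- variant 1: duplicate removal via DISTINCT
SELECT DISTINCT customer_id FROM sales;

-- variant 2: duplicate removal via GROUP BY
SELECT customer_id FROM sales GROUP BY customer_id;
```

Which variant wins depends on the number of distinct values per AMP; comparing both explain outputs settles it for a concrete table.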

Query rewriting often solves performance issues, even in cases where all other techniques have failed.

While query rewriting is a very powerful way of improving performance, it often requires understanding the business logic (“can this left join be replaced by an inner join?”). Usually, understanding the business logic of a query reveals other tuning options.
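For instance (hypothetical tables), a LEFT JOIN followed by a NOT NULL filter on the right side is effectively an inner join, and writing it as one gives the optimizer more freedom:

```sql
-- the WHERE condition discards all unmatched rows anyway
SELECT c.customer_id, o.order_id
FROM customers c
LEFT JOIN orders o
    ON c.customer_id = o.customer_id
WHERE o.order_id IS NOT NULL;

-- equivalent, but stated directly
SELECT c.customer_id, o.order_id
FROM customers c
INNER JOIN orders o
    ON c.customer_id = o.customer_id;
```

Whether the rewrite is valid is exactly the kind of business-logic question mentioned above.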

  5. Real-Time Monitoring

Watch the query running in real time. Observing a query in Viewpoint or PMON while it is executing helps to detect the critical steps.

Most performance issues arise out of query steps involving skewing, or from a bad execution plan caused by stale and missing statistics.

Stale and missing statistics typically lead to wrong decisions in join preparation (copying a table to all AMPs instead of rehashing) and in the join type used (product join instead of merge join).

  6. Comparison of Resource Usage

Another crucial task in performance optimization is to measure the resource usage before and after the optimization. Query run times are misleading, as they can be influenced by concurrent workload, blocking, etc.

Here is one example query, which gives you a detailed insight into the resource usage of each version of your query. To run it, you need SELECT access to the DBC.DBQLOGTBL table.

Set a different QUERY_BAND for each version of the query you are running:

SET QUERY_BAND = 'Version=1;' FOR SESSION;
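A minimal sketch of such a comparison query, assuming standard DBQL logging columns and one common definition of LHR and skew (verify the formulas against your site's conventions):

```sql
SELECT
    QueryBand,
    AMPCPUTime                                   AS TotalCPU,
    SpoolUsage / (1024*1024*1024.0)              AS SpoolGB,
    AMPCPUTime * 1000
        / NULLIFZERO(TotalIOCount)               AS LHR,
    MaxAMPCPUTime * NumOfActiveAMPs
        / NULLIFZERO(AMPCPUTime)                 AS CPUSkewFactor
FROM DBC.DBQLOGTBL
WHERE QueryBand LIKE '%Version=%'
ORDER BY QueryBand, StartTime;
```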

The query will return:

  • The total CPU Usage
  • The Spool Space needed
  • The LHR (ratio between CPU and IO usage)
  • The CPU Skew
  • The Skew Impact on the CPU

Our goal is to cut total CPU usage, consumed spool space, and skewing on the CPU. Furthermore, the LHR is optimally around 1.00.

6 Golden Rules for Teradata SQL Tuning, written by Roland Wenzlofsky on October 29, 2014.


  1. Hi Roland,
    Is it fine to collect stats on all columns recommended by the DIAGNOSTIC command?
    Is there a specific way to know which stats recommendations should be implemented and which ones to skip?
    I generally collect stats on single-column recommendations and skip the multi-column ones. But there have been instances where multi-column stats recommendations helped after implementation.


    • Not all recommended statistics are needed. I follow a similar approach. Especially when it comes to multi-column statistics, we have to consider that their application is limited compared to single-column stats. While the optimizer can use the first part of a statistic on columns (a,b), i.e. the "a", it cannot use it for a query which, for example, has a WHERE condition only on column "b". Of course, the ideal scenario for the statistic on columns (a,b) is a WHERE condition on both columns.

      It’s similar to statistics on functions, such as on CASE statements. They are only useful for a very specific kind of query which matches exactly this function.
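      A small sketch of that limitation (table and columns are hypothetical):

      ```sql
      COLLECT STATISTICS COLUMN (a, b) ON table_1;

      -- the optimizer can use the (a,b) statistic here:
      SELECT * FROM table_1 WHERE a = 1 AND b = 2;

      -- and, partially, here (leading column only):
      SELECT * FROM table_1 WHERE a = 1;

      -- but not here:
      SELECT * FROM table_1 WHERE b = 2;
      ```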

  2. INSERT INTO eitopsdb.GTT_srv_ft_data_grp_nbr_rk

    SELECT BASE.srv_accs_id
    FROM eitopsdb.srvft_blstmt_usgvol_fltavg_rk BASE INNER JOIN
    EDWVIEWS.bl_stmnt_sub BSS
    ON BASE.srv_accs_id = BSS.srv_accs_id
    -- WHERE bl_cyc_dt BETWEEN DATE -31 -- ensure one month ago
    -- AND DATE -1 -- yesterday
    QUALIFY ROW_NUMBER() OVER
    (PARTITION BY BSS.srv_accs_id
    ORDER BY BSS.bl_cyc_dt DESC, BSS.data_grp_nbr DESC) = 1;

    Could anybody help in tuning this query?

    It is failing with the error below:

    *** Failure 3156 Request aborted by TDWM. Exception criteria exceeded: Spool Usage.
    Statement# 1, Info =0

    • Hi – duplicate filtering should be taken care of before the rows participate in the join operation. Please share the explain plan of this query for more detail.

  3. This is a question which can't be answered easily. New releases of Teradata have some helpful features, like statistics extrapolation and the comparison of random-AMP sampling against collected statistics. This may help in reducing recollection intervals.

    There are different approaches which could be taken. The refresh interval definitely depends on how fast the data changes. Usually, I sneak some code into central components like the historization process. You could, for example, use the ACTIVITYCOUNT variable in BTEQ to get the number of new rows added to a table. This may be helpful in estimating table growth, and this insight can be used for a timely recollection of statistics without any overhead.

    Statistics collection is always a tradeoff between keeping the system performing and the overhead caused by the collection process.
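    The ACTIVITYCOUNT idea above can be sketched in BTEQ (the threshold and all table names are made up):

    ```sql
    INSERT INTO target_db.customer
    SELECT * FROM staging_db.customer_delta;

    -- ACTIVITYCOUNT holds the number of rows affected by the last request
    .IF ACTIVITYCOUNT > 1000000 THEN .GOTO RecollectStats
    .GOTO Finish

    .LABEL RecollectStats
    COLLECT STATISTICS ON target_db.customer;

    .LABEL Finish
    .LOGOFF;
    ```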

  4. Roland,

    Should we collect stats every time a table gets refreshed (either inserts or updates or both)? Let's say we have some 500 tables in the DWH which get refreshed every day; in this case, do we have to collect stats for all 500 tables every day?

  5. Yes, partitioning definitely should be part of the golden rules.

    Regarding statistics, I always struggle with showing the DIAGNOSTIC HELPSTATS statement. On the one hand, it is a fast and comfortable way to analyze the statistics landscape; on the other hand, I have seen many people start to blindly add all recommendations after I showed them this possibility.

  6. From a pedagogical point of view, I would actually rank “statistics” at the end of this list.

    I consistently notice that statistics tend to be over-collected due to staff's "low confidence" in pretty much all the other points you mention here. The magic of the DIAGNOSTIC command often shifts attention away from more fundamental design problems, which statistics are only meant to complement.

    Great post!

