Teradata Tuning – Query Rewriting

Roland Wenzlofsky

December 8, 2015



Teradata Tuning with Query Rewriting

In this article, we will learn how to improve the performance of SQL statements by rewriting them. This is a scenario from one of my recent projects.

The following SQL statement showed a high runtime and therefore had to be analyzed in more detail:

SELECT
t01.PK
,t01.TheDate
,t01.COL1
,t01.COL2
...
,t01.COL59
,t01.COL60
,CASE WHEN t02.PK IS NULL THEN 'NEW' ELSE 'CHANGED' END AS Type_Indicator
FROM
TheTable t01
LEFT JOIN
TheTable t02
ON
t01.PK = t02.PK AND
t02.TheDate <> DATE'9999-12-31'
WHERE
t01.TheDate = DATE'9999-12-31'
GROUP BY 1,2,3...,59,60;

The table from which we select has the following structure:

CREATE TABLE TheTable
(
PK INTEGER NOT NULL,
TheDate DATE FORMAT 'YYYY-MM-DD',
COL1 VARCHAR(10),
..
COL60 VARCHAR(255)
) PRIMARY INDEX(PK);

We are selecting from a table that holds one record per object and reference date (columns “PK” and “TheDate”). The table contains about 150 million rows and a large number of columns.

The query’s goal is to select the current row of each object (identified by the date ‘9999-12-31’) and to indicate whether at least one historical row exists. We use the column “Type_Indicator” to differentiate between objects with history and objects without history (“CHANGED” or “NEW”).

A brief analysis of the SQL statement shows that it correctly delivers the result described above. The problem: it takes 2 hours to do so.

We will first analyze the query in its individual parts to identify the weaknesses in the execution plan.

Because we do a self-join, Teradata returns each current row several times (once for each matching historical row). A GROUP BY statement is needed to get rid of these duplicates.
Although the join itself is done AMP-local (the join column matches the Primary Index), the GROUP BY statement requires a costly redistribution step. Furthermore, the spool has to be sorted by all 60 columns to perform the grouping.

Costs increase rapidly with the number of table rows and the number of columns being grouped.
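The duplicate effect is easy to reproduce on any SQL engine. Here is a minimal sketch using Python's built-in sqlite3 module, with a tiny stand-in for TheTable (table and column names follow the article; SQLite, of course, has no AMPs, so this only demonstrates the row multiplication, not the redistribution cost):

```python
import sqlite3

# In-memory toy version of TheTable: one current row ('9999-12-31')
# and two historical rows for the same PK.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TheTable (PK INTEGER, TheDate TEXT, COL1 TEXT)")
con.executemany(
    "INSERT INTO TheTable VALUES (?, ?, ?)",
    [(1, "9999-12-31", "a"),   # current row
     (1, "2015-01-01", "a"),   # historical row
     (1, "2015-06-01", "a")],  # historical row
)

# Without the GROUP BY, the self-join returns the current row once
# per matching historical row -- here, twice.
rows = con.execute("""
    SELECT t01.PK, t01.TheDate, t01.COL1
    FROM TheTable t01
    LEFT JOIN TheTable t02
      ON t01.PK = t02.PK AND t02.TheDate <> '9999-12-31'
    WHERE t01.TheDate = '9999-12-31'
""").fetchall()
print(len(rows))  # 2 -- the single current row is duplicated
```

The GROUP BY in the original query exists solely to collapse these duplicates again.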

Now that we know the query’s weak points, let’s analyze how we can improve it.

In the original query, we are joining all historical rows against the current row to detect if at least one historical row exists. We can do this in a better way:

SELECT
t01.PK
,t01.TheDate
,t01.COL1
,t01.COL2
...
,t01.COL59
,t01.COL60
, CASE WHEN t02.PK IS NULL THEN 'NEW' ELSE 'CHANGED' END AS Type_Indicator
FROM
TheTable t01
LEFT JOIN
(
SELECT DISTINCT PK
FROM  TheTable
WHERE TheDate <> DATE'9999-12-31'
) t02
ON
t01.PK = t02.PK
WHERE
t01.TheDate = DATE'9999-12-31';

In the optimized query, we moved the comparison logic into a subquery that selects only the distinct primary key values of the historical rows. Everything else stays the same.

There is one significant advantage the new query has: 

Although it involves an aggregation step (DISTINCT) and the same AMP-local join step, the aggregation is done AMP-local (no redistribution is needed). The unoptimized query required a costly redistribution step. Furthermore, only one column has to be sorted; in the old query, the AMPs had to sort by 60 columns, which is much more expensive.
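To convince ourselves that the rewrite is semantically equivalent, we can run both variants against a small sample table. A sketch using Python's sqlite3 (SQLite stands in for Teradata here; the sample data and expected indicator values are illustrative assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TheTable (PK INTEGER, TheDate TEXT, COL1 TEXT)")
con.executemany(
    "INSERT INTO TheTable VALUES (?, ?, ?)",
    [(1, "9999-12-31", "a"), (1, "2015-01-01", "a"),   # object with history
     (2, "9999-12-31", "b")],                          # object without history
)

original = """
    SELECT t01.PK, t01.TheDate, t01.COL1,
           CASE WHEN t02.PK IS NULL THEN 'NEW' ELSE 'CHANGED' END AS Type_Indicator
    FROM TheTable t01
    LEFT JOIN TheTable t02
      ON t01.PK = t02.PK AND t02.TheDate <> '9999-12-31'
    WHERE t01.TheDate = '9999-12-31'
    GROUP BY 1, 2, 3, 4
"""

rewritten = """
    SELECT t01.PK, t01.TheDate, t01.COL1,
           CASE WHEN t02.PK IS NULL THEN 'NEW' ELSE 'CHANGED' END AS Type_Indicator
    FROM TheTable t01
    LEFT JOIN (SELECT DISTINCT PK FROM TheTable
               WHERE TheDate <> '9999-12-31') t02
      ON t01.PK = t02.PK
    WHERE t01.TheDate = '9999-12-31'
"""

old = sorted(con.execute(original).fetchall())
new = sorted(con.execute(rewritten).fetchall())
assert old == new
print(old)  # [(1, '9999-12-31', 'a', 'CHANGED'), (2, '9999-12-31', 'b', 'NEW')]
```

Note that the rewritten query needs no GROUP BY at all: the DISTINCT subquery guarantees at most one match per PK, so the join can no longer duplicate current rows.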

Performance tuning should always be accompanied by measurements that prove its success. Here they are:

Disk IOs old: 1,878,357
Disk IOs new:  163,532

We were able to cut disk IOs by almost 91%!

The situation is similar for the required spool space:

Spool Space old:  approx. 16.3 GB
Spool Space new: approx. 3.3 GB

We reduced spool usage by approx. 80%!
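The quoted percentages follow directly from the measured values above:

```python
# Reductions computed from the measurements quoted in the article.
io_old, io_new = 1_878_357, 163_532
spool_old, spool_new = 16.3, 3.3  # GB

io_saving = 1 - io_new / io_old
spool_saving = 1 - spool_new / spool_old
print(f"Disk IO reduction: {io_saving:.0%}")   # 91%
print(f"Spool reduction:   {spool_saving:.0%}")  # 80%
```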

I hope this tuning example showed how slight changes to a query can have an enormous impact on resource usage.

  • Hi Roland

    I modified your query and removed Col1 … Col60 and the Group by Clause from end for simplicity.
    So the updated queries look like :

    /*-=======Original Query==========- */
    SELECT
    t01.col_pk
    ,t01.col_date
    ,CASE WHEN t02.col_pk IS NULL THEN 'NEW' ELSE 'CHANGED' END AS Type_Indicator
    FROM
    vt_test t01
    LEFT JOIN
    vt_test t02
    ON
    t01.col_pk = t02.col_pk
    AND t02.col_date <> DATE '9999-12-31'
    WHERE
    t01.col_date = DATE '9999-12-31';

    /*-=======Tuned Query=========- */

    SELECT
    t01.col_pk
    ,t01.col_date
    ,CASE WHEN t02.col_pk IS NULL THEN 'NEW' ELSE 'CHANGED' END AS Type_Indicator
    FROM
    vt_test t01
    LEFT JOIN
    (SEL DISTINCT col_pk FROM vt_test WHERE col_date <> DATE '2999-12-31'
    )t02
    ON
    t01.col_pk = t02.col_pk
    WHERE
    t01.col_date = DATE '9999-12-31';

    Now when I compare the explain plans for both queries, I see the tuned one actually has 5 steps, whereas the original query has just 2 steps. Also, the estimated time was much lower for the original query compared to the tuned query. If we just collect stats for the original query, won’t that help? The tuned query has 5 steps and involves 4 spool creations, whereas the original query has just 2 steps and creates 1 spool for the end result. My guess is the original query should be better than the tuned one.
    Also please share more information on below two points:

    Disk IOs old: 1,878,357
    Disk IOs new: 163,532

    Spool Space old: approx. 16.3 GB
    Spool Space new: approx. 3.3 GB

    How did you calculate these? Is it by getting the values from DBQL and adding IO and SpoolUsage for the entire session? Or is it a comparison at step level?

    My removal of Col1…Col60 and the GROUP BY clause might have changed the explain plan for both queries, so I may be wrong. Kindly correct me.

    Cheers
    Nitin

  • Hi Roland,

    As I see it, the main objective of the above SQL is to set Type_Indicator = 'NEW' for the current product (which can be identified by the largest date '9999-12-31'), else 'CHANGED'.

    This can also be done as:

    SELECT
    t01.PK
    ,t01.TheDate
    ,t01.COL1
    ,t01.COL2
    ...
    ,t01.COL59
    ,t01.COL60
    ,CASE WHEN t01.TheDate = '9999-12-31' THEN 'NEW' ELSE 'CHANGED' END AS Type_Indicator
    FROM
    TheTable t01

    Why do we need the join in the first place?
    What if the new record is not identified by a static value but differs for each product (versioned data)?

    Thanks

    • The idea of this query is to check whether any historical records exist, while selecting the attribute values of the current row (the one with date ‘9999-12-31’). It only checks if history records exist. If yes, we identify the current records as CHANGED, otherwise as NEW.
