How to Optimize Joins in Teradata: A Case Study

This case study demonstrates how minor modifications to the query text can substantially improve query performance. The initial SQL statement joins two tables on their primary index column, PK. One of them is a volatile table, although that distinction is insignificant for this example. One of the tables is also partitioned.

SELECT *
FROM
Table1 t01
LEFT OUTER JOIN
Table2 t02
ON
t02.PK = (CASE WHEN t01.TheCode <> 'R' THEN t01.PK END)
AND DATE '2016-02-29' BETWEEN t02.RSD AND t02.RED
AND t02.TSS = TIMESTAMP '9999-12-31 23:59:59'
;

When the query ran, it became heavily skewed. The join condition immediately stood out as the likely culprit:

t02.PK = (CASE WHEN t01.TheCode <> 'R' THEN t01.PK END)

Using functions in join conditions is not advisable and should be avoided as much as possible. The presence of functions in joins may indicate issues with the data model, such as violating the first normal form (1NF).

Join conditions that contain functions have several drawbacks:

  • The optimizer will not have useful statistics available for your expression.
  • No AMP-local merge join (direct merge join) can be done, even if both tables have the same primary index (as in our example).

A further drawback in our test case becomes apparent when we substitute the join condition into the following SELECT statement:

SELECT
CASE WHEN TheCode <> 'R' THEN PK END AS Func,
COUNT(*)
FROM Table1
GROUP BY 1
ORDER BY 2 DESC;

Func                 COUNT(*)
NULL                3,438,058
2,000,186,886,985           1
2,000,191,425,938           1
2,000,179,133,299           1
2,000,195,702,747           1
2,000,174,021,594           1
2,000,190,591,042           1

Most rows evaluate the join expression to NULL. Because all NULLs produce the same hash value, every one of those rows is redistributed to the same AMP, and the query becomes highly skewed.
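On a live system, the skew can be confirmed before running the join by asking Teradata directly which AMP each value of the join expression would land on, using the built-in hash functions HASHROW, HASHBUCKET, and HASHAMP. A sketch against the same Table1:

```sql
-- Count how many Table1 rows the join expression would send to each AMP.
-- HASHROW/HASHBUCKET/HASHAMP are standard Teradata hash functions.
SELECT
    HASHAMP(HASHBUCKET(HASHROW(
        CASE WHEN TheCode <> 'R' THEN PK END
    ))) AS TargetAMP,
    COUNT(*) AS RowCnt
FROM Table1
GROUP BY 1
ORDER BY 2 DESC;
```

Because all NULL results hash to the same value, the top AMP in this output receives virtually all rows.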

Here is the execution plan. Table2 must be redistributed based on the hash code of our complex join expression.

As expected, most rows land on one specific AMP: the one to which the NULL hash value maps.

The hash join that follows is therefore skewed.

Explain SELECT *
FROM Table1 t01
LEFT OUTER JOIN Table2 t02
ON t02.PK = CASE WHEN t01.TheCode <> 'R' THEN t01.PK END
AND DATE '2016-02-29' BETWEEN t02.RSD AND t02.RED
AND t02.TSS = TIMESTAMP '9999-12-31 23:59:59'
1) First, we lock TheDatabase.Table22 in view
TheDatabase.Table2 for access.
2) Next, we execute the following steps in parallel.
1) We do an all-AMPs RETRIEVE step from
MyUser.t01 by way of an all-rows scan with
no residual conditions into Spool 2 (all_amps) (compressed
columns allowed) fanned out into 39 hash join partitions,
which is redistributed by the hash code of ((CASE WHEN
(MyUser.t01.TheCode <> 'R') THEN
(MyUser.t01.PK) ELSE (NULL) END
)(BIGINT)) to all AMPs. The size of Spool 2 is estimated
with high confidence to be 12,380,063 rows (5,434,847,657
bytes). The estimated time for this step is 21.92 seconds.
2) We do an all-AMPs RETRIEVE step from TheDatabase.Table22
in view TheDatabase.Table2 by way of an all-rows scan
with a condition of ("(TheDatabase.Table22 in view
TheDatabase.Table2.RSD <= DATE '2016-02-29') AND
((TheDatabase.Table22 in view
TheDatabase.Table2.RED >= DATE '2016-02-29') AND
((TheDatabase.Table22 in view
TheDatabase.Table2.TSS = TIMESTAMP '9999-12-31
23:59:59.000000') AND (TheDatabase.Table22 in view
TheDatabase.Table2.VERSION = 0 )))") into Spool 3
(all_amps) (compressed columns allowed) fanned out into 39
hash join partitions, which is built locally on the AMPs.
The size of Spool 3 is estimated with no confidence to be
21,632,828 rows (3,893,909,040 bytes).  The estimated time
for this step is 6.79 seconds.
3) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of an
all-rows scan, which is joined to Spool 3 (Last Use) by way of an
all-rows scan. Spool 2 and Spool 3 are left outer joined using a
hash join of 39 partitions, with a join condition of ("PK =
(( CASE WHEN (TheCode <> 'R') THEN (PK) ELSE (NULL) END ))").
The result goes into Spool 1 (group_amps), which is built locally
on the AMPs. The size of Spool 1 is estimated with no confidence
to be 12,380,063 rows (7,985,140,635 bytes). The estimated time
for this step is 8.22 seconds.
4) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 30.15 seconds.

Fortunately, it is straightforward to rewrite the query to prevent data redistribution and skew:

SELECT *
FROM
Table1 t01
LEFT OUTER JOIN
Table2 t02
ON
t02.PK = t01.PK
AND DATE '2016-02-29' BETWEEN t02.RSD AND t02.RED
AND t02.TSS = TIMESTAMP '9999-12-31 23:59:59'
AND t01.TheCode <> 'R'
;

Because one of the tables is partitioned, the optimizer can now apply a sliding-window merge join directly on the primary index of both tables: no join expression has to be evaluated, and no data is redistributed.
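The rewrite is safe because, in a left outer join, a predicate on the left table placed in the ON clause only controls which rows find a match; left-table rows are never eliminated. For rows with TheCode = 'R', the original CASE expression yielded NULL (which never equals t02.PK), and the rewritten condition t01.TheCode <> 'R' is false, so both formulations produce an unmatched left row. One way to sanity-check such a rewrite is to compare the two result sets with EXCEPT (a sketch with an abbreviated column list; run it in both directions to catch differences on either side):

```sql
-- Should return zero rows if both formulations are equivalent.
SELECT * FROM (
    SELECT t01.PK, t02.PK AS PK2
    FROM Table1 t01
    LEFT OUTER JOIN Table2 t02
      ON t02.PK = (CASE WHEN t01.TheCode <> 'R' THEN t01.PK END)
     AND DATE '2016-02-29' BETWEEN t02.RSD AND t02.RED
     AND t02.TSS = TIMESTAMP '9999-12-31 23:59:59'
) old_q
EXCEPT
SELECT * FROM (
    SELECT t01.PK, t02.PK AS PK2
    FROM Table1 t01
    LEFT OUTER JOIN Table2 t02
      ON t02.PK = t01.PK
     AND DATE '2016-02-29' BETWEEN t02.RSD AND t02.RED
     AND t02.TSS = TIMESTAMP '9999-12-31 23:59:59'
     AND t01.TheCode <> 'R'
) new_q;
```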

Here is the plan:

Explain SELECT *
FROM
Table1 t01
LEFT OUTER JOIN
Table2 t02
ON
t02.PK = t01.PK
AND DATE '2016-02-29' BETWEEN t02.RSD AND t02.RED
AND t02.TSS = TIMESTAMP '9999-12-31 23:59:59'
AND t01.TheCode <> 'R'
1) First, we lock TheDatabase.Table22 in view
TheDatabase.Table2 for access.
2) Next, we do an all-AMPs JOIN step from
MyUser.t01 by way of a RowHash match scan with
no residual conditions, which is joined to TheDatabase.Table22
in view TheDatabase.Table2 by way of a RowHash match scan with
a condition of ("(TheDatabase.Table22 in view
TheDatabase.Table2.RSD <= DATE '2016-02-29') AND
((TheDatabase.Table22 in view TheDatabase.Table2.RED >=
DATE '2016-02-29') AND ((TheDatabase.Table22 in view
TheDatabase.Table2.TSS = TIMESTAMP '9999-12-31
23:59:59.000000') AND (TheDatabase.Table22 in view
TheDatabase.Table2.VERSION = 0 )))").
MyUser.t01 and TheDatabase.Table22 are
left outer joined using a sliding-window merge join, with
condition(s) used for non-matching on left table (
"((MyUser.t01.TheCode > 'R') OR
(MyUser.t01.TheCode < 'R')) AND (NOT
(MyUser.t01.PK IS NULL ))"), with a join
condition of ("TheDatabase.Table22.PK =
MyUser.t01.PK").  The result goes into
Spool 1 (group_amps), which is built locally on the AMPs. The
size of Spool 1 is estimated with no confidence to be 12,380,063
rows (7,985,140,635 bytes).  The estimated time for this step is
12.17 seconds.
3) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1. The total estimated time is 12.17 seconds.

Here is a summary of the improvements in resource utilization:

Query        Start        End          Rows        IOs         CPU (s)  Spool (bytes)
New Version  05.04 11:38  05.04 11:39  12,379,740  268,383.00  136.52   5,150,474,240.00
Old Version  05.04 11:40  05.04 11:46  12,379,740  437,771.00  178.46   5,152,128,000.00

We decreased disk I/O operations by approximately 40% and lowered CPU utilization by 25%.

See also:
Teradata Query Rewriting & Partition Elimination


