Functions on Join Columns and Their Impact on Teradata Performance
In many Teradata systems, developers apply functions directly in join conditions to work around data-model inconsistencies.
While this approach might seem harmless, it can dramatically affect optimizer decisions and query performance — and often reveals deeper data-model issues.
Example of a Problematic Join
Applying functions to join columns prevents Teradata from using existing statistics or hash access paths:
SELECT A.column1, A.column2, B.column1, B.column2
FROM tableA A
JOIN tableB B
ON SUBSTRING(A.join_column, 1, 3) = B.join_column;
Why this is slow:
The SUBSTRING() expression on A.join_column disables the use of row-hash access and collected statistics on that column, forcing a full-table scan on both sides.
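If the three-character prefix can instead be stored as its own column (the column name join_prefix here is hypothetical, populated during load), the function disappears from the join condition and hash access becomes possible again. A sketch:

```sql
-- Assumes tableA has been extended with a persisted prefix column,
-- e.g. populated during load: join_prefix = SUBSTRING(join_column, 1, 3).
-- With no function in the join condition, row-hash access and any
-- collected statistics on join_prefix can be used by the optimizer.
SELECT A.column1, A.column2, B.column1, B.column2
FROM tableA A
JOIN tableB B
  ON A.join_prefix = B.join_column;
```

Collecting statistics on the new join_prefix column then gives the optimizer a usable histogram for this join.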
What Is the Effect of Functions on Join Performance?
Functions such as SUBSTR(), TRIM(), COALESCE(), CASE expressions, and even certain LIKE patterns hinder the optimizer’s ability to apply statistics.
Without usable histograms, the optimizer must rely on heuristics, often mis-estimating join cardinalities and causing inefficient join plans.
Rule of thumb:
Any transformation that changes the column value before comparison hides its true distribution from the optimizer.
What If I Cannot Avoid Functions?
Ideally, fix the data model so that columns are comparable without transformation.
If that’s not immediately possible, consider these mitigations:
- Derived Column: Add a stored column containing the transformed value (e.g. SUBSTR(join_column, 1, 3)) and join on that instead.
- Expression Join Index: Pre-compute the expression in a Join Index so that the optimizer can still use statistics.
- Volatile Table Normalization: As a last resort, preprocess data into a volatile table with normalized join keys before joining.
These are workarounds — not substitutes for a clean model.
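The first two mitigations might look like this. Table, column, and index names are illustrative, and the exact Join Index syntax can vary by Teradata release:

```sql
-- Mitigation 1: derived column, populated during load / ELT.
ALTER TABLE tableA ADD join_prefix CHAR(3);
UPDATE tableA SET join_prefix = SUBSTRING(join_column FROM 1 FOR 3);
COLLECT STATISTICS COLUMN (join_prefix) ON tableA;

-- Mitigation 2: pre-compute the expression in a Join Index,
-- so the optimizer can serve the join from the index instead
-- of applying the function at query time.
CREATE JOIN INDEX ji_tableA_prefix AS
SELECT SUBSTRING(join_column FROM 1 FOR 3) AS join_prefix,
       column1, column2
FROM tableA;
```

The derived column costs extra storage and must be kept in sync with join_column, so populating it in the load process is usually safer than ad-hoc updates.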
Other Disadvantages of Functions on Join Columns
Most functions force a full-table scan because the join condition no longer maps to any hash access path.
This increases I/O, CPU, and spool usage across all AMPs.
Even when statistics exist, they can’t be leveraged once the join expression changes the column’s semantics.
Functions With Minimal Impact
Some functions allow partial optimization.
For example:
SELECT * FROM Customer WHERE LastName LIKE 'Wen%';
If the pattern starts with a literal and ends with %, Teradata can still use histogram boundaries for filtering.
In contrast, using:
WHERE SUBSTR(LastName,1,3) = 'Wen'
would disable those histograms and trigger a full scan.
Conclusion
The underlying problem is almost always a flawed data model.
Using functions on join columns hides normalization issues, confuses the optimizer, and inflates resource consumption.
Investing in proper model design yields far greater long-term performance than any SQL-level workaround.
The Snowflake Perspective
Snowflake behaves differently but faces a similar principle:
Expression-based joins also limit optimization.
- Snowflake’s optimizer cannot use micro-partition pruning when join predicates involve functions such as SUBSTR() or TRIM(); all affected micro-partitions must be scanned.
- Because Snowflake is columnar and compute-elastic, the penalty is often smaller, but the extra scans still increase cost and latency.
- The solution is identical in spirit: materialize normalized keys or preprocess data before joining.
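In Snowflake, the "materialize normalized keys" approach could be sketched as follows (table and column names are illustrative):

```sql
-- Persist the normalized key at load/ELT time so the join itself
-- stays function-free and micro-partition pruning remains possible.
CREATE OR REPLACE TABLE tableA_normalized AS
SELECT SUBSTR(join_column, 1, 3) AS join_key,
       column1, column2
FROM tableA;

-- The join now compares plain columns on both sides.
SELECT A.column1, A.column2, B.column1, B.column2
FROM tableA_normalized A
JOIN tableB B
  ON A.join_key = B.join_column;
```

A view with the same SELECT would also work for low-volume cases, but a materialized table (or materialized view) keeps the transformation out of the join path entirely.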
In both systems: fixing the data model removes the need for function-based joins.
Migration Guidelines: Teradata → Snowflake
| Concept | Teradata | Snowflake | Migration Tip |
|---|---|---|---|
| Function on join column | Disables statistics & row-hash access | Breaks micro-partition pruning | Pre-compute normalized join key before load |
| Optimizer behavior | Falls back to full-table scan | Scans all micro-partitions touched by expression | Materialize join key or use staging view |
| Workarounds | Derived column, Join Index, volatile table | Derived column, materialized view | Apply transformation during ELT |
| Root cause | Denormalized / inconsistent model | Same issue; compute elasticity hides it | Correct the model in the source layer |
Key Takeaway:
Avoid functions in join predicates — they cripple optimization in Teradata and weaken pruning in Snowflake.
Pre-normalize data and keep join columns function-free to ensure both platforms perform at their best.
Hi Roland,
Should functions on join columns be avoided only in the join condition, or throughout the entire SQL statement?
I’m asking from a statistics-usage perspective.
If my join condition is clean but I apply a function to the join column in the SELECT list, does the optimizer still fail to use the column’s statistics?
Regards,
Virendra
Only in the join condition; the select list doesn’t matter.
Still, if you can avoid functions in your WHERE conditions, that’s an advantage.
Since Teradata 14.10 you can collect statistics on expressions, but they are most likely useful only for the exact SQL statement you designed them for. Simple column statistics, on the other hand, are normally useful across many queries.
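Such an expression statistic for the earlier join might be collected like this (Teradata 14.10+ syntax; the statistic name is illustrative):

```sql
-- A statistic on the expression itself. The optimizer can only use it
-- for predicates containing this exact expression.
COLLECT STATISTICS COLUMN (SUBSTRING(join_column FROM 1 FOR 3)) AS stats_join_prefix
ON tableA;
```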
If no statistics can be used, heuristics are applied, and these can be completely wrong because they assume a fixed number of rows being returned.