Teradata Golden Tuning Tips 2017

The goal of SQL tuning is to cut resource usage. There are two measures we have to watch: disk IOs and CPU seconds. These are absolute values: they are not influenced by concurrent system activity and stay stable for a given execution plan.

Don’t use execution times as your optimization target. Many irrelevant factors will affect run times:

Session blocking, workload delays, heavy concurrent workload, etc.

The most expensive task for any RDBMS is to move data from the mass storage devices to memory. Many of the techniques & ideas described below reduce the number of transferred data blocks (from disk to memory and vice versa).

Some of them help to cut CPU consumption.

Ensure Completeness and Correctness of Teradata Statistics

The most important optimization task is to aid the Teradata Optimizer with complete and correct statistics.

We have to pay particular attention to 3 basic situations and always collect full statistics:

  • Non-indexed columns used in predicates:
    Missing statistics force the Optimizer to fall back on very inaccurate heuristic estimations.
  • Skewed indexed columns: Random-AMP sampling on a skewed index results in wrong estimates and execution plans.
  • Small tables: Estimations for small tables are poor when there are fewer table rows than AMPs. In this case, most AMPs will not contain rows. A random-AMP sample taken from an AMP holding no rows or just a few rows will give a wrong estimation.

By following these three rules, many problems related to missing statistics will vanish.
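As a minimal sketch of the corresponding statements (table and column names are placeholders), full statistics for the three situations could be collected like this:

COLLECT STATISTICS COLUMN (<PredicateColumn>) ON <Table>;          -- non-indexed column used in predicates
COLLECT STATISTICS COLUMN (<SkewedIndexColumn>) ON <Table>;        -- skewed indexed column
COLLECT STATISTICS COLUMN (<PrimaryIndexColumn>) ON <SmallTable>;  -- small table with fewer rows than AMPs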

The goal of our tuning activities is to find the right balance between good query plans and the time needed to maintain useful statistical information.

Discovering Missing Statistics

The simplest way to locate missing statistics is to turn on diagnostics:

DIAGNOSTIC HELPSTATS ON FOR SESSION;

The statement above adds statistics recommendations at the end of each explained statement:

EXPLAIN
SELECT * FROM <Table> WHERE <Column> = 'value';

I would test each recommendation separately. Don't add all recommendations at once, but wisely choose the ones which improve query performance.

Ensure that no plan step has "no confidence." Steps with "no confidence" are a sure sign of heuristic estimations on non-indexed columns, something we must avoid at all costs!


Detection of Stale Statistics

The Optimizer does an excellent job of detecting stale statistics and extrapolating.

Before release 14.10, Teradata detected table growth by comparing two random-AMP samples: one taken during statistics collection, the other during query execution.

Since Teradata Release 14.10, deleted and inserted rows can be tracked (the "UDI" counts). For extrapolation, the Optimizer prefers UDI counts over random-AMP sample comparison.

Still, the best way to ensure up to date statistics is to collect them often.

If we need to find stale statistics, we have to compare estimations against the real row counts. There are two places where we can find this information:

  • The Execution Plan (by putting the EXPLAIN modifier in front of our query)
  • The Statistics Histogram (by executing the SHOW STATISTICS VALUES ON <TABLE> statement)
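For the first option, a minimal comparison could look like this (placeholders as before):

EXPLAIN SELECT * FROM <Table> WHERE <Column> = 'value';  -- note the estimated rows of the retrieve step
SELECT COUNT(*) FROM <Table> WHERE <Column> = 'value';   -- the real row count to compare against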

The statistics histograms give us almost the same information the Optimizer has available for creating the execution plan:

  • Timestamp of the last statistics collection
  • Collected & random-AMP sampling row counts
  • Inserted and deleted rows since the last statistics collection
  • Maximum column values
  • Minimum column values
  • Biased column values

Why did I say "almost the same information"? Estimations improve each time the Optimizer gains new insight; such insight can come from earlier join steps, single-table predicates, and aggregations.

Furthermore, different methods of extrapolation will adjust the estimations shown in the histograms.

If you need the estimations after extrapolation, you can use the following statement:

SHOW CURRENT STATISTICS VALUES ON <Table>;

Here is a “single table predicate” example. It demonstrates the usage of derived estimations:

SELECT * FROM <Table1> t01 INNER JOIN <Table2> t02 ON t01.Key = t02.Key WHERE t01.<Column> IN (1,2,3);

The retrieve step estimation for <Table1> is 3 distinct values. This information flows into the joining step. If the average number of rows per value for both tables is about 1 (unique data), the resulting spool for the join is three rows.

A straightforward approach to detect stale statistics is this:

  • Decompose the query into single retrieve and join steps
  • Figure out why step estimations and real numbers don't match.
    Statistics could be outdated, or they could be missing. It might even be that the estimation is a result of the way the Optimizer works.
    Here is an example:

SELECT * FROM <Table> WHERE <Column> = 1; -- value 1 is not available in <Table>

As value 1 is not available in <Column>, the Optimizer will estimate the result set to be the average number of rows per value. I guess many of you expected zero rows?

More information is available here: Teradata Statistics Basics

My last piece of advice:

Keep a copy of both the execution plans – before and after the change – to see the impact of your change.

The Primary Index Choice

When it comes to the Primary Index choice, we must strike a fair balance, weighing opposing requirements against each other:

Even data distribution and join performance.

Two tables can only be joined directly if the rows to be joined are located on the same AMP. This is the case when both tables have the same primary index on the join columns.

Design your queries in a way that the Primary Index is used for joining as much as possible, as this is the cheapest way of joining.

AMP-local joining is also possible if the join condition includes additional columns which are not part of the primary index. But if the join condition does not include all primary index columns, the rows of one or both tables have to be relocated (and maybe sorted).

If you need a different primary index to improve performance, you can use volatile tables or temporary tables. Create them with the same structure and content as the original tables but with the needed primary index. The use of temporary tables is particularly useful if your query consists of many join steps.
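Here is a minimal sketch of this technique; table, column, and index names are placeholders:

CREATE VOLATILE TABLE <Table1_VT> AS
(
   SELECT * FROM <Table1>
) WITH DATA
PRIMARY INDEX (<JoinColumn>)
ON COMMIT PRESERVE ROWS;

COLLECT STATISTICS COLUMN (<JoinColumn>) ON <Table1_VT>;

The volatile copy now hashes its rows by <JoinColumn>, so a join on this column can be done AMP-locally.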

Teradata Indexing & Partitioning

Indexes give the Optimizer more data access paths. They improve highly selective retrieve actions. Unfortunately, indexes consume permanent disk space, and they require maintenance when the underlying base table changes.

Recommendation:

If indexes are not used by the Optimizer and not useful in the PDM design, drop them immediately. They will only waste space and resources.

A unique secondary index (USI) allows for direct row access (like the primary index). A non-unique secondary index (NUSI) requires a full index subtable scan on all AMPs.

The NUSI can be an advantage over base table access if the index subtable is smaller than the base table. Covering NUSIs are more useful than non-covering ones: no base table lookups are needed, and no related costs are created.
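As an illustration (index and column names are placeholders), a USI and a NUSI are created like this:

CREATE UNIQUE INDEX (<Column1>) ON <Table>;        -- USI: direct row access
CREATE INDEX (<Column2>) ON <Table>;               -- NUSI: all-AMP index subtable scan
COLLECT STATISTICS COLUMN (<Column2>) ON <Table>;  -- without statistics the Optimizer will not use the NUSI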

The difference between an index and partitioning is that indexes are sub-tables. Partitioning is another way of structuring the base table. Partitioning allows the Optimizer to limit access to the data blocks of a partition.

The advantage of partitioning is that partition elimination is always used. Index usage has preconditions. For example, the NUSI will not be used without statistics collected on the index columns.
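A minimal sketch of a partitioned table and a query which allows partition elimination (names, dates, and ranges are only examples):

CREATE TABLE <SalesTable>
(
   SalesId   INTEGER NOT NULL,
   SalesDate DATE NOT NULL,
   Amount    DECIMAL(18,2)
)
PRIMARY INDEX (SalesId)
PARTITION BY RANGE_N (SalesDate BETWEEN DATE '2016-01-01' AND DATE '2017-12-31' EACH INTERVAL '1' MONTH);

SELECT * FROM <SalesTable>
WHERE SalesDate BETWEEN DATE '2017-01-01' AND DATE '2017-01-31'; -- only one partition has to be scanned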

Still, there is a significant advantage of indexing:

We create the index; we check if it’s used. We drop it if it’s not used. Have you ever partitioned a 200 Terabyte table for test reasons? I guess this is not what we like to do.

Another disadvantage of partitioning comes to my mind:

Join performance worsens if the partitions of both tables don't match or one of the tables is not partitioned.

Whenever working with partitioning, you have to keep the data warehouse architecture in mind. Decide if your solution fits into it. Partitioned tables use different join techniques. If tables have different partitions, this has an adverse impact on join performance!

Conclusion: There is no one-size-fits-all method. You have to check what works best for you!

Query Rewriting

Often query performance improves when the query is rewritten. Here are some ideas:

  • DISTINCT instead of GROUP BY, depending on the number of different values
  • UNION ALL instead of UNION (getting rid of a sort step)
  • Splitting a skewed query into two parts: one handling the skewed values, the other dealing with the rest (see the sketch below)
  • Adding WHERE conditions to allow for partition elimination
  • Removing unreferenced columns and expressions from the select list (can help to cut joins)
  • Converting outer joins into inner joins
  • Breaking up the SQL into smaller pieces (using volatile tables)
  • Getting rid of joins on expressions
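Here is the sketch for splitting a skewed query mentioned above; all names and the skewed value are placeholders:

SELECT t01.*, t02.<SomeColumn>
FROM <Table1> t01
INNER JOIN <Table2> t02
ON t01.<JoinColumn> = t02.<JoinColumn>
WHERE t01.<JoinColumn> <> 'SKEWED_VALUE'
UNION ALL
SELECT t01.*, t02.<SomeColumn>
FROM <Table1> t01
INNER JOIN <Table2> t02
ON t01.<JoinColumn> = t02.<JoinColumn>
WHERE t01.<JoinColumn> = 'SKEWED_VALUE';

The second branch isolates the skewed value, so the Optimizer can pick a different join strategy for it.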

Query rewriting can improve performance, sometimes even when all other techniques fail.

Query rewriting is a very powerful way to improve performance. Still, it often requires understanding the business logic (“can I replace this left join with an inner join?”).

It is possible to rewrite a query in a purely technical way. Still,  understanding the business logic of a query reveals more tuning opportunities.

See also our article on query rewriting here: Teradata Tuning – Query Rewriting

Physical Data Model Design

I was not sure if I should add the physical data model to this list, for one reason: often we can't make any significant changes to the physical database design. Too many consumers on top of the database (such as reporting tools and data marts) would require a redesign.

Still, if we can improve the physical model, this is one of the most productive changes.

The best advice I can give:

Keep your core model normalized, denormalize in your data marts. Many of you will not agree. I can live with this. My experience is that early denormalization causes bad performance, especially when done without apparent reason.

Check all columns of all tables to see whether the stored information can be broken down further. Ensure also that columns used for joins have the same data types and character sets.

Columns containing more than one piece of information force the user to join on expressions. Most likely, the Optimizer will then use neither statistics nor the primary index for joining.

Ensure that all primary key columns are defined as NOT NULL. Add default values where appropriate. If you store codes, there is no reason to use UNICODE. It will just waste space.

Apply Multi Value compression on all tables:

More rows fit into each data block. Data blocks are the smallest unit transferred between disk and memory; more rows per data block lead to fewer disk IOs and better performance.
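A minimal sketch combining the column advice above with multi value compression (all names and compress values are just examples):

CREATE TABLE <CustomerTable>
(
   CustomerId  INTEGER NOT NULL,
   CountryCode CHAR(2) CHARACTER SET LATIN NOT NULL
               COMPRESS ('AT','DE','CH'),
   Status      CHAR(1) CHARACTER SET LATIN NOT NULL DEFAULT 'A'
               COMPRESS ('A','I')
)
PRIMARY INDEX (CustomerId);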

Please consider that the advice above can only be a loose collection of ideas about how to fix a broken data model.

 

Make Use of Teradata-Specific Features

There are several unique optimization opportunities which you should consider:

  • Using MULTISET tables can decrease Disk IOs
  • Use Multi-Statement-Requests, as this avoids the usage of the transient journal and does block optimization
  • Use CLOB columns instead of VARCHAR if you seldom select the column. Teradata stores CLOBs in sub-tables
  • DELETE instead of DROP tables
  • Use ALTER TABLE instead of INSERT…SELECT into a copy
  • Use MERGE INTO instead of INSERT & UPDATE (see the sketch below)
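For the last point, a minimal MERGE INTO sketch (tables, keys, and columns are placeholders):

MERGE INTO <TargetTable> tgt
USING <StagingTable> src
ON (tgt.<Key> = src.<Key>)
WHEN MATCHED THEN
   UPDATE SET <Attribute> = src.<Attribute>
WHEN NOT MATCHED THEN
   INSERT (<Key>, <Attribute>)
   VALUES (src.<Key>, src.<Attribute>);

A single MERGE replaces the separate UPDATE and INSERT passes over the target table.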

 

 

Real Time Monitoring

Observing a query while it's running allows us to detect the critical steps. Most people use Viewpoint for real-time monitoring. I prefer another tool called dbmon, written by a guy from Teradata Austria (I hate the slowness of Viewpoint).

Bad SQL performance is either caused by:

  • Skewed steps or
  • Stale and missing statistics. They fool the Optimizer into making wrong join decisions: product joins instead of merge joins, duplicating instead of rehashing

That’s my way of real-time monitoring:

I wait for a step which is either skewed or in which the estimated input rows to a join don't make sense (such as two spools with millions of rows being joined with a product join). I then concentrate my optimization on steps of this type.

If skew is the issue, I will analyze the join column value skew. If estimations are wrong, I will go over the statistics and make sure that they are up to date.

Issues with statistics can be fixed quickly. Skew issues can be quite stubborn. Query rewriting is always my last option unless I find something foolish and easy to repair.

Comparison of Resource Usage

Always measure resource usage before and after the optimization. As I said earlier: query run times are not a reliable measure!

Here is a SQL query you can use in your daily work to extract the relevant measures from the query log. You have to set a different QUERYBAND for each query version you are running to be able to distinguish them. You need "select" access to DBC.DBQLOGTBL.

SET QUERY_BAND = 'Version=1;' FOR SESSION;

SELECT
   AMPCPUTIME,
   (FIRSTRESPTIME-STARTTIME DAY(2) TO SECOND(6)) RUNTIME,
   SPOOLUSAGE/1024**3 AS SPOOL_IN_GB,
   CAST(100-((AMPCPUTIME/(HASHAMP()+1))*100/NULLIFZERO(MAXAMPCPUTIME)) AS INTEGER) AS CPU_SKEW,
   MAXAMPCPUTIME*(HASHAMP()+1) AS CPU_IMPACT,
   AMPCPUTIME*1000/NULLIFZERO(TOTALIOCOUNT) AS LHR
FROM
   DBC.DBQLOGTBL
WHERE
   QUERYBAND = 'Version=1;';

The query will return:

  • The total CPU Usage
  • The Spool Space needed
  • The LHR (ratio between CPU and IO usage)
  • The CPU Skew
  • The Skew Impact on the CPU

The goal is to cut total CPU usage, consumed spool space and skew.

Tactical Workload

Tactical workload requires a very particular skillset. The best tuner will fail if he doesn’t recognize that he is dealing with a tactical workload. I have seen complete projects failing because developers ignored this important fact.

I strongly recommend reading this post, which explains all the details:

Tactical Workload Tuning on Teradata

Questions?
If you have any questions about all this, please ask in the comments! I’ll be paying close attention and answering as many as I can. Thank you for reading. Whatever this blog has become, I owe it all to you.
