What are the Goals of SQL Tuning?

To optimize a database system, we must minimize resource consumption. The three crucial metrics to consider are skew, IOs, and CPU seconds. Our experience has shown that Teradata systems struggle most with IOs and skew. Transferring rows from mass storage to main memory is time-intensive because Teradata can only transfer whole data blocks, each containing a certain number of rows, rather than individual rows. A data block must therefore be loaded into main memory before the required rows can be located.

CPU-bound systems do exist, but I have rarely encountered this limitation in practice.

Our optimization metrics are therefore IOs, CPU time, and skew. While runtime is what matters most to our users, optimizing queries based on runtime is impractical because it depends on external factors beyond our control, especially the concurrent workload. By improving our chosen metrics, we ultimately improve runtimes as well.

These metrics have the advantage of being absolute: for a given execution plan, a query's resource consumption remains the same on every execution.

1. Ensure Completeness and Correctness of Teradata Statistics

To optimize Teradata, accurate and comprehensive statistics must be provided to support its optimizer.

Three fundamental situations always require full statistics collection:

  1. Non-indexed columns used in predicates:

    Missing statistics force the Optimizer into wildly inaccurate heuristic estimations.
  2. Skewed indexed columns:

    Random-AMP sampling on a skewed index leads to wrong estimates and poor execution plans.
  3. Small tables:

    Estimations for small tables are unreliable when there are fewer table rows than AMPs, because most AMPs then hold no rows at all. A dynamic-AMP sample taken from an AMP holding no rows gives a wrong estimation.

Adhering to these three guidelines resolves most issues caused by missing statistics.

In short, we optimize Teradata SQL performance by giving the Optimizer the statistics it needs to build balanced query plans.

Discovering Missing Statistics

Enable diagnostics to uncover missing statistics.

DIAGNOSTIC HELPSTATS ON FOR SESSION;

With diagnostics enabled, each EXPLAIN output includes statistics recommendations:

EXPLAIN
SELECT * FROM <Table> WHERE Column = 'value';

We must assess each recommendation individually and adopt only those that improve query efficiency; we never integrate all proposals at once.

Watch out for plan steps marked "no confidence": they indicate heuristic estimations on non-indexed columns, which should be avoided at all costs.
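
In practice, adopting a recommendation means collecting the suggested statistics. A minimal sketch, assuming a non-indexed predicate column; table and column names are illustrative:

-- Hypothetical example: a non-indexed column used in predicates
COLLECT STATISTICS COLUMN (CustomerStatus) ON Customer;

Afterward, re-run the EXPLAIN and verify that the confidence levels of the affected steps improve.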

Detection of Stale Statistics

The Optimizer can detect stale statistics and extrapolate row counts accordingly.

Before Teradata 14.10, table growth was detected by comparing two random-AMP samples – one gathered during statistics collection and the other during query optimization.

Teradata Release 14.10 introduced UDI counts, which track inserted and deleted rows. The Optimizer gives UDI counts higher priority than the random-AMP sample comparison.

Frequently collecting statistics is the most effective way to ensure up-to-date statistics.
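
Since Teradata 14.10, recollection can also be automated with thresholds, so statistics are only refreshed when enough rows have changed or enough time has passed. A sketch, assuming Teradata 14.10 or later; the threshold values and names are illustrative:

-- Recollect only if at least 10% of rows changed or 7 days have passed
COLLECT STATISTICS
   USING THRESHOLD 10 PERCENT AND THRESHOLD 7 DAYS
   COLUMN (CustomerStatus)
ON Customer;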

To determine which tables require statistics updates, we must compare the Optimizer's estimates with the actual row counts. This data can be obtained in two ways:

  • The Execution Plan (by placing the EXPLAIN modifier in front of our query)
  • The Statistics Histogram (by executing the SHOW STATISTICS VALUES ON <TABLE> statement)

The histograms provide the Optimizer with almost the same information it uses to create the execution plan:

  • The timestamp of the last statistics collection
  • Collected and random-AMP sample row counts
  • Inserted and deleted rows since the last statistics collection
  • Maximum column values
  • Minimum column values
  • Biased column values

Why did I say “almost the same information”? The statistics improve with every new insight the Optimizer acquires, which can come from prior join steps, individual table predicates, or aggregations.

Different extrapolation techniques can modify the estimates shown in the histograms.

For extrapolated estimates, please refer to the following statement:

SHOW CURRENT STATISTICS VALUES ON <Table>;

Here is an example of a single-table predicate whose estimate feeds into a join step:

SELECT * FROM <Table1> t01 INNER JOIN <Table2> t02 ON t01.Key = t02.Key WHERE t01.Column IN (1,2,3);

The retrieve step on <Table1> is estimated to deliver three distinct values into the join. If both tables hold an average of one row per value (unique data), the join results in a spool of three rows.

Stale statistics can be detected using a straightforward method:

  • Decompose the query into single retrieve and join steps
  • Figure out why step estimations and real numbers don’t match
    Statistics could be outdated, or they could be missing. It might even be that the estimation results from how the Optimizer works.
    Here is an example:
SELECT * FROM <Table> WHERE Column = 1;

Value 1 is not present in <Table>. Some of you may expect an estimate of zero rows, but the Optimizer estimates the result set based on the average number of rows per value.

More information is available here: Teradata Statistics Basics

Finally, my advice:

Keep records of the execution plans before and after each change to evaluate its impact.

2. The Primary Index Choice

Choosing the Primary Index requires balancing two competing goals: even data distribution and join performance.

For two tables to be joined directly, the rows of both tables must be located on the same AMP, which is the case when both tables have the join columns as their primary index.

Maximize query efficiency by utilizing the Primary Index for joining whenever possible, as it is the most economical method.

A join can only be AMP-local when the join condition includes all primary index columns. If the primary index columns are not part of the join condition, the rows of one or both tables must be relocated and possibly sorted.

Consider utilizing volatile or temporary tables with a distinct primary index to enhance performance. These tables should have the same structure and content as the original ones while having the necessary primary index. Utilizing temporary tables can be beneficial for queries involving numerous join steps.
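
Here is a minimal sketch of this technique; table and column names are illustrative, and we assume the join is on CustomerKey:

-- Copy of the original table, redistributed on the join column
CREATE VOLATILE TABLE Sales_Tmp AS
(
   SELECT * FROM Sales
)
WITH DATA
PRIMARY INDEX (CustomerKey)
ON COMMIT PRESERVE ROWS;

-- This join can now be executed AMP-locally
SELECT t01.SalesId, t02.CustomerName
FROM Sales_Tmp t01
INNER JOIN Customer t02 ON t01.CustomerKey = t02.CustomerKey;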

3. Teradata Indexing & Partitioning

Indexes offer supplementary data access routes to the Optimizer, improving selective retrieval operations. Nevertheless, they take up permanent disk space and require upkeep in case of any modifications made to the underlying base table.

Indexing Recommendations:

Promptly drop any index that the Optimizer does not use and that serves no purpose in the PDM design; otherwise it needlessly consumes resources and permanent storage.

A unique secondary index (USI) allows direct access to rows like a primary index. In contrast, a non-unique secondary index (NUSI) requires a full index subtable scan on all AMPs.

NUSIs offer an advantage over the base table when the index subtable is smaller. Covering NUSIs are particularly advantageous, as they remove the need for base table lookups altogether.
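
The DDL for both index types is simple. A sketch with illustrative names, including the statistics collection a NUSI needs before the Optimizer will consider it:

CREATE UNIQUE INDEX (OrderNr) ON Orders;      -- USI: direct row access
CREATE INDEX (LastName) ON Customer;          -- NUSI: all-AMP index subtable scan
COLLECT STATISTICS COLUMN (LastName) ON Customer;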

Indexes are sub-tables, while partitioning is another way of organizing the base table itself. Partitioning allows the Optimizer to limit access to the data blocks of specific partitions; its main advantage is the regular use of partition elimination. Indexes, in contrast, come with prerequisites: for example, statistics must be collected on the index columns before the Optimizer will use a NUSI.
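
A sketch of a partitioned table and a query benefiting from partition elimination; the table layout, names, and date range are assumptions:

CREATE TABLE Sales
(
   SalesId   INTEGER NOT NULL,
   SalesDate DATE NOT NULL,
   Amount    DECIMAL(18,2)
)
PRIMARY INDEX (SalesId)
PARTITION BY RANGE_N (SalesDate BETWEEN DATE '2024-01-01' AND DATE '2024-12-31' EACH INTERVAL '1' MONTH);

-- Only the January partition is read
SELECT SUM(Amount)
FROM Sales
WHERE SalesDate BETWEEN DATE '2024-01-01' AND DATE '2024-01-31';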

Indexing offers a significant advantage: we can create an index, observe whether it is used, and discard it if it proves unnecessary. Have you ever tried re-partitioning a 200-terabyte table just for experimentation? Presumably not an optimal undertaking.

Another disadvantage of partitioning is worth considering:

Joins can perform poorly when the partitioning of the two tables does not match, or when only one of them is partitioned.

When implementing partitioning, consider the data warehouse architecture and ensure your solution aligns with it. Remember that joins between tables with mismatched partitions require special join techniques and can be slower.

Conclusion: There is no one-size-fits-all solution. It is important to identify the most effective approach for oneself.

4. Query Rewriting

Query performance can often be enhanced through query rewriting. Consider the following suggestions:

  • DISTINCT instead of GROUP BY depending on the number of different values
  • UNION ALL instead of UNION (getting rid of a sort step)
  • Splitting a skewed query into two parts: one handling the skewed values, the other dealing with the rest (see the sketch after this list)
  • Adding WHERE conditions to allow for partition elimination
  • Removing unreferenced columns and expressions from the select list (could help to cut joins)
  • Converting outer joins into inner joins
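
Here is a minimal sketch of the skew-splitting technique, assuming a single heavily skewed value of -1 in CustomerId; all names and the skewed value are illustrative:

-- Part 1: the non-skewed values distribute evenly across the AMPs
SELECT s.SalesId, c.CustomerName
FROM Sales s
INNER JOIN Customer c ON s.CustomerId = c.CustomerId
WHERE s.CustomerId <> -1
UNION ALL
-- Part 2: the skewed value alone; the Optimizer can choose a different plan for it
SELECT s.SalesId, c.CustomerName
FROM Sales s
INNER JOIN Customer c ON s.CustomerId = c.CustomerId
WHERE s.CustomerId = -1;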

Rewriting queries can enhance performance even when other techniques are unsuccessful.

Rewriting queries is an effective method for enhancing performance, but it typically requires understanding the business logic ("Can this left join be replaced by an inner join?"). A purely technical rewrite is often feasible, yet comprehending the query's business logic uncovers additional opportunities for optimization.

5. Physical Data Model Design

I hesitated to include the physical data model in this list because the physical database design can rarely be altered freely: numerous consumers, such as reporting tools and data marts, depend on it, and accommodating them all would require a redesign.

Improving the physical model remains one of the most impactful changes.

The most valuable advice I can offer:

Normalize your core model and denormalize in your data marts. Although some may disagree, my experience has shown that premature denormalization can lead to poor performance, particularly when no clear justification exists.

Inspect all table columns to determine whether the data can be decomposed further. Verify that join columns share identical data types and character sets. Avoid columns that pack several data components into one value, as they force joins on expressions; expression joins can cause the Optimizer to disregard statistics and skip primary index usage during the join.
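
A sketch contrasting a join on an expression over a packed column with a clean join on identically typed columns; all names are illustrative:

-- Bad: the join condition is an expression over a packed column
SELECT *
FROM Sales s
INNER JOIN Customer c ON SUBSTR(s.CustomerRegionKey, 1, 8) = c.CustomerKey;

-- Good: the key is stored in its own, identically typed column
SELECT *
FROM Sales s
INNER JOIN Customer c ON s.CustomerKey = c.CustomerKey;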

Define all primary key columns as NOT NULL and add appropriate default values. When storing codes, opt for the LATIN character set rather than UNICODE to avoid unnecessary space consumption.

Implement Multi Value compression on all tables to increase the number of rows fitting into each data block. As data blocks are the smallest unit transferred between disk and memory, having more rows per data block reduces disk IOs and improves performance.
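
A sketch combining these recommendations: NOT NULL with a default, a LATIN character set for codes, and multi-value compression on the most frequent values. The column and the compressed values are assumptions:

CREATE TABLE Customer
(
   CustomerId  INTEGER NOT NULL,
   CountryCode CHAR(2) CHARACTER SET LATIN NOT NULL DEFAULT 'AT'
               COMPRESS ('AT','DE','CH')
)
PRIMARY INDEX (CustomerId);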

Please note that the aforementioned advice is merely a general outline of potential solutions for repairing a defective data model.

6. Make Use of Teradata-Specific Features

Teradata offers several distinctive optimization opportunities:

  • Using MULTISET tables can decrease disk IOs
  • Use multi-statement requests, as they avoid the transient journal and enable block optimization
  • Use CLOB columns instead of VARCHAR if you seldom select the column; Teradata stores CLOBs in sub-tables
  • DELETE all rows instead of dropping and recreating tables
  • Use ALTER TABLE instead of INSERT…SELECT into a copy
  • Use MERGE INTO instead of INSERT & UPDATE (see the sketch after this list)
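
A minimal MERGE INTO sketch replacing a separate INSERT and UPDATE; table and column names are illustrative, and in Teradata the ON clause must match the target table's primary index:

MERGE INTO Customer AS tgt
USING CustomerStage AS src
   ON tgt.CustomerId = src.CustomerId
WHEN MATCHED THEN
   UPDATE SET CustomerName = src.CustomerName
WHEN NOT MATCHED THEN
   INSERT (CustomerId, CustomerName)
   VALUES (src.CustomerId, src.CustomerName);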

7. Real-Time Monitoring

Observing a running query helps identify the critical steps. Real-time monitoring is commonly done with Viewpoint, but since Viewpoint is sluggish, I personally prefer dbmon, a tool created by a developer at Teradata Austria.

SQL performance issues stem from either of the following:

  • Skewed steps
  • Stale or missing statistics that fool the Optimizer into wrong join decisions: product joins instead of merge joins, duplicating instead of rehashing

Here is how I monitor in real time:

I wait for a step with skewed input or nonsensical estimated row counts for a join, like when two spools with millions of rows are joined using a product join. My optimization efforts are focused on these types of steps.

If a step is skewed, I analyze the distribution of the join column values. If the estimations are inaccurate, I check whether the statistics are current.

Fixing statistical problems can be done swiftly, while skewed issues may prove to be more challenging. Query rewriting is typically a last resort unless an obvious and uncomplicated solution is available.

8. Comparison of Resource Usage

Measure resource usage before and after each optimization. Query runtimes alone are an unreliable measure.

Extract the relevant steps from the query log daily with a SQL query. Assign a unique QUERYBAND to each version of the query to differentiate them. Note that access to "DBC.DBQLOGTBL" requires SELECT privileges.

SET QUERY_BAND = 'Version=1;' FOR SESSION;

SELECT
   AMPCPUTIME,
   (FIRSTRESPTIME-STARTTIME DAY(2) TO SECOND(6)) AS RUNTIME,
   SPOOLUSAGE/1024**3 AS SPOOL_IN_GB,
   CAST(100-((AMPCPUTIME/(HASHAMP()+1))*100/NULLIFZERO(MAXAMPCPUTIME)) AS INTEGER) AS CPU_SKEW,
   MAXAMPCPUTIME*(HASHAMP()+1) AS CPU_IMPACT,
   AMPCPUTIME*1000/NULLIFZERO(TOTALIOCOUNT) AS LHR
FROM
   DBC.DBQLOGTBL
WHERE
   QUERYBAND = 'Version=1;';

The query returns:

  • The total CPU Usage
  • The Spool Space needed
  • The LHR (ratio between CPU and IO usage)
  • The CPU Skew
  • The Skew Impact on the CPU

The objective is to reduce CPU usage, spool space, and skew.

9. Tactical Workload

A tactical workload demands a specific skill set, without which even the best tuner will falter. I have witnessed entire projects flounder due to developers’ disregard for this crucial aspect.

Tactical workload requires a skill set distinct from that used for strategic workload.

I highly recommend reading this comprehensive post that explains all the details.

  • Collecting statistics on composite columns helps improve estimations for highly correlated columns.
    Still, multicolumn statistics limit the queries for which the Optimizer can use them.
    Since Teradata 14, the Optimizer can use the leading columns of multicolumn statistics.
    For example, if we have multicolumn statistics on (a, b) and a WHERE condition on column a, the Optimizer can use these statistics.
    If the WHERE condition is only on column b, they can't be used (see the sketch below).
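
A short sketch of this rule; columns a and b are illustrative:

COLLECT STATISTICS COLUMN (a, b) ON <Table>;

SELECT * FROM <Table> WHERE a = 1 AND b = 2;  -- statistics fully usable
SELECT * FROM <Table> WHERE a = 1;            -- usable since Teradata 14 (leading column)
SELECT * FROM <Table> WHERE b = 2;            -- not usable (non-leading column only)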
