Teradata Statistics for High-Performance Queries

Roland Wenzlofsky

June 25, 2015


Introduction to Teradata Statistics Design for High Performance

Today I attended an excellent presentation about the Teradata statistics improvements presented by Thomas Mechtler, a very experienced Senior Consultant at Teradata Austria. In this article, I carved out some of the major points I wanted to share with you.

Teradata statistics maintenance used to be as tricky as it was crucial, but this changed greatly with the introduction of Teradata 14.10. It's not tricky anymore (but still important):

New database features and tools simplify the statistics collection process, making the life of the database administrator easier.

Some of the central questions about statistics usually raised are:

  • Which statistics should be collected?
  • Where in the ETL/ELT process should they be collected?
  • How often should they be collected?
  • Which statistics are worthless because they are never used?

The problem with the above questions is that up to Teradata 13.10, they could not be answered easily.

For example, when we had to find the best place in the ETL process, there was no one-size-fits-all answer:

What if, for instance, several transformation steps are executed one after the other, each populating and reading the same target table? Would you collect statistics after each transformation step to support every step with correct statistics (accepting the additional resource consumption)? Or would you wait until the last transformation step is finished (accepting possibly sub-optimal execution plans in the meantime)?

The situation is similar when it comes to the frequency of statistics recollection:

While some of your tables might change so fast that a daily collection is required, other tables might require only a weekly recollection. How would you handle this situation? By running two recollection processes, one weekly and one daily? Or would you accept inaccurate statistics and only collect statistics once a week? Or even take the resource consumption overhead by running a daily recollection process?

Whatever your decision was, it came with advantages and disadvantages, namely a tradeoff between resource usage and estimation accuracy.

Luckily, the situation improved a lot with Teradata 14.10:

Teradata Statistics Improvements on 14.10

Teradata 14.10 introduces several new features that solve, or at least ease, the problems mentioned above. These improvements help us to:

  1. Identify unused statistics
  2. Skip the collection of statistics if data demographics are unchanged or have changed by less than a certain threshold
  3. Identify missing statistics that the Optimizer would need to build a better execution plan

Some technical requirements must be fulfilled before we can use the additional functionality. If we want to skip statistics recollection for unchanged data demographics, object usage counts (OUC) have to be activated:
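The required statement could look like the following sketch (using the database name "TheDatabase" referenced in the explanation; adjust it to your environment):

```sql
-- Enable object usage count (OUC) logging for all objects
-- in the database "TheDatabase"
BEGIN QUERY LOGGING WITH USECOUNT ON TheDatabase;
```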


The previous statement turns on object usage count logging for all database objects in the database "TheDatabase." Statistics are database objects like any other object, such as tables or views, and their usage is counted in the same way.

Once OUC is activated, each object access (or, most interesting for us, each access to a statistic) is counted in the DBC table DBC.ObjectUsage.

DBC.ObjectUsage and some other DBC tables offer some valuable information that helps us to identify unused statistics:

  • When was OUC activated for the considered statistics?
  • When were the statistics added?
  • When were the statistics last used by the Optimizer (probably the most important question)?
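Under the assumption that the DBC view DBC.StatsV exposes the usual timestamp and access-count columns (verify the exact names against the Data Dictionary documentation of your release), unused statistics could be listed with a sketch like this:

```sql
SELECT  DatabaseName,
        TableName,
        StatsName,
        CreateTimeStamp,       -- when the statistics were added
        LastCollectTimeStamp,  -- last recollection
        LastAccessTimeStamp,   -- last use by the Optimizer (requires OUC)
        AccessCount
FROM    DBC.StatsV
WHERE   DatabaseName = 'TheDatabase'
AND     LastAccessTimeStamp IS NULL   -- never used since OUC was enabled
ORDER BY TableName;
```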

I showed you how to detect unused statistics, but what about the missing ones (those statistics the Optimizer would need to build its execution plan)?

Two additional logging options have been implemented in Teradata 14.10, giving us precisely this information:
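Enabling both options could look like this (a sketch; the scope, here all users, should be narrowed to what you actually need):

```sql
-- Log missing/used statistics (STATSUSAGE) and detailed
-- per-step estimates (XMLPLAN) for all users
BEGIN QUERY LOGGING WITH STATSUSAGE, XMLPLAN ON ALL;
```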


XMLPLAN gives you information about statistics the Optimizer would have needed but that were not available. This information is available for each step of your query. Further, it outputs the number of estimated rows and the number of actual rows. Unfortunately, as the name suggests, the output is in XML format and requires coding and parsing.

STATSUSAGE is easier to use, being queryable like a regular table, but less detailed than the XMLPLAN output. Still, it's much better for SQL tuning than the traditional approach, which is:
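The traditional approach referred to here is presumably the diagnostic session setting that makes every EXPLAIN output end with a list of statistics the Optimizer would like to have (the table name below is a placeholder):

```sql
DIAGNOSTIC HELPSTATS ON FOR SESSION;

-- Each subsequent EXPLAIN now closes with a list of
-- recommended statistics
EXPLAIN SELECT * FROM TheDatabase.TheTable;
```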


Is it time to recollect? Lean back and let Teradata decide for you!

While the information mentioned in the previous section helps us find unused statistics, this section covers the remaining problems raised at the beginning of this article:

When and how often should you recollect statistics?

I have good news: Teradata 14.10 implements an “autopilot mode.” Predefined measures and threshold levels can trigger statistics recollection – but only if OUC is enabled!

Teradata uses the UDI counts (Updates, Deletes, Inserts), which are part of OUC, together with historical statistics histograms to decide whether statistics are stale. UDI counts track how many rows have been inserted and deleted, and collect information about updates at the column level.

Consequently, you can execute the collect statistics statements as often as you like without overloading your system:

The actual collection process is only activated if Teradata is convinced that the statistics are stale!
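A simple daily job could therefore just refresh all statistics defined on a table and leave the skip decision to Teradata (database and table names are placeholders):

```sql
-- Recollects all statistics defined on the table; with
-- thresholds in place, Teradata skips the collection if the
-- data demographics have not changed enough
COLLECT STATISTICS ON TheDatabase.TheTable;
```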

No more compromises. The two central questions, “When should I recollect? How often should I recollect?” just faded away…

In "autopilot mode" (the default on Teradata 14.10 with OUC enabled), the impact of design flaws in your statistics recollection process is minimized.

You should not drop statistics before recollecting them, as you would lose all historical statistics information.

Incidentally, you can define threshold levels manually, but most probably, you should let Teradata do this job for you:
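A manually defined threshold could be sketched like this (column and table names are placeholders; the values match the example described in the following paragraph):

```sql
COLLECT STATISTICS
  USING THRESHOLD 5 PERCENT AND THRESHOLD 14 DAYS
  COLUMN (TheColumn)
ON TheDatabase.TheTable;
```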


The above example would trigger a recollection only when at least 14 days have passed and the table cardinality has changed by at least 5% (the Optimizer uses UDI counts and historical statistics histograms for this analysis).

A few last words: Most of you will be confronted with a historically grown statistics framework. With Teradata 14.10, it's time to move to the new features. Don't throw away what you have overnight; instead, think about how the new process could gradually supersede your existing one. Fading out your current solution may take some time, but it's worth it.

  • Artemiy Kozyr says:

    Hey Roland. Thanks for the article!

    Do you think it is possible to view use counts for columns of particular tables and databases using DBC.ObjectUsage?

    I would like to use it to determine the most frequently accessed columns to improve query performance. It might be useful for reviewing PI / SI choices.

    Is it counting SELECT statements or UDI only? Are these user-level or system-level statistics?

    It might be very useful. I might test it in a couple of days.

    Thank you!

