Query tuning typically involves changing the way different objects are put together.
There is a second approach to achieving results faster when altering the SQL is not an option or has already been exhausted: exchange the objects you retrieve your data from. We accelerated many queries and achieved a net speed-up by introducing intermediate objects into a daily load job chain.
Here are some options for object-based query tuning that have worked well for us.
Usage of volatile tables
When your query contains subqueries that are evaluated several times for structurally equivalent row sets, consolidate these subqueries into one and define a volatile table with the result set. Large, skewed, or otherwise “bulky” data sets can be preselected and stored independently of well-running query parts. Reference the volatile table wherever the subset is needed. You can index and partition your volatile table independently of the source table's settings. Furthermore, any preparatory row operation, such as substrings or mathematical operations on source columns, can be materialized into the volatile table and thus become subject to indexing or partitioning.
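As a minimal sketch (the names sales_fact, vt_recent_sales, and the columns used are made up for illustration), a repeated subquery can be materialized once like this:

CREATE VOLATILE TABLE vt_recent_sales AS
(
  SELECT s.customer_id
       , SUBSTR(s.product_code, 1, 3) AS product_group   -- preparatory row operation materialized once
       , SUM(s.amount)                AS total_amount
  FROM   sales_fact s
  WHERE  s.sale_date >= CURRENT_DATE - 30
  GROUP  BY 1, 2
)
WITH DATA
PRIMARY INDEX (customer_id)                               -- chosen independently of the source table
ON COMMIT PRESERVE ROWS;

COLLECT STATISTICS COLUMN (customer_id) ON vt_recent_sales;

Subsequent query parts then join against vt_recent_sales instead of re-running the subquery in each place.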
Regular loads with layer tables
If you repeatedly retrieve a subset of data that is defined in a similar way, consider creating layer tables first. They are structurally identical copies of the source tables, holding only a fraction of the original data volume. For example, if you need to work on the most recently loaded rows of your source tables and this fraction is small compared to the entire table content, create a table that is identical to the source table and load that fraction into it first. You can re-index or repartition it if needed. Collect statistics on your layer table and adapt your SQL accordingly.
We create layer tables as normal physical tables instead of volatile ones because they are reused by many queries or stored procedures in a row, so more than one session may be involved.
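A sketch of such a layer build, assuming a source table stage_db.orders with a load_date column and a target sandbox_db.orders_layer (all names are illustrative):

CREATE TABLE sandbox_db.orders_layer AS stage_db.orders WITH NO DATA;   -- copy the structure only

INSERT INTO sandbox_db.orders_layer
SELECT *
FROM   stage_db.orders
WHERE  load_date = CURRENT_DATE;                                        -- only the freshly loaded fraction

COLLECT STATISTICS COLUMN (load_date) ON sandbox_db.orders_layer;

The downstream queries of the load chain then reference sandbox_db.orders_layer instead of stage_db.orders.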
We used this strategy to cut the average daily load time from more than 6 hours down to 5 hours. Note that this is a net change that already includes the extra runtime for building the layer tables and collecting statistics on them!
Tailored auxiliary tables
We call an auxiliary table any user-defined extra table that serves needs other than holding a row layer of data from the source tables.
When filtering data from large tables with LIKE or NOT LIKE conditions that contain many non-adjacent codes or value enumerations, you can create a volatile reference table with just these codes and transform the (NOT) LIKE search into a join condition on the volatile table. This way, ceteris paribus, you enable Teradata to narrow down to the right set of values faster.
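A minimal sketch, assuming a large table big_facts with a product_code column and a handful of three-character code prefixes to keep (all names are made up):

CREATE VOLATILE TABLE vt_code_filter
(
  code_prefix CHAR(3)
)
PRIMARY INDEX (code_prefix)
ON COMMIT PRESERVE ROWS;

INSERT INTO vt_code_filter VALUES ('A01');
INSERT INTO vt_code_filter VALUES ('B17');
INSERT INTO vt_code_filter VALUES ('C42');

-- Instead of: WHERE product_code LIKE 'A01%' OR product_code LIKE 'B17%' OR ...
SELECT f.*
FROM   big_facts f
JOIN   vt_code_filter c
  ON   SUBSTR(f.product_code, 1, 3) = c.code_prefix;

For a NOT LIKE enumeration, the join would become a NOT EXISTS check against the same reference table.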