The Teradata Fastload

In my career as a Teradata professional, I have seen many performance problems caused by the inappropriate use of fastloads. As we all know, each Teradata system limits the number of parallel fastloads allowed, so we should be careful in choosing how we get data into the Teradata Database.

I would never recommend using the Teradata FastLoad utility for loads of fewer than 100,000 rows. 100,000 rows is just a rule of thumb, but a good starting point for loading on Teradata.

There are several options for getting data into a Teradata database, each with its advantages and disadvantages. Usually, I use BTEQ loads if I load fewer than 100,000 rows, TPUMP for up to 500,000 rows, and FastLoad for more than 500,000 rows. Always pay attention to very wide rows, as they can be a reason to use FastLoad even if the number of rows points to a TPUMP or BTEQ load.

I remember one client who was loading almost every single table with FastLoad, even the empty ones. As a result, nightly batch loads were heavily delayed because load jobs had to wait for free FastLoad slots and were queued. After a redesign that switched all small tables (with an arbitrarily chosen limit of 100,000 rows) to TPUMP, load times decreased by about 50%!

In my experience, the easiest way to determine the load type is to query DBC.DBQLOGTBL. Select the rows where the column StatementType equals 'Fastload'. The column UtilityRowCount holds the number of rows that have been loaded. All loads with a UtilityRowCount of less than 100,000 rows are candidates for being changed to BTEQ or TPUMP.
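As a sketch, such a check could look like the query below. It uses the DBQL columns described above; the 100,000-row threshold is the rule of thumb from this article, and the query assumes DBQL logging is enabled on your system (your retention setup may require querying the history table instead):

```sql
-- Find FastLoad jobs that moved fewer than 100,000 rows
-- and are therefore candidates for BTEQ or TPUMP.
SELECT  QueryID,
        UserName,
        StartTime,
        UtilityRowCount
FROM    DBC.DBQLOGTBL
WHERE   StatementType = 'Fastload'
AND     UtilityRowCount < 100000
ORDER BY UtilityRowCount;
```

Grouping the result by UserName or by the loading job's account string can additionally show which batch streams are the worst offenders.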

Written by Roland Wenzlofsky on March 8, 2014.
