Teradata 1MB Rows for Strategic Queries

By Roland Wenzlofsky

October 4, 2017

From time to time, you may have encountered the following error when running an SQL SELECT statement:

3577 Row size or Sort Key size overflow

There have always been limits on the number of bytes a Teradata data row can hold. A long time ago (before Teradata Release 14.10), a 64-kilobyte limit was in place for all intermediate spool tables (such as the results of derived tables) and for the final result set.

With the introduction of large-cylinder systems, Teradata raised the limit for spool tables to 1 megabyte, but the row size of the final result set was still limited to 64 kilobytes. If I remember correctly, this improvement came with Teradata 14.10 (correct me if I am wrong).

While 64 kilobytes sounds like room for quite a large number of columns in the result set, wide UNICODE character columns can easily exceed this limit, since each UNICODE character occupies two bytes of storage.
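
As an illustration (a hypothetical table with made-up column sizes, not taken from any real system): just three UNICODE VARCHAR(15000) columns can account for roughly 90,000 bytes, well past the old 64-kilobyte response-row limit.

```sql
-- Hypothetical example: each UNICODE character is stored in 2 bytes, so
-- three VARCHAR(15000) CHARACTER SET UNICODE columns can occupy up to
-- 3 * 15000 * 2 = 90000 bytes -- far beyond the 64 KB response-row
-- limit of releases before Teradata 16.
CREATE TABLE demo_wide_rows
(
    pk        INTEGER NOT NULL,
    comment_1 VARCHAR(15000) CHARACTER SET UNICODE,
    comment_2 VARCHAR(15000) CHARACTER SET UNICODE,
    comment_3 VARCHAR(15000) CHARACTER SET UNICODE
)
PRIMARY INDEX (pk);

-- Selecting all three wide columns at once could fail on older releases
-- with "3577 Row size or Sort Key size overflow":
SELECT pk, comment_1, comment_2, comment_3
FROM demo_wide_rows;
```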

Of course, there are workarounds for such issues, such as limiting character columns to the LATIN character set whenever possible, or passing only the really needed columns to the outermost SELECT of a query. This keeps the final result set below 64 kilobytes, while the derived tables of the query can still use rows of up to 1 megabyte.
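
Both workarounds can be sketched as follows (hypothetical table and column names; TRANSLATE ... USING UNICODE_TO_LATIN is standard Teradata syntax, but verify that your data actually fits into LATIN before converting):

```sql
-- Workaround 1: convert UNICODE columns to LATIN (1 byte per character)
-- where the data allows it, halving their contribution to the row size.
SELECT pk,
       CAST(TRANSLATE(comment_1 USING UNICODE_TO_LATIN)
            AS VARCHAR(15000) CHARACTER SET LATIN) AS comment_1_latin
FROM demo_wide_rows;

-- Workaround 2: let the derived table work with wide (up to 1 MB) spool
-- rows, but project only the columns that are really needed to the
-- outer SELECT, so the final response rows stay below 64 KB.
SELECT t.pk, t.comment_1
FROM (
    SELECT pk, comment_1, comment_2, comment_3  -- wide spool rows
    FROM demo_wide_rows
) AS t;
```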

With Teradata Release 16, these workarounds are no longer needed: in addition to the spool rows, the response rows of the final result set can now also be up to 1 megabyte.

Here’s some more good news: at the same time, the maximum row size for many database objects has been increased to 1 megabyte. These are the most important ones:

  • Base table rows
  • Global temporary table rows
  • Volatile table rows
  • Queue table rows
  • Columnar rows
  • USI and NUSI rows
  • Join index rows
  • Hash index rows
  • Stored procedure output rows

1-megabyte rows are enabled by default on Teradata 16 systems with large cylinders.

Of course, the most important advantage is that 1-megabyte rows can store more and wider columns. Depending on your database design strategies, this can help you reduce the number of joins (by getting rid of the vertical splitting of tables) and may therefore enhance your query performance.
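
For instance (a hypothetical pair of tables, invented for illustration), two vertically split tables that previously had to be joined on every access can be merged into a single wide table once 1-megabyte rows are available:

```sql
-- Before: wide attributes were split off into a second table to stay
-- below the old row-size limit; every query needs a join.
SELECT c.customer_id, c.customer_name, x.long_description
FROM customer c
JOIN customer_ext x
  ON x.customer_id = c.customer_id;

-- After: with 1 MB rows, both column sets fit into one table and the
-- join disappears.
SELECT customer_id, customer_name, long_description
FROM customer_merged;
```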

But of course, there is “no such thing as a free lunch”. Here are the main disadvantages of 1 Megabyte rows:

  • Obviously, they consume more disk space
  • Bigger rows mean more data has to be moved between the storage devices and the CPUs, which can degrade performance
  • The transient journal and the write-ahead log (WAL) require more space

If you have any questions about all this, please ask in the comments! I’ll be paying close attention and answering as many as I can. Thank you for reading. Whatever this blog has become, I owe it all to you.


Roland Wenzlofsky

Roland Wenzlofsky is an experienced freelance Teradata consultant and performance trainer. Born in Austria's capital, Vienna, he has been building and tuning some of the largest Teradata data warehouses in the European financial and telecommunications sectors for more than 20 years. He has played all the roles of developer, designer, business analyst, and project manager. He therefore knows like no other the problems and obstacles that make many data warehouse projects fail, and all the tricks and tips that will help you succeed.

