The Teradata Node Review

By Roland Wenzlofsky

May 8, 2014

What is a Teradata Node?

Teradata Nodes are Linux systems (several nodes are packed together into one Teradata cabinet) with several physical multi-core CPUs and plenty of memory. On top of the Linux operating system runs the Parallel Database Extensions (PDE) software.

On each node, the primary processes of a Teradata system run (see our article about the Teradata high-level architecture):

– The Parsing Engines
– The AMPs
– Two redundant BYNETs for the communication between AMPs and Parsing Engines.

As we already know, a lot of parallelism is built into a single node, as the workload is distributed evenly across all AMPs.

Scalability is one of the Teradata architecture’s main benefits; multiple nodes can be interconnected to form an even bigger system.

In theory, doubling the number of nodes would double performance. In real life, this is a fairy tale.
You will quickly spot the problem with this theory of linear scalability when you consider that it would require perfect parallelism in your workload.
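Why imperfect parallelism breaks linear scaling can be sketched with Amdahl’s law (not part of the original article, but the standard way to reason about it): if any fraction of the work is serial, adding nodes yields diminishing returns.

```python
def speedup(n_nodes, parallel_fraction):
    """Amdahl's law: overall speedup on n_nodes when only
    parallel_fraction of the workload actually scales out."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_nodes)

# Perfect parallelism: doubling nodes doubles performance.
print(speedup(2, 1.0))                  # 2.0

# A 95% parallel workload: 2 nodes give only ~1.9x ...
print(round(speedup(2, 0.95), 2))       # 1.9

# ... and 100 nodes give ~17x, nowhere near 100x.
print(round(speedup(100, 0.95), 2))     # 16.81
```

Even a small serial fraction caps the benefit of adding nodes, which is exactly why the "doubling nodes doubles performance" claim is a fairy tale in practice.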

But as all of us know from practice, we are constantly fighting a skewed (unevenly distributed) workload. We could have hundreds of nodes; they will not improve performance if our SQL statement’s workload ends up on one AMP. Keep this in mind when adding nodes to your system to avoid disappointment.
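A minimal toy model of why skew hurts (a hypothetical sketch, not Teradata’s actual row-hash algorithm): rows are assigned to AMPs by hashing the primary index value, so a low-cardinality or dominant key value sends all its rows to a single AMP.

```python
from collections import Counter

N_AMPS = 4  # toy system; real systems have many more AMPs

def amp_for_row(primary_index_value):
    # Stand-in for Teradata's row hash: hash the primary index
    # value and map it onto one of the AMPs.
    return hash(primary_index_value) % N_AMPS

# Well-chosen primary index (unique values):
# rows spread evenly across all AMPs.
even = Counter(amp_for_row(i) for i in range(1000))

# Skewed primary index (one dominant value):
# every row lands on the same AMP, which then
# dictates the runtime of the whole query.
skewed = Counter(amp_for_row("UNKNOWN") for _ in range(1000))

print(even)    # roughly 250 rows per AMP
print(skewed)  # all 1000 rows on a single AMP
```

The slowest AMP determines the elapsed time of a query, so in the skewed case the other AMPs sit idle no matter how many nodes you add.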

Furthermore, from a fault-tolerance point of view, we are limited in the number of nodes; such architectures are not inherently fault-tolerant, so a Teradata system scaling up to thousands of nodes is not feasible. Concepts like hot standby nodes may relieve the situation slightly but are a costly way of buying some fault tolerance.

In the terminology of parallel systems, a single node is called a symmetric multiprocessing (SMP) node. Any system containing at least two nodes is called a massively parallel processing (MPP) system.

While the communication network (BYNET) within one node is implemented in software, the network between nodes obviously has to be implemented in hardware. Still, the purpose is the same: allowing AMPs and Parsing Engines to communicate with each other, even across different nodes.

For performance and fault tolerance reasons, there are always two BYNETs available.

As long as both networks operate without errors, they are used simultaneously to increase throughput. If one of the networks fails, the other serves as a backup, and the system continues its operation. Only the failure of both networks would make the Teradata system inoperative.

Some years ago, the BYNET was one of Teradata’s significant advantages, as it took over the tasks of sorting and merging data, relieving the CPU. Today, with the availability of multi-core processors, this benefit is probably no longer significant. The change may be related to the switch from the proprietary BYNET to InfiniBand as the new backbone for data transmission.

  • Hi Falcon. Thanks. I added the link.

    Regarding your question: Yes, each AMP has its own working memory. The so-called FSG-cache of each AMP is used to hold the data blocks read from disk.

  • Roland – Thanks for an informative article. I have 2 comments/questions:

    1. Can you also include the link in the article to ‘Teradata high-level architecture’ that you have referred to in your second paragraph?

    2. How is TD’s RAM organized? Does each AMP have its own memory in addition to dedicated disk space? If not, can TD still be considered a shared-nothing architecture RDBMS?
