
To understand the relationship between the Teradata parsing engine (PE) and overall system performance, it is essential to know which tasks the parsing engine performs in the Teradata system architecture.

Aside from parsing the statement, optimizing the SQL, creating the processing steps, and dispatching them to the AMPs, the Teradata parsing engine also checks access rights.

Without going into detail about the dependencies between roles and access rights on a Teradata system, to solve our problem it is enough to know that single-object permissions are stored in the table DBC.AccessRights.

For example, whenever you create a table, a set of permissions is assigned to your user (the one you used to log into the system):

CREATE TABLE <DATABASE>.<TABLE> (
    PK INTEGER NOT NULL
) PRIMARY INDEX (PK);

For convenience, we will use the view DBC.AllRights instead of DBC.AccessRights to verify the above statement. This view exposes object names instead of the object identifiers Teradata uses internally, making it easier to read:

SELECT
    UserName, DatabaseName, TableName, AccessRight, GrantorName, CreatorName
FROM
    DBC.AllRights
WHERE
    DatabaseName = '<DATABASE>' AND TableName = '<TABLE>';

UserName  DatabaseName  TableName  AccessRight  GrantorName  CreatorName
<USER>    <DATABASE>    <TABLE>    ST           <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    DG           <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    RS           <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    U            <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    DP           <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    RF           <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    IX           <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    R            <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    I            <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    D            <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    DT           <USER>       <USER>
<USER>    <DATABASE>    <TABLE>    CG           <USER>       <USER>

As you can see from the above result set, 12 separate permission records are stored in the DBC.AccessRights table after creating a table, covering rights such as update, retrieve, delete, restore, drop trigger, and so on.

The exact number of permission records stored depends on the Teradata release and possibly other parameters.

Although the grants may often be available from cache, whenever the required permissions cannot be resolved by a cache hit, the parsing engine has to look up the access rights in the DBC.AccessRights table (see the abbreviated DDL below):

CREATE SET TABLE DBC.AccessRights
(
    UserId BYTE(4) NOT NULL,
    DatabaseId BYTE(4) NOT NULL,
    ... -- further columns omitted
) PRIMARY INDEX ( UserId, DatabaseId );

As in most other DBC tables, hashing is used to distribute the table rows across all AMPs, and skew can become a problem (the primary index of DBC.AccessRights is (UserId, DatabaseId)).

In your daily batch process, it could happen, for example, that an enormous number of tables is created in the staging area by exactly one batch user, resulting in massive skew in the DBC.AccessRights table.
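To check whether DBC.AccessRights is actually skewed on your system, you could count its rows per AMP with Teradata's standard hashing functions. This is a sketch; it assumes you have SELECT rights on DBC.AccessRights:

```sql
-- Count DBC.AccessRights rows per AMP to detect skew.
-- HASHROW/HASHBUCKET/HASHAMP map a primary index value to the AMP
-- that owns the row, so uneven counts indicate skew.
SELECT
    HASHAMP(HASHBUCKET(HASHROW(UserId, DatabaseId))) AS AMPNo,
    COUNT(*)                                         AS RowsOnAMP
FROM DBC.AccessRights
GROUP BY 1
ORDER BY RowsOnAMP DESC;
```

If a handful of AMPs hold a large share of the rows, one user (or a few users) dominates the table.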

As the parsing engine needs to verify user permissions, parsing times will increase, slowing down the overall system performance.

How can this problem be mitigated?

The user creating an object implicitly owns all access rights on it by definition; the records written to DBC.AccessRights are therefore redundant.
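Before cleaning up, it can be useful to gauge how many such rows exist. The following sketch uses DBC.AllRights again; the count is a rough upper bound, since the cleanup query below applies additional filters:

```sql
-- Rows where a user granted a right to himself (UserName = GrantorName)
-- are candidates for removal.
SELECT COUNT(*) AS RedundantRights
FROM DBC.AllRights
WHERE UserName = GrantorName;
```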

We can remove these records with the SQL approach below, which generates the required REVOKE statements:

SELECT
    'REVOKE ALL ON ' || TRIM(a.DatabaseName) || '.' || TRIM(a.TableName)
    || ' FROM ' || TRIM(a.UserName) || ';' (TITLE '')
FROM DBC.AllRights a
INNER JOIN
    DBC.Tables b
ON
    a.DatabaseName = b.DatabaseName
AND a.TableName = b.TableName
WHERE a.GrantorName = '<USER>'
AND   a.DatabaseName
IN    (
        SELECT d.DatabaseName
        FROM   DBC.Dbase d
        WHERE  d.RowType = 'D'
      )
AND a.UserName = a.GrantorName
AND a.AccessRight NOT IN ('IX','RF')
AND b.TableKind = 'T'
GROUP BY 1;

Whenever the UserName equals the GrantorName, the record can be deleted from the DBC.AccessRights table.

After cleaning up your DBC.AccessRights table, you should experience faster parsing times, which you can easily verify in DBC.DBQLOGTBL.
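One way to verify the effect is to compare average parser CPU time per day before and after the cleanup. The sketch below assumes query logging (DBQL) is enabled and that your release logs ParserCPUTime in DBC.DBQLogTbl; '<USER>' is a placeholder:

```sql
-- Average parser CPU seconds per day for one user.
SELECT
    CAST(StartTime AS DATE) AS LogDate,
    AVG(ParserCPUTime)      AS AvgParserCPU
FROM DBC.DBQLogTbl
WHERE UserName = '<USER>'
GROUP BY 1
ORDER BY 1;
```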

Just be aware that each DROP and CREATE of a table adds the permission records to DBC.AccessRights again; it is therefore always a good idea to DELETE the content of your target tables instead.
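In a staging load, the difference could look like this (a sketch; database and table names are placeholders):

```sql
-- Avoid: re-creating the table re-inserts its permission records
-- into DBC.AccessRights.
DROP TABLE <DATABASE>.<TABLE>;
CREATE TABLE <DATABASE>.<TABLE> (
    PK INTEGER NOT NULL
) PRIMARY INDEX (PK);

-- Prefer: emptying the table leaves DBC.AccessRights untouched.
DELETE FROM <DATABASE>.<TABLE>;
```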


Roland Wenzlofsky

Roland Wenzlofsky is an experienced freelance Teradata Consultant & Performance Trainer. Born in Austria's capital Vienna, he has been building and tuning some of the largest Teradata data warehouses in the European financial and telecommunications sectors for more than 20 years. He has played all the roles of developer, designer, business analyst, and project manager. Therefore, he knows like no other the problems and obstacles that make many data warehouse projects fail, and all the tricks and tips that will help you succeed.

  • Good post! Our table is skewed 99%. I guess the real question is: why is Teradata inserting these redundant rows, and why is such a bad PI in use? (In Teradata 15 they changed it to just UserId and partitioned by the database id.)

    PRIMARY INDEX ( UserId )
    PARTITION BY ( RANGE_N((ID2BIGINT(DatabaseId)) MOD 1073741824 BETWEEN 0 AND 1073741823 EACH 1),
    RANGE_N(ID2BIGINT(TVMId) BETWEEN 0 AND 4294967295 EACH 1) ADD 2 );
