Teradata Performance Optimization – Part 3 (Fixing the Broken Data Model)

Cost-cutting, unfortunately, leads to the unpleasant situation that many clients save on the wrong side, starting Data Warehouse projects that they never properly finish.

As a Teradata Performance Optimization Analyst, you will probably be confronted with scenarios where there is no maintained data model at all, unclear business specifications, and unyielding, incomplete mappings. You can consider yourself lucky for every piece of information you can squeeze out of your customer’s working environment.

In my experience, most of the time you are not put in a position of overall responsibility for performance. This task is unfortunately often deemed an exclusively technical one.

However, although many performance problems can be fixed with purely technical expertise, you will probably end up with a lot of workarounds that never correct the root causes of your performance problems.

Although such a purely technical approach may be applied for a very long time, adding up the costs at the end of the day will leave you in shock! Unfortunately, this head-in-the-sand policy is daily routine in many companies.

As a performance specialist, you have to be business analyst, data modeler, and developer at once. I think this is a very important insight: performance optimization often evolves into fixing the broken data model.

It is your task to make up for the past failures in all these areas of expertise. Try to contact the people who were involved in, or responsible for, these roles at the client in the past.

Sad to say, you will often be confronted with uncooperative behavior, as people tend to defend their petty areas of expertise. At the end of the day, you are questioning their past work results. You are on a very delicate mission, and strong management support would probably make your life easier.

Fixing a poorly designed data model is like time travel. You have to go back to the start of the project, get to know the original business requirements, and question how and why they were transformed into the existing data model. You have to develop the habit of questioning all past decisions.

Most of the time it was a lack of budget, a wrong assignment of people to roles, a lack of time, or simply missing experience that turned the project into a big mess and a failed state. Still, in my opinion, the most outstanding cause is over-specialization of project members. Over-specialization leads to the evaporation of responsibilities: everybody shifts problems back and forth, and a lot of resources are wasted finding the responsible person until a problem is finally solved.

One approach I prefer, if I know that only a redesign of the data model can definitely solve the performance issues, is to create a small prototype demonstrating the improvements.

I would take a manageable subject area and redesign the whole chain, as in the sketch below. Using such a prototype as a communication tool can make the difference between getting the chance to fix the performance problem and just getting an answer like “would be nice, but we don’t have the budget”. The more tangible your approach, the better.
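To make this concrete, here is a minimal sketch of what such a prototype might look like in Teradata SQL. All database, table, and column names are hypothetical, and the assumed root cause is a skewed primary index combined with missing partitioning; the actual redesign will of course depend on the model at hand.

```sql
-- Hypothetical example: the original table was created with a poorly
-- chosen primary index (few distinct values -> skewed data distribution)
-- and oversized character columns.
CREATE TABLE sandbox.sales_fact_old
(
   sale_id      DECIMAL(18,0),
   store_code   VARCHAR(100),
   customer_id  DECIMAL(18,0),
   sale_amount  DECIMAL(18,2),
   sale_date    DATE
) PRIMARY INDEX (store_code);   -- few distinct stores -> heavy skew

-- Prototype redesign: distribute rows evenly on the high-cardinality
-- key and partition by date to match the typical reporting filters.
CREATE TABLE sandbox.sales_fact_new
(
   sale_id      DECIMAL(18,0) NOT NULL,
   store_code   CHAR(10)      NOT NULL,
   customer_id  DECIMAL(18,0) NOT NULL,
   sale_amount  DECIMAL(18,2),
   sale_date    DATE          NOT NULL
) PRIMARY INDEX (sale_id)
  PARTITION BY RANGE_N (sale_date BETWEEN DATE '2010-01-01'
                        AND DATE '2020-12-31' EACH INTERVAL '1' MONTH);

-- Populate the prototype so the reports can be run against both models.
INSERT INTO sandbox.sales_fact_new
SELECT sale_id, store_code, customer_id, sale_amount, sale_date
FROM   sandbox.sales_fact_old;
```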

As always, success has to be measurable. Reporting is often at the end of the chain, and making some reports perform better on top of your prototype is a good starting point for demonstrating your expertise in Teradata performance optimization; a sketch of how to quantify the improvement follows below.
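Assuming query logging (DBQL) is enabled and you are allowed to read DBC.DBQLogTbl, a before/after comparison can be sketched like this. The query band name 'ReportVersion' is a hypothetical convention that the report jobs would have to set themselves:

```sql
-- Compare resource usage of the report queries running against the old
-- and the new (prototype) model. The jobs are assumed to tag themselves
-- with SET QUERY_BAND = 'ReportVersion=old;' (or 'new') FOR SESSION.
SELECT
    GetQueryBandValue(QueryBand, 0, 'ReportVersion') AS report_version,
    COUNT(*)          AS query_cnt,
    SUM(AMPCPUTime)   AS total_cpu_seconds,
    SUM(TotalIOCount) AS total_logical_ios
FROM  DBC.DBQLogTbl
WHERE CAST(StartTime AS DATE) >= CURRENT_DATE - 7
GROUP BY 1;
```

Capturing these numbers before and after the redesign turns a technical opinion into a quantified argument that management can act on.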

I hope the main message of this article is clear:

These days, being just a highly specialized developer is not enough. As a performance specialist, you have to understand the entire data warehouse life cycle.

Roland Wenzlofsky

Roland Wenzlofsky is a graduate computer scientist and Data Warehouse professional who has worked with the Teradata database system for more than 15 years. He is experienced in banking and telecommunications, with a strong focus on performance optimization.
