Innovation generates excitement in IT markets. Parallel computing itself is no longer groundbreaking, but Hadoop has made it accessible to a much broader audience.

The marketing hype surrounding Hadoop often obscures where its real value lies.

Big data is a buzzword that has caused plenty of confusion and many incorrect claims. Some people predict that Hadoop and MapReduce will replace the traditional Data Warehouse; I have my doubts about that.

Technically, Teradata and Hadoop have a lot in common: both are parallel systems built on a shared-nothing architecture.

Where Hadoop shines is in its virtually unlimited scalability and its ability to collect and process vast quantities of data via HDFS and MapReduce.

In a Teradata Data Warehouse, the bottleneck often sits in front of the database. Adding nodes scales Teradata’s ability to load data into the database, but the big data still has to be gathered, stored, and staged before it gets there, and that part of the pipeline is constrained by network bandwidth, ETL servers, and storage devices. The Hadoop framework is a viable way to relieve exactly these constraints.

Hadoop enhances the value of your Teradata Data Warehouse by gathering and processing vast amounts of data before its ingestion into the warehouse.

HDFS is a cost-effective way to keep raw data that would otherwise remain online in unconsolidated form, be archived to tape, or simply be deleted after loading.
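To make this concrete, here is a minimal sketch of how the raw files of a finished load job could be archived to HDFS instead of being deleted. The directory paths are hypothetical, and the script simply shells out to the standard `hdfs dfs` command-line tools, which have to be available on the staging host.

```python
import subprocess
from datetime import date

# Hypothetical locations: raw extract files of today's load job and an HDFS archive area.
LOCAL_STAGING_DIR = "/data/staging/sales/current"          # assumed local staging path
HDFS_ARCHIVE_DIR = f"/archive/sales/{date.today():%Y/%m/%d}"

def archive_to_hdfs(local_dir: str, hdfs_dir: str) -> None:
    """Copy a finished staging directory into HDFS instead of deleting it."""
    # Create the target directory in HDFS (no-op if it already exists).
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir], check=True)
    # Upload the raw files; -f overwrites an earlier copy of the same day.
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_dir, hdfs_dir], check=True)

if __name__ == "__main__":
    archive_to_hdfs(LOCAL_STAGING_DIR, HDFS_ARCHIVE_DIR)
```

Because HDFS replicates its blocks across commodity nodes, keeping the raw files around this way is cheap and still leaves them available for reprocessing later.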

Hadoop and MapReduce can also serve as a powerful parallel ETL layer that preprocesses incoming data before it is handed over for further processing downstream.
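As an illustration of this ETL role, the following sketch uses Hadoop Streaming with two small Python scripts to pre-aggregate raw, tab-separated click records (customer id, timestamp, revenue) per customer before the result is loaded into the warehouse. The input layout and field names are assumptions made for the example; the scripts read from stdin and write to stdout, which is all Hadoop Streaming requires.

```python
#!/usr/bin/env python3
"""mapper.py - emit (customer_id, revenue) pairs from raw, tab-separated click records."""
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 3:              # skip malformed records instead of failing the job
        continue
    customer_id, _timestamp, revenue = fields[0], fields[1], fields[2]
    print(f"{customer_id}\t{revenue}")
```

```python
#!/usr/bin/env python3
"""reducer.py - sum revenue per customer; input arrives sorted by key from the shuffle phase."""
import sys

current_key, total = None, 0.0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t", 1)
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{total:.2f}")   # one pre-aggregated row per customer
        current_key, total = key, 0.0
    total += float(value)
if current_key is not None:
    print(f"{current_key}\t{total:.2f}")
```

The aggregated, tab-separated output in HDFS can then be loaded into Teradata with the usual load utilities. The job would be launched with the standard Hadoop Streaming jar, roughly: hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /raw/clicks -output /staging/clicks_agg.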

In a standard ETL server environment, the processing options for incoming data are limited to “SQL-like” operations. With the Hadoop/MapReduce approach, by contrast, you can apply practically any kind of processing to the incoming data.
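A small example of processing that is awkward in a purely “SQL-like” ETL environment but trivial in a mapper: parsing free-form web-server log lines with a regular expression and classifying the user-agent string. The log format and the crude browser classification below are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""ua_mapper.py - extract a browser family from raw Apache-style access log lines."""
import re
import sys

# Assumed combined log format: host ident user [time] "request" status size "referer" "user-agent"
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \d+ "[^"]*" "([^"]*)"')

def browser_family(user_agent: str) -> str:
    """Very rough user-agent classification, purely illustrative."""
    ua = user_agent.lower()
    if "firefox" in ua:
        return "firefox"
    if "chrome" in ua:
        return "chrome"
    if "safari" in ua:
        return "safari"
    return "other"

for line in sys.stdin:
    match = LOG_PATTERN.match(line)
    if not match:
        continue                      # silently drop lines that do not match the format
    _host, status, user_agent = match.groups()
    # Emit one record per hit: browser family, HTTP status, and a count of 1 for aggregation.
    print(f"{browser_family(user_agent)}\t{status}\t1")
```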

Once the initial excitement has settled, I believe these are the ways Hadoop will find its place in your data warehousing landscape.

Meanwhile, all the leading database vendors are busy incorporating Hadoop features into their relational database management systems (RDBMS). Whether this integrated approach works in the long run remains to be seen; personally, I remain unconvinced.

Some attempts to join the big data trend lack credibility. Teradata’s implementation, SQL-H, covers only a fraction of a fully equipped Hadoop infrastructure. Parallelism is available through parallel pipelines to HDFS, but advanced features such as pushing joins or aggregations down to Hadoop are not, which costs performance.

In such a situation, developers will resort to workarounds. Just think of how ETL tools are being abused today.

If the ETL server cannot deliver the required performance for a particular transformation, the mapping logic ends up being pushed into pre-aggregation steps inside the RDBMS. This breaks the data lineage, yet it is common practice today, and getting rid of such implementations later can be difficult.

Teradata’s SQL-H syntax is another weak spot: it is inconvenient, it is neither ANSI SQL nor traditional Teradata SQL, and it will remain unsupported by many third-party tools for a long time to come.

I am optimistic about the potential of Hadoop and MapReduce, but I fear that some of today’s implementations will deliver subpar results and eventually have to be replaced by a better alternative. Unfortunately, that could mean significant redesign work for clients to exploit the capabilities fully.

Despite all the attention big data receives, many companies still seem hesitant to adopt these new technologies, and in my view the database vendors should show some restraint as well. However impressive the capacity to handle vast quantities of data may be, countless clients are still struggling with the basics of data warehousing. I remember when “active data warehousing” was hailed as a major advancement.

  • Hi. Yes, after 4 years the picture is still the same. My clients are big companies in the finance and telecommunication markets. All of them have been quite enthusiastic about big data, yet none of them has implemented any useful application in all these years. Structured data stored in a traditional Data Warehouse is still the main source of information. The most they achieved was treating Hadoop as a kind of cheap mass data storage… very disappointing and eye-opening.

  • Aravindh K says:

    So this Teradata and Hadoop article was written in 2014. It’s 2018 now! Is your opinion still the same?

    I also work as a Teradata developer and am trying to understand whether I need to start learning Hadoop.

    Thanks for helping.

  • {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

    You might also like

    >