Use TOAD

The TOAD logo

After the first part of the guidelines and standards I’d like to take a little interlude and introduce TOAD.

Whenever I start a new data warehouse or database project, I warmly recommend that the client purchase a handful of TOAD licences for the development team. Sometimes even the free version will do the job.

For me, TOAD is an indispensable tool for developing and debugging stored procedures, SQL statements, and database objects. Compared to TOAD, the built-in tools of the database vendors like SQL Server Management Studio (SSMS) or Oracle SQL Developer appear somewhat ridiculous.

There are versions for many database systems like Oracle, SQL Server, DB2, MySQL, SAP, and Hadoop, and there is also an agile and lively community.

Just my two cents for the weekend. And, BTW, I’m in no way associated with Quest. I’ve just been a happy and satisfied user for many years.

The ETL process – Part 2 – Guidelines & Standards

Guidelines lead the way

After achieving some results from the analysis of the ETL process (Part 1 – The Analysis), it quickly becomes evident that it is not sensible to aim for an “egg-laying woolly-milk-sow” (as we say in German: “eierlegende Wollmilchsau” :-)), i.e. a single system that is supposed to do everything at once.
However, if the analysis has been thorough and painstaking, the requirements for the ETL process should be clear by then. Even though there should be room for extensions and new functionality (especially in an agile environment), clear-cut red lines should be drawn.

Here are the first two of my top four guidelines:

  • Beware of too much proprietary or closed-source software.

    My best friends, the large-scale ETL tools and frameworks, fall into that category.
    Some people, especially staff members of big consulting companies, would strongly disagree. Usually, their chief argument is that the use of tools or frameworks decreases the degree of dependence on software developers. That is nothing but the truth!
    But what is equally true is that the use of tools or frameworks increases the degree of dependence on the staff of big consulting companies or software vendors, and on specialized developers who are proficient in those tools and frameworks.
    Without a doubt you won’t find people like these around every corner. To top it all off, they are usually significantly more expensive than developers who are not as highly specialized in developing or customizing a very specific product. And finally, there is the cost of the products themselves.
    On the other hand, the chances of finding some really brilliant developers with excellent skills in the fields mentioned below are much better. Even if most of them leave the team after the system has gone into production, it should not be too difficult to find new developers when necessary. If there hasn’t been a knowledge drain and the system is well documented, the integration of new team members should be quite smooth.

    Skills needed or appropriate for developing the entire ETL process (strongly IMHO); a small sketch of what such hand-coded building blocks can look like follows the list:

    • SQL, T-SQL, PL/SQL for Stored Procedures
    • C# for SQL Server CLR Stored Procedures
    • Java for Oracle Java Stored Procedures
    • bash or PowerShell for shell script programming
    • Shell tools like grep, awk, sed, etc.
    • FTP script programming
    • cron job or scheduler programming
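
    To give an impression of what such hand-coded building blocks can look like, here is a minimal T-SQL sketch of a staging load procedure. All schema, table, and column names (etl, raw_in, stage1, proc_log, and so on) are invented for illustration; consider it a sketch, not a finished implementation.

        -- Minimal sketch of a hand-coded staging step (illustrative names only).
        CREATE PROCEDURE etl.load_stage_customer
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @rows int;

            -- Copy the raw delivery into the first staging level,
            -- normalizing trivial formatting on the way.
            INSERT INTO stage1.customer (customer_no, customer_name, delivery_id)
            SELECT LTRIM(RTRIM(r.customer_no)),
                   LTRIM(RTRIM(r.customer_name)),
                   r.delivery_id
            FROM   raw_in.customer AS r;

            SET @rows = @@ROWCOUNT;

            -- Write a protocol record so the run can be traced later on.
            INSERT INTO proc_log.run_protocol (step_name, row_count, run_at)
            VALUES ('load_stage_customer', @rows, SYSUTCDATETIME());
        END;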

  • Data sources are responsible for their own data quality.

    In literally all of my data warehouse projects, without exception, we have discovered data quality issues in the source systems. Fortunately, this usually happens at a quite early stage of the project. The start of a data warehouse project can sometimes even take credit for the discovery of serious flaws in source and/or legacy systems (a minimal sketch of surfacing such issues follows below).
    The data warehouse process must not be the sweeper that eliminates the slip-ups from earlier stages of the data flow. We all know that nothing lives longer than a quick workaround. There is no doubt about the necessity to provide these workarounds to avoid showstoppers, especially at an early stage of the production phase. But by all means try to get rid of them as quickly as possible. They turn out to be a heavy burden in the long run.
    Sometimes the need arises to promote data quality on an enterprise-wide level. It can be necessary to escalate those issues through the corporate or department hierarchy, sometimes even up to the CIO and/or the CTO.
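
    To make the first point a bit more concrete, here is a minimal T-SQL sketch of surfacing a data quality issue instead of silently repairing it. All names (dq.rejected_customer, stage1.customer and its columns) are invented for illustration, and birth_date is assumed to arrive as a raw text field.

        -- Sketch: flag bad source rows instead of silently "repairing" them
        -- (illustrative names only).
        INSERT INTO dq.rejected_customer (customer_no, reject_reason, delivery_id)
        SELECT s.customer_no,
               'missing or unparsable birth_date',
               s.delivery_id
        FROM   stage1.customer AS s
        WHERE  TRY_CONVERT(date, s.birth_date) IS NULL;

        -- The rejected rows are reported back to the owners of the source system;
        -- the warehouse does not guess a "correct" value.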

The subject of the next post will be the remaining points of the top four guidelines list.

Please feel free to register any time. 🙂

 

The ETL process – Part 1 – The Analysis

Diagram of a part of an ETL process in a typical graphical ETL tool

According to the principles I have briefly described in the first post of this blog (http://dwhblog.org/what-to-expect-from-this-data-warehouse-blog/), the very first step is the analysis.

I have always been quite annoyed by graphical ETL tools. This is surely a matter of personal taste, but I think that most of those systems are quite redundant and not very efficient at modeling the ETL process. Plus, many of them are quite expensive, and I hardly ever saw real value in return for their impact on the project budget.

In one of my biggest DWH projects we evaluated a handful of the leading ETL tools on the market. None of them was really able to cope with all of our requirements. So we finally decided to develop an ETL system ourselves. The overall cost turned out to be significantly lower than the yearly price of even the least expensive commercial tool.

After thorough analysis and development we had an ETL system that consisted mainly of shell scripts, human-readable files for configuration and metadata, and stored procedures. Of course we also established a consistent system for the documentation of the process and its parameters.

All the findings from this analysis (and more) have been the basis for a data warehouse framework I have developed. This framework no longer uses shell scripts but is hosted completely in the database system.

Main results of the analysis

The following paragraphs describe the main findings of the analysis from a bird’s eye view. However, not all aspects are mentioned here. There will be dedicated posts describing everything in greater detail.

Data delivery structure:

  • Data sources deliver data in data delivery groups.
  • Data delivery groups encapsulate one or more interfaces.
  • Interfaces consist of data fields.
  • Data fields are characterized by data type, length, domain, etc. (a metadata sketch follows this list).
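
A delivery structure like this can be captured in a small set of metadata tables. The following T-SQL sketch uses invented names (the meta schema and all columns) purely for illustration.

    -- Sketch of metadata tables describing the delivery structure (illustrative names).
    CREATE TABLE meta.data_delivery_group (
        delivery_group_id   int          NOT NULL PRIMARY KEY,
        delivery_group_name varchar(100) NOT NULL,
        source_system       varchar(100) NOT NULL
    );

    CREATE TABLE meta.interface (
        interface_id      int NOT NULL PRIMARY KEY,
        delivery_group_id int NOT NULL
            REFERENCES meta.data_delivery_group (delivery_group_id),
        interface_name    varchar(100) NOT NULL
    );

    CREATE TABLE meta.data_field (
        field_id     int          NOT NULL PRIMARY KEY,
        interface_id int          NOT NULL REFERENCES meta.interface (interface_id),
        field_name   varchar(100) NOT NULL,
        data_type    varchar(30)  NOT NULL,  -- e.g. 'varchar', 'int', 'date'
        field_length int          NULL,
        domain_name  varchar(100) NULL       -- allowed value domain, if any
    );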

Staging area:

  • Data from interfaces flow into the staging area.
  • The staging area is divided into different staging levels.
  • The data flows through the staging levels according to a well-defined staging priority.
  • Because staging can be very time-consuming, the ETL process should be able to dynamically minimize staging depending on the actual data input (see the sketch after this list).
  • Master data is part of the staging area.
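
One possible way to implement the dynamic minimization mentioned above, again only as a T-SQL sketch with invented names (meta.delivery_content, etl.load_staging_level_2, proc_log.run_protocol):

    -- Sketch: skip a staging level when the current delivery contains no rows for it
    -- (illustrative names only).
    DECLARE @delivery_id int = 4711;  -- key of the current data delivery (example value)

    IF EXISTS (SELECT 1
               FROM   meta.delivery_content AS dc
               WHERE  dc.delivery_id   = @delivery_id
               AND    dc.staging_level = 2
               AND    dc.row_count     > 0)
    BEGIN
        EXEC etl.load_staging_level_2 @delivery_id = @delivery_id;
    END
    ELSE
    BEGIN
        INSERT INTO proc_log.run_protocol (step_name, note, run_at)
        VALUES ('load_staging_level_2', 'skipped - no input data', SYSUTCDATETIME());
    END;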

Multidimensional data:

  • Part of the data of the staging area flows into the multidimensional data area.
  • Multidimensional data is made up of dimensions and facts.
  • Combinations of dimensions and facts form data cubes (a minimal star-schema sketch follows this list).
  • Different cubes can share dimensions, but use different hierarchies or consolidation paths.
  • Multidimensional data consists of different levels of consolidation.
  • Because consolidation can be very time-consuming, the ETL process should be able to dynamically minimize consolidations depending on the actual data input.
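
To illustrate how dimensions, facts, and consolidation levels hang together, here is a minimal star-schema sketch; the tables and columns (dim.calendar_day, fact.sales, and so on) are invented for illustration.

    -- Sketch of one dimension and one fact table forming a simple cube (illustrative names).
    CREATE TABLE dim.calendar_day (
        day_id       int  NOT NULL PRIMARY KEY,
        calendar_day date NOT NULL,
        month_id     int  NOT NULL,  -- consolidation level: month
        year_id      int  NOT NULL   -- consolidation level: year
    );

    CREATE TABLE fact.sales (
        day_id       int            NOT NULL REFERENCES dim.calendar_day (day_id),
        product_id   int            NOT NULL,  -- key of a shared product dimension
        region_id    int            NOT NULL,  -- key of a shared region dimension
        sales_amount decimal(18, 2) NOT NULL
    );

    -- A consolidated (monthly) level of the same cube can then be derived:
    SELECT d.month_id, f.product_id, SUM(f.sales_amount) AS sales_amount
    FROM   fact.sales AS f
    JOIN   dim.calendar_day AS d ON d.day_id = f.day_id
    GROUP  BY d.month_id, f.product_id;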

Process data:

  • Process data consists of log data, status data, protocol data, etc. (a sketch of a central protocol table follows below).
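
A simple, central run protocol table is typically at the heart of the process data; the following sketch once more uses invented names.

    -- Sketch of a central run protocol table (illustrative names).
    CREATE TABLE proc_log.run_protocol (
        protocol_id bigint        IDENTITY(1,1) NOT NULL PRIMARY KEY,
        step_name   varchar(200)  NOT NULL,
        status      varchar(20)   NOT NULL DEFAULT 'OK',  -- e.g. OK, WARNING, ERROR, SKIPPED
        row_count   bigint        NULL,
        note        varchar(4000) NULL,
        run_at      datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
    );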

User access and security data:

  • User access and security data define the association of users with certain slices of the data cubes (one possible model is sketched after this list).
  • They are also important for data-driven subscriptions for automatic deployment of reports.
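
One possible way to model this, sketched with invented names (sec.user_region_access; SUSER_SNAME() returns the login of the current user in SQL Server): a mapping table from users to dimension members that is applied as a row-level filter on the cube.

    -- Sketch: map users to the slices of a cube they are allowed to see (illustrative names).
    CREATE TABLE sec.user_region_access (
        user_login varchar(100) NOT NULL,
        region_id  int          NOT NULL,
        PRIMARY KEY (user_login, region_id)
    );
    GO

    -- A view that restricts the fact table to the caller's regions can then serve
    -- as the data source for reports and data-driven subscriptions.
    CREATE VIEW sec.sales_for_current_user
    AS
    SELECT f.day_id, f.product_id, f.region_id, f.sales_amount
    FROM   fact.sales AS f
    JOIN   sec.user_region_access AS a
           ON  a.region_id  = f.region_id
           AND a.user_login = SUSER_SNAME();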

Map and geospatial data:

  • Even though map and geospatial data are part of the interfaces and staging area, they often need some special processing.

Data marts:

  • Subject-oriented data marts are data sources for associated BI systems.
  • Data marts are built from subsets of multidimensional data, master data, process data, user access and security data, and map & geospatial data.
  • The structures of the data cubes in a data mart don’t necessarily match exactly the structures of the cubes in the DWH.
  • Because data mart maintenance can be very time-consuming, the ETL process should be able to dynamically minimize data mart maintenance depending on the actual data input.
  • The downtime resulting from data mart maintenance must be as short as possible (one possible approach is sketched after this list).
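
One of several possible ways to keep that downtime short, again only as a T-SQL sketch with invented names: build the new version of a mart table next to the live one and switch it in with a quick rename.

    -- Sketch: rebuild a data mart table offline and switch it in with a fast rename
    -- (illustrative names only).

    -- 1. Build the new version next to the live one; reports keep using the old table.
    SELECT d.month_id, f.product_id, SUM(f.sales_amount) AS sales_amount
    INTO   mart.sales_cube_new
    FROM   fact.sales AS f
    JOIN   dim.calendar_day AS d ON d.day_id = f.day_id
    GROUP  BY d.month_id, f.product_id;

    -- 2. Swap the tables inside a short transaction; this is the only downtime.
    BEGIN TRANSACTION;
        EXEC sp_rename 'mart.sales_cube',     'sales_cube_old';
        EXEC sp_rename 'mart.sales_cube_new', 'sales_cube';
    COMMIT TRANSACTION;

    -- 3. Drop the old version once the switch has been verified.
    DROP TABLE mart.sales_cube_old;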

 

After this overview of the main results of the analysis, the subject of the next post will be “guidelines and standards”.

Please feel free to register any time. 🙂