In Part 1 of this blog, I gave a high-level comparison of traditional extract, transform and load (ETL) tools, desktop data preparation tools and Birst’s modern, built-for-the-cloud ETL tools for data analytics. In this blog, I’ll dive deeper into the eight key ways in which Birst, of the three options, is best “able” to meet the rigorous requirements of today’s enterprise users.

Purpose-built ETL for data analytics

To recap, the right data preparation solution for modern analytics is highly “able” – able to be used by any level of data consumer, from casual business analyst to IT developer. It must be able to receive input from any source and output transformed data to an analytic data model. It must be architected for modern multi-tenant cloud use, with appropriate user interfaces for both business and IT audiences. Importantly, the data prep process must support users’ insatiable demand for data, placing no limits on data volume, source complexity or user counts.

The diagram below illustrates the interconnected nature of the capabilities that, collectively, form the hallmark of a highly “able” modern data preparation tool.

Let’s dive deeper into each capability, to explore exactly what it comprises.

  1. Approachable: The solution needs to be appropriate for the job role. For business end users, that translates into an easy-to-use graphical user interface (GUI). Developers, in addition to the GUI, can also access a programming interface. Ease of use has always been the key trade-off between ETL tools and data prep tools: ETL tools have a developer focus and a long learning curve, while data prep tools, though easy to use, often have reduced functionality. Within the Birst solution family, there is a user-friendly experience designed for business end users and a more powerful, feature-rich experience designed for developers. Data prep developers can easily switch between the two.
  2. Accessible: The solution should be browser based, to leverage the skills most business users already have. Access via browser puts the power of data preparation in the hands of those who need it and eliminates the expensive tedium of desktop installs, patches, upgrades and security maintenance. This is a significant improvement over enterprise ETL tools, which tend to have desktop GUIs and require server installations, and over data prep tools, which generally require desktop installations. The Birst solution is browser-based, regardless of the location of source or target data, and no desktop or server installation is required.
  3. Programmable: The solution should not be limited to the GUI’s built-in transforms, because complex data transformations often require greater functionality. It should also offer a programming interface, which enables any manner of transformation to be created. Enterprise ETL tools are very programmable but put an additional burden on IT organizations. Desktop data prep tools are generally limited to pre-built transforms only. With Birst, both graphical transformations and programming are possible: Birst Pronto transforms enable drag-and-drop data transformation, and a Birst Custom transform object enables any transform to be scripted (a minimal sketch of what such a scripted transform might look like appears after this list).
  4. Scalable: The tool should support unlimited scalability, with no limits on data volumes (either input or output), the number of data sources or the number of transformations. Complex enterprise ETL tools are generally very scalable, but desktop data prep tools are limited by desktop resources. Because it is built for the cloud, Birst places no limits on data input volume, data output volume, data sources or number of transforms.
  5. Flexible: The data prep tool should be flexible, supporting multiple input and output options and reacting easily to data source changes. Enterprise ETL tools have significant flexibility; conversely, desktop data prep tools generally have fewer connectivity and input/output options. With Birst, flexibility is fundamental. The Birst solution connects to any data source, in any location, and can write relational or file output to any target. Birst is also built for change, automatically integrating new attributes and measures to enable, in turn, rapid integration of new data sources.
  6. Networkable: The data preparation efforts of each user should be available for others to leverage without re-creating connectivity or logic. Neither enterprise ETL nor desktop data prep tools have networking functionality, but with Birst, each data prep project can be “networked” – linked – with any other project. This creates a shared fabric of data for analysis while eliminating data silos, data duplication and “logic divergence.”
  7. Repeatable: The tool should allow for full automation of the entire data preparation process (source, integration and preparation, output), with no manual intervention required. Cumbersome enterprise ETL tools have robust scheduling capabilities, but desktop data prep tools often have no built-in scheduling. Birst enables every step in the data preparation process to be automated via scheduling and/or triggering. Automation can be driven either by Birst’s built-in scheduler or by any enterprise scheduling tool (a sketch of scheduler-driven triggering appears after this list).
  8. Extensible: Data preparation should not be limited to what a single tool can do. It is essential that the data preparation process be extensible to a “network” of services. The Birst solution can leverage external data preparation technologies. For example, scripts written in the R programming language can be integrated into Birst data preparation routines (a sketch of one such hand-off appears after this list).
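
To make the “Programmable” point more concrete, here is a minimal sketch of the kind of logic a scripted custom transform might encapsulate. It is illustrative only: the function name, the column names and the pandas-based approach are assumptions for the example, not Birst’s actual scripting interface.

```python
# Illustrative only: a scripted transform of the kind a custom transform
# object might wrap. Column names and the pandas-based approach are
# assumptions for this sketch, not Birst's actual scripting interface.
import pandas as pd

def custom_transform(df: pd.DataFrame) -> pd.DataFrame:
    """Derive fields that go beyond simple drag-and-drop transforms."""
    out = df.copy()
    # Conditional bucketing that a GUI transform may not express directly
    out["order_size_band"] = pd.cut(
        out["order_amount"],
        bins=[0, 100, 1000, float("inf")],
        labels=["small", "medium", "large"],
    )
    # Normalize free-text region codes before joining to a dimension table
    out["region_code"] = out["region_code"].str.strip().str.upper()
    return out

# Tiny in-memory example standing in for a real source
source = pd.DataFrame(
    {"order_amount": [42.0, 350.0, 4200.0],
     "region_code": [" emea", "Apac ", "amer"]}
)
print(custom_transform(source))
```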
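
The “Repeatable” point can be illustrated the same way. The sketch below shows how an external enterprise scheduler might trigger a cloud data preparation run over REST and wait for it to finish before releasing downstream steps. The endpoint, token, job name and response fields are hypothetical placeholders, not Birst’s documented API.

```python
# Illustrative only: an enterprise scheduler triggering a cloud data prep
# run over REST. The endpoint, token, job name and response fields are
# hypothetical placeholders, not Birst's documented API.
import time
import requests

API_BASE = "https://analytics.example.com/api"  # hypothetical endpoint
API_TOKEN = "replace-with-a-real-token"         # hypothetical credential
JOB_NAME = "nightly-sales-prep"                 # hypothetical job name

def trigger_and_wait(poll_seconds: int = 30) -> str:
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    # Kick off the full source -> prepare -> output run
    run = requests.post(f"{API_BASE}/jobs/{JOB_NAME}/runs",
                        headers=headers, timeout=30)
    run.raise_for_status()
    run_id = run.json()["runId"]

    # Poll until the run finishes so downstream steps can be gated on it
    while True:
        status = requests.get(f"{API_BASE}/jobs/{JOB_NAME}/runs/{run_id}",
                              headers=headers, timeout=30).json()["status"]
        if status in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(poll_seconds)

if __name__ == "__main__":
    print(trigger_and_wait())
```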
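
Finally, for the “Extensible” point, the sketch below shows one generic way an external R routine could be slotted into a data preparation flow: the data is handed off to an R script as a file, the script adds its results, and the flow picks the output back up. The script name, the CSV hand-off and the Rscript call are assumptions for the example, not Birst’s actual integration mechanism.

```python
# Illustrative only: handing a data prep step off to an external R script.
# The script name, CSV hand-off and Rscript invocation are assumptions for
# this sketch, not Birst's actual integration mechanism.
import subprocess
import pandas as pd

def run_r_step(input_df: pd.DataFrame) -> pd.DataFrame:
    input_df.to_csv("prep_input.csv", index=False)  # hand the data to R
    # score.R is a hypothetical script that reads prep_input.csv, appends
    # a model-score column and writes prep_scored.csv
    subprocess.run(
        ["Rscript", "score.R", "prep_input.csv", "prep_scored.csv"],
        check=True,
    )
    return pd.read_csv("prep_scored.csv")  # pull the enriched data back in
```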

Birst offers purpose-built ETL for data analytics

Birst has taken a carefully considered approach in developing data preparation tools for today’s data analytics users. To find out more about why Birst is far better “able” to meet the needs of data-driven enterprises, visit us on the web and follow Birst, an Infor company, on Twitter @BirstBI.

About the author:
Richard Reynolds is a Senior Director of Product Strategy at Birst, focused on the Financial Services industry. Richard is a veteran of the BI industry, having worked with analytics and data warehousing solutions from Business Objects, SAS, Teradata and SAP. He specializes in dashboards and in predictive and prescriptive analytics for the modern enterprise. In his BI role, Richard has guided many customers to obtain business value from their enterprise data.