My first car cost 100 British pounds. It was an Austin Mini with a 1-liter engine, so it didn't go very fast. Back in the 70s, I guess partly inspired by Starsky and Hutch, it was de rigueur to put go-faster stripes on your car to make it at least look like it went faster than it actually did!

The need for speed has always been a challenge for business intelligence too. We've never been able to analyze all of our data at the speed we wanted to. Large, complex queries could take hours to run and often timed out, forcing the user either to run them again, wasting more time, or to give up and make a decision based on gut feel.

To get around this with the hardware of the time, you could either aggregate your data, losing some of the detail, or simply not store all of the available data, archiving the excess. One solution to the problem was the development of OLAP cubes. The theory went that by storing pre-aggregated values for all possible combinations of dimensions and measures, you could report on data quickly, because the tool could go straight to the value you were looking for without running an enormous query.
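To make the idea concrete, here is a minimal Python sketch of a pre-aggregated cube; the dimension names and fact rows are invented for illustration. It materializes an aggregate for every grouping of dimensions (what SQL calls GROUP BY CUBE), so answering a query becomes a dictionary lookup rather than a scan:

```python
from itertools import combinations

# Toy fact table: (region, product, year, revenue) -- invented data.
facts = [
    ("EMEA", "Mini",   1974, 100.0),
    ("EMEA", "Mini",   1975, 120.0),
    ("APAC", "Mini",   1974,  80.0),
    ("APAC", "Cooper", 1975, 150.0),
]

N_DIMS = 3  # region, product, year -> 2^3 = 8 possible groupings

def build_cube(rows):
    """Pre-aggregate revenue for every subset of dimensions."""
    cube = {}
    all_subsets = (s for n in range(N_DIMS + 1)
                   for s in combinations(range(N_DIMS), n))
    for subset in all_subsets:
        for *dims, revenue in rows:
            # Keep the chosen dimensions; '*' marks one aggregated away.
            key = tuple(dims[i] if i in subset else "*" for i in range(N_DIMS))
            cube[key] = cube.get(key, 0.0) + revenue
    return cube

cube = build_cube(facts)

# Query time is now a dictionary lookup, not a table scan:
print(cube[("EMEA", "*", "*")])  # 220.0 -- total EMEA revenue
print(cube[("*", "*", "*")])     # 450.0 -- grand total
```

With d dimensions there are 2^d groupings to materialize, and the number of stored cells multiplies with the cardinality of each dimension, which is exactly where the trouble described below comes from.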

The downside of this approach was a huge maintenance overhead. Cubes had to be rebuilt every time the data changed (every night, or as often as you could refresh them); some literally exploded under the vast number of combinations of possible values; and cubes proliferated across the business just like spreadsheets, so nobody was quite sure which was the right one to use. This approach sparked the ROLAP vs. MOLAP wars, in which various vendors attempted to discredit each other's approaches.

No more compromise

Fast forward to 2014, and it looks like we may finally have the answer to the compromises we've historically made in business intelligence. The development of in-memory databases looks set to get rid of the speed issues once and for all, no matter how big the data set. When data is stored in main memory, queries against billions of rows can return in less than a second, and complex calculations can be performed without paying the huge performance penalty of reading and writing data to disk.

Now, in-memory technology to support BI is not a new thing. Some of the data discovery vendors have been using personal in-memory structures for some time to speed up analysis and make it more interactive. The problem is that this just reinvents those OLAP cubes: lots of personal data sets that cannot easily be shared across the business; scalability problems, because they are limited by the PC's own memory; and the inevitable anarchy that results from everybody having their own view of the business. Even when these structures are stored on a server, each user requires their own memory space, so they cannot scale beyond tens of users.

That has changed with the development of enterprise-class in-memory data platforms like SAP HANA. Capable of storing terabytes of data in memory thanks to a combination of falling memory prices and sophisticated compression, these platforms finally give us fast access to all of our data.
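HANA's compression is far more sophisticated than anything that fits in a few lines, but a minimal sketch of one standard columnar technique, dictionary encoding, shows why so much data can fit in memory; the column values below are invented for illustration:

```python
def dictionary_encode(column):
    """Store each distinct value once; represent rows as small integer codes."""
    dictionary = sorted(set(column))            # distinct values, stored once
    index = {value: i for i, value in enumerate(dictionary)}
    codes = [index[v] for v in column]          # one small integer per row
    return dictionary, codes

def decode(dictionary, codes):
    return [dictionary[c] for c in codes]

# A low-cardinality column compresses dramatically: tens of thousands of
# strings collapse to three dictionary entries plus a vector of tiny integers.
countries = ["UK", "UK", "Germany", "UK", "France", "Germany"] * 10_000
dictionary, codes = dictionary_encode(countries)

assert decode(dictionary, codes) == countries
print(dictionary)   # ['France', 'Germany', 'UK']
print(len(codes))   # 60000 codes, each needing only a couple of bits
```

A useful side effect of this kind of encoding is that filters and aggregations can run directly on the integer codes, which is part of what makes in-memory columnar scans so fast.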

Think even faster with Birst on HANA

Today we are announcing that Birst supports SAP HANA, both as a data source, if you already have your data warehouse in HANA, and, uniquely, for automated data warehousing from Birst. It's this second point that is so interesting, because there is no point having the fastest database on the planet if it still takes you months to deliver information to the business. With Birst, you can have your data ready for reporting in a matter of hours.

Clearly the major technical benefit is speed: in SAP's own benchmark test on a 100 TB, 100-billion-row data set, almost all queries ran in less than a second. Of course, the real business benefit is what you can do with that speed. More analyses run on more data can give you insights into your business that were simply not possible with those old, compromised systems. And although HANA doesn't come cheap, it is available to rent by the hour with HANA One on AWS if budgets don't stretch to those multi-terabyte systems.

One part of Birst that stands to gain greatly from HANA is Birst Visualizer, where sub-second response times will make exploring and visualizing data dramatically more interactive.

Finally, it's not all about faster queries. Birst brings a number of benefits to HANA customers in addition to the automated warehousing described earlier. They can now choose a low-TCO, native cloud BI tool with a consumer-grade user experience and an enterprise BI platform that meets all users' requirements, unlike limited data discovery tools such as SAP's Lumira.

Whether you use Birst on HANA to support real-time risk and compliance reporting, run business process analysis across all of your customer data, or improve your inventory planning, there is no doubt that agile BI from Birst and the speed of SAP HANA's in-memory data platform make an exciting combination, one set to transform the way we deliver BI to the business.