Re-imagining Analytics

Sridhar Gopalakrishnan, CEO & Founder of Xurmo Technologies

In a data-rich world, it is reasonable to imagine decisions being driven by careful analysis of data, using the power of increasingly cheap computing. In such a world, decision making is contextual, objective and, hopefully, more prudent. In particular, businesses should be able to navigate complex scenarios by consulting data-driven algorithms, rather than relying merely on human experience and ingenuity. All this should lead to greater efficiency, lower risks, better bottom lines and a lower cost of capital. Businesses would be free to test creative hypotheses quickly and fearlessly. Ideas would get the front seat in management councils, while business-as-usual decision-making would be relegated to computing infrastructure.

Ours would be a transformed world and we would be the better for it. At least, that is the promise of Big Data.

While we live in such a data-rich world and have cheap computing in abundance, we are yet to see ubiquitous, data-driven decision-making. The transformation is slow, tentative and prone to shuffles and stumbles. It remains the privilege of a few corporations that have the vision and stamina to ride out the long cycles of experimentation and customised analysis. In fact, less than 1% of enterprises worldwide have been able to commit the investment needed to evolve a decision culture centred on Big Data.

Drawing a parallel with similar industries – such as data centres or business applications, which used to be driven by on-premise server farms and outsourced software development but are increasingly shifting to the cloud and SaaS models – it seems natural to anticipate a similar shift in analytics. Data-driven decisions should be available on demand, contextualised to the data of an enterprise but without the cost, effort or risks associated with the incumbent model of customised analysis. Enterprises of any size, industry or location should be able to hook their data into analytical infrastructure and consult algorithms for decision recommendations.

And yet, experiencing data-driven decisions remains a challenge for enterprises globally.

So why has the shift not happened yet? That is a billion-dollar technology question.

After all, analytics per se is not new. Ever since the days of the first database and digitised transaction records, business managers have sought and obtained business intelligence. Statistical models and algorithms have also been around for decades!

The answer lies precisely in the legacy of old-world analytics. Traditionally, analysis has been the domain of statisticians and scientists. The computing industry’s answer, to enable them to discover patterns in data rapidly, was to invent the database schema and SQL to query it. The idea is to create a unifying structure in which to capture an enterprise’s data, and a language to retrieve relevant data from that structure. Analysts who know the data and its structure can locate data as needed and create insightful reports for the business to act on.

The SQL-based world requires analysts to know their data, understand it and transform it into insights and recommendations for the business user. It requires data to be made available in a central, clean structure, which must be defined before analysts can lay their hands on the data in the first place. All this takes time, simply because the human activity of defining data schemas, discovering data and transforming it into insights cannot be outsourced to machines.
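The schema-first workflow described above can be made concrete with a minimal sketch in Python using the standard-library sqlite3 module. The table, columns and sample records are hypothetical, chosen only to illustrate the point: the structure must exist, and be known to the analyst, before any question can be asked of the data.

```python
import sqlite3

# Step 1: the schema must be designed up front, before any data can be queried.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sales (
        region  TEXT,
        product TEXT,
        amount  REAL
    )
""")

# Step 2: incoming data must be cleaned and shaped to fit that predefined structure.
conn.executemany(
    "INSERT INTO sales (region, product, amount) VALUES (?, ?, ?)",
    [
        ("North", "Widget", 120.0),
        ("South", "Widget", 80.0),
        ("North", "Gadget", 50.0),
    ],
)

# Step 3: only an analyst who knows the schema can phrase the business question.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 170.0), ('South', 80.0)]
```

Each of the three steps is manual, human work; that is the bottleneck the rest of this article is concerned with.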

The legacy of SQL has crossed over to the Big Data world, with much worse consequences. Here the data comes in much larger quantities, from various sources and at an alarming clip. This is a chaotic world where data and requirements change rapidly, reflecting the reality of the world we live in. The new generation of analytical systems for Big Data is still built on the premise of SQL, where analysts employ their craft one enterprise, one data environment and one business requirement at a time. They still must collect, organise, discover, transform and analyse data manually, step after painfully slow step. While we have computing infrastructure to handle massive amounts of data and process them quickly, our approach to using data to help the business is antiquated.

To fulfil the promise of Big Data, we need to re-imagine the analytics process itself. What if any data could be brought to play and transformed automatically to deliver insights and recommendations to business? What if analysts and scientists could spend their creativity anticipating business needs and creating analytical models for an industry without the constraint of having to know data of each enterprise? What if their models could be re-used many times over across enterprises and yet, in the context of each enterprise?

What if analysis could be on-demand?

To enable on-demand analysis, contextualised to an enterprise, what is needed is an intelligent data management layer that breaks the boundaries imposed by the premise of SQL. The human activity mandated by SQL should be taken over by computing algorithms so that the journey from raw data to business decision is quick and effortless.

If one were to imagine the requirements of such an intelligent data management layer, it would probably have these capabilities:

  1. The ability to consume any and all data automatically, without manual scripting of business rules. The intelligent data management layer should be able to consume Big Data without compromising on any of the Vs – volume, velocity or variety.
  2. The ability to store all data in a uniformly discoverable manner without manual schema definition.
  3. A new language/interface to express the high-order logic of analysis, which can be interpreted by the system to perform hitherto manual tasks of analysis.
  4. An intelligence module to interpret the high-order logic of analysis and use it to locate, transform and perform analysis on data as required by an on-demand analytical model.
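The second capability above – uniformly discoverable storage without manual schema definition – can be sketched as schema inference: rather than an analyst designing tables up front, the system derives a unified structure from whatever records arrive. The following Python sketch is a hypothetical illustration of the idea, not a real product API; the function name and the rule of widening conflicting types to "mixed" are assumptions made for the example.

```python
def infer_schema(records):
    """Derive a unified field->type mapping from heterogeneous records,
    so that no analyst has to define a schema before data is consumed."""
    schema = {}
    for record in records:
        for field, value in record.items():
            seen = type(value).__name__
            if field not in schema:
                schema[field] = seen
            elif schema[field] != seen:
                # Conflicting types across sources are widened, not rejected.
                schema[field] = "mixed"
    return schema

# Records from different sources, with different shapes, are consumed as-is.
records = [
    {"customer": "A-101", "amount": 120.0},
    {"customer": "A-102", "amount": 80, "channel": "web"},
    {"customer": "A-103", "note": "walk-in"},
]
print(infer_schema(records))
# {'customer': 'str', 'amount': 'mixed', 'channel': 'str', 'note': 'str'}
```

A production system would of course go much further – resolving entities, tracking lineage, versioning the inferred structure – but the inversion is the point: the schema follows the data, instead of the data having to fit a schema.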


This system would indeed be the result of a comprehensive re-imagination of the traditional analysis process.


The Big Data analytics industry is still a fledgling one and, notwithstanding the hype, is caught in the dilemmas that afflict any new concept in its teething years. A section of the industry is still figuring out use cases that can motivate large-scale adoption based on justifiable RoI. Another section is busy figuring out ways to build competencies at scale to meet the impending need for data scientists. Yet another group, of computer scientists, is busy inventing technologies to make data storage and processing cheaper and faster. More tools are being invented to service the anticipated mass adoption. And thanks to such initiatives in these formative years, Big Data is already a close-to-$100-billion industry.

What is sobering, however, is the fact that this large industry is propped up on the shoulders of a small number of intrepid, early-adopter enterprises – a mere few thousand. This means we have some way to go before the opportunities in analytics become available to a larger audience. And that is precisely why a re-imagination of traditional analysis processes merits serious consideration.

So how would a newly imagined system of on-demand analysis manifest?

For one, this technology would mean a disruption of the incumbent business model of project-based pricing. Customers would pay only for the value they get, when they get it. The direct result would be an acceleration of sales cycles. Today, some of the leading analytics services vendors manage to close a few dozen deals per year. With on-demand, subscription-based analysis, it is not unreasonable to imagine much quicker sales cycles, much like how cloud infrastructure is provisioned online in minutes. With lower costs and less uncertainty about outcomes, customers would be emboldened to try more data-driven decision recommendations, leading to faster adoption of an analytics-based management culture.

On the supply side, it would open up opportunities for an analytics marketplace, where vendors who are experts in specific use cases can focus on differentiating their models rather than on the sales process itself. As the free market for analysis gets established, customers would have more choices, which would naturally lead to lower price points and easier adoption for enterprises.

We would then be well on our way to ubiquitous, data-driven decisions.
