Why you’re closer than you think to fulfilling the “Big Data Promise”
So you’re a CEO – or CTO or CIO – of an enterprise that has made massive investments in data and analytics based on the heady Big Data Promise of better decision making through a unified 360-degree view of your customers, suppliers and employees.
You’ve invested millions in data management and business process automation software, and were told by the Oracles, Siebels and SAPs of the world that the massive volumes of data their systems spit out would be a goldmine for your business.
The odds are that you’re not seeing the results you were led to expect, and your team is telling you it needs more time, more data, or both, to make the systems pay off.
As a businessperson, you know full well the risks of throwing good money after bad. But you also know that you’ve made too much progress generating data for your high-priced data scientists and their expensive analytics tools to abandon ship. If you think you’re within range of the Big Data Promise — maybe even 90-95% of the way there — shouldn’t you push a little more to get over the line?
I’ve been there as a CEO and asked myself the same question. And the answer, of course, is Yes – so long as you’re spending the incremental dollars on getting ROI as quickly and cleanly as possible. In fact, you must do this if you want to remain competitive.
The good news is that it’s highly unlikely that you need to ditch your data management and analytics systems for something else.
Far more likely is that you are experiencing the “data plumbing” scenario that Tom Davenport, author of “Big Data at Work,” outlined recently in the Wall Street Journal’s CIO Journal: “The dirty little secret of (Big Data) is that someone has always had to do a lot of data preparation before it can be analyzed. It is often so difficult to extract, clean, and integrate data that data scientists can spend 90% of their jobs doing those tasks.”
“The more sources and types of data,” Davenport concludes, “the more plumbing work is required” to get to the analytics stage.
Or, as I would put it even more bluntly: Your expensive, state-of-the-art analytics and visualization tools and the people who use them are starved for high-quality data, an imbalance that undermines the totality of your investment. What you need now is an approach that rectifies this imbalance and gets you faster to realizing the value of the progress you’ve already made.
As a CEO, I always liked the idea of going after “low-hanging fruit” — the notion that there’s usually a simple approach to harvesting value from an organization’s most reachable assets. Fortunately, there is a new breed of enterprise technology companies whose mission is to help organizations do just that — and in my opinion Tamr is among the most interesting.
Think of Tamr’s contribution as simplifying and unifying your vast troves of data, essentially by creating a clean, unified reference for the full volume and variety of all your enterprise’s data sources, attributes and records — and all their accompanying duplications, errors and inconsistencies.
How does Tamr do this? In short, it begins with the “secret sauce” — a blending of two powerful ingredients:
- machine learning algorithms — which connect, automatically and at scale, the vast majority of the sources while resolving duplications, errors and inconsistencies among source data; and
- human expert guidance — which uses people in the organization familiar with the data to weigh in on the mapping and improve its quality and integrity.
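The two-ingredient recipe above is what practitioners call human-in-the-loop entity resolution: let the algorithm auto-resolve the clear cases and route only the ambiguous ones to a human expert. Here is a minimal sketch of the idea — my own illustration using crude string similarity and made-up thresholds, not Tamr’s actual algorithms:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity between two source records (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def triage(record_a: str, record_b: str,
           auto_match: float = 0.9, auto_reject: float = 0.5) -> str:
    """Route a candidate pair of records: auto-link clear matches,
    auto-drop clear non-matches, and send only the uncertain middle
    band to a human expert familiar with the data."""
    score = similarity(record_a, record_b)
    if score >= auto_match:
        return "match"        # the machine handles the vast majority
    if score <= auto_reject:
        return "no-match"
    return "ask-expert"       # human guidance only where it adds value

# Example: the first pair is clearly the same entity, the second clearly
# is not, and the third is ambiguous enough to warrant expert review.
pairs = [
    ("Acme Corp.", "Acme Corp"),
    ("Acme Corp.", "Zenith Ltd."),
    ("Acme Corp.", "ACME Corporation"),
]
for a, b in pairs:
    print(a, "|", b, "->", triage(a, b))
```

The design point is the middle band: the thresholds trade off how much the machine decides on its own versus how often the people who know the data are asked to weigh in.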
Next, by connecting and “synchronizing” all of the data that enterprises have been aggregating and acquiring to feed their high-powered analytics tools, Tamr is embracing the data heterogeneity that is inherent in the modern enterprise.
Finally, and critically important to existing investments and ROI, Tamr is also embracing vendor heterogeneity. This amounts to a direct challenge to the “single vendor platform” philosophy that has driven IT investment decisions for decades. I’d submit that single-vendor dogma is fundamentally at odds with the heterogeneous reality of the modern enterprise. So, to accommodate and exploit vendor heterogeneity, Tamr is architecting its system to:
- work with structured and semi-structured sources (internal or third-party);
- complement ETL software; and
- integrate (through a RESTful API) with a wide variety of business intelligence and analytics tools.
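The practical upshot of a RESTful integration point is that any BI or analytics tool capable of issuing an HTTP request and parsing JSON can consume the unified records. The sketch below shows the general pattern; the endpoint URL and response shape are hypothetical placeholders of my own, not Tamr’s documented API:

```python
import json
from urllib.request import Request

# Hypothetical endpoint -- illustration only, not Tamr's documented API.
ENDPOINT = "https://unification.example.com/api/records?entity=customer"

# A BI tool would issue a GET like this (request is built but not sent here):
req = Request(ENDPOINT, headers={"Accept": "application/json"})

# Hypothetical response body: duplicated records from several source
# systems resolved into one unified record per real-world customer.
sample_body = """
{
  "records": [
    {"unified_id": "C-001",
     "name": "Acme Corporation",
     "sources": ["crm:8812", "erp:AC-77", "billing:0093"]}
  ]
}
"""

records = json.loads(sample_body)["records"]
for r in records:
    # Each unified record traces back to every contributing source system,
    # which is what gives analysts a single clean reference to query.
    print(r["unified_id"], r["name"], len(r["sources"]), "source records")
```

The key property for protecting existing investments is on display in the sample payload: the unified record keeps pointers back to each source system, so nothing upstream has to be ripped out or replaced.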
Taken together, the above ingredients are a recipe for enterprises to transform their existing data resources — wherever they are, in whatever format, regardless of their origin — into a rich stew that can be immediately consumed by your “starving data scientists” and their powerful data analytics tools.
So, I urge you to immediately take stock of the distance between your enterprise’s current reality and the Big Data Promise of a truly unified view of your customers, suppliers and employees. Don’t even think about ditching the investment that you have made in your existing systems or hitting the reset button. Consider paths and partners, like Tamr, that enable you to quickly bridge that last 5% – 10% to get to the ROI on your Big Data that you have earned, but have yet to realize.
Push a little bit harder. You are closer than you think.