Big Data Technology
Anyone who’s been following the rapid-fire technology developments in the world that is becoming known as “big data” sees a new capability, product, or company emerge literally every week. The ambition of all of these players, established and newcomer alike, is tremendous, because the potential value to business is enormous. Each new arrival aims to address the pain that enterprises are experiencing around unrelenting growth in the velocity, volume, and variety of the data their operations generate.
What’s being lost in some of this frothy marketing activity, however, is that it’s still early days for big data technologies. Vexing problems are slowing their growth and practical implementation. For the technologies to succeed at scale, there are several fundamental capabilities they should provide, including stream processing, parallelization, indexing, data evaluation environments, and visualization.
When evaluating big data technology, it can be valuable to ask companies about their ability to deliver some of these fundamental capabilities. If you get an unsophisticated answer, you may find that the company is not as serious or capable as you might have expected. (For my research on this topic, please see Designing a Scalable and Agile Big Data Platform.)
In this article, we examine some of the big data requirements that are partially defined or in early stages of maturity. Any big data vendor worth considering should be able to address these requirements now or in the near future, or confidently explain their position. We sat down with Mike Driscoll, CTO of Metamarkets, a big data company that delivers predictive analytics solutions for digital media, to develop a checklist for evaluating new solutions and their fit criteria against the challenges of big data:
Some general questions to begin the evaluation process:
- Does the solution allow for stream processing and incremental calculation of statistics?
- Does the solution parallelize processing and take advantage of distributed computing resources?
- Does the solution perform summary indexing to accelerate queries of huge datasets?
- What data exploration and evaluation environments does the solution provide to enable a quick understanding of the value of new datasets?
- How does the solution directly provide, or easily integrate with, visualization tools?
- What is the strategy for verticalization of the technology?
- What is the ecosystem strategy? How does the solution provider fill the gaps in its capabilities through partnerships?
Getting these questions answered will put most vendors to the test and help improve your understanding of the technology you are evaluating.
Stream processing
As the pace of business has increased, and the number of instrumented business processes has expanded, our attention is increasingly focused not on “data sets” but on “data streams.”
“Decision-makers are interested in putting their finger on the pulse of their organization, but to get answers in real time, they require architectures that can process streams of data as they happen,” Driscoll says. “Current database technologies are not well suited to this kind of stream processing.”
For example, calculating an average over a group of data can be done in a traditional batch process, but far more efficient algorithms exist for calculating a moving average of data as it arrives, incrementally, unit by unit. If you want to take a repository of data and perform almost any statistical analysis, that can be accomplished with open source products like R or commercial products like SAS. But if you want to create a set of streaming statistics, to which you incrementally add or remove a chunk of data as a moving average, the libraries either don’t exist or are immature.
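The incremental approach described above can be sketched in a few lines of Python. This is an illustrative sketch, not any particular library’s API: a windowed moving average that updates in constant time as each unit of data arrives, rather than recomputing over the whole batch.

```python
from collections import deque

class MovingAverage:
    """Maintain a moving average over the last `window` values,
    updating incrementally as each new unit of data arrives."""

    def __init__(self, window):
        self.window = window
        self.values = deque()
        self.total = 0.0

    def add(self, x):
        # Incorporate the new value and evict the oldest one if the
        # window is full: O(1) work per arrival, no batch recompute.
        self.values.append(x)
        self.total += x
        if len(self.values) > self.window:
            self.total -= self.values.popleft()
        return self.total / len(self.values)

ma = MovingAverage(window=3)
for reading in [10, 20, 30, 40]:
    print(ma.add(reading))  # 10.0, 15.0, 20.0, 30.0
```

The same pattern, a small running state updated per arrival instead of a full pass over a repository, is what distinguishes streaming statistics from the batch analyses that tools like R or SAS perform well today.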
“The entire ecosystem around streaming data is underdeveloped,” says Driscoll.
In other words, if you’re talking to a vendor about a big data project, you have to determine whether this kind of stream processing is important to your project and, if it is, whether the vendor has the capability to provide it. This requirement extends all the way down: not just to the analytical algorithms that run over the streams, but also to the way those streams are queued, ingested, managed, and ultimately processed.
Many architectures exist for ingesting and queuing data streams, some of which are proprietary; TIBCO, Esper, and ZeroMQ all offer solutions. But those solutions are only about moving packets of data around. Actually analyzing the streams requires practitioners to build at a lower level, for which there is another rapidly evolving toolset.
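In the simplest terms, that division of labor looks like this: one layer queues and moves the data, and a separate layer consumes it and does the analysis. The sketch below uses Python’s standard-library queue as a stand-in for a messaging system such as ZeroMQ; all names are illustrative, not any vendor’s API.

```python
import queue
import threading

# The transport layer: a queue that just moves units of data around.
events = queue.Queue()

def producer():
    # A stand-in for a data feed; None signals the end of the stream.
    for price in [10, 20, 30, None]:
        events.put(price)

def consumer(results):
    # The analysis layer: consume the stream and compute a running
    # average incrementally, one unit of data at a time.
    total, count = 0.0, 0
    while True:
        price = events.get()
        if price is None:
            break
        total, count = total + price, count + 1
        results.append(total / count)

results = []
t = threading.Thread(target=producer)
t.start()
consumer(results)
t.join()
print(results)  # [10.0, 15.0, 20.0]
```

The point of the separation is that the transport layer knows nothing about the analysis, which is exactly why moving packets around is the easy part and the analytical toolset on top is where the ecosystem is still maturing.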
Parallelization
There are many definitions of big data. Here’s a useful one: “small data” fits in memory on a single desktop or machine, with 1 GB to 10 GB of disk space. “Medium data” fits on a single hard drive of 100 GB to 1 TB. “Large data” is distributed over many machines, comprising 1 TB to multiple petabytes.
“If you want to work with distributed data, and you expect to have any hope of processing that data in a reasonable amount of time, that requires distributed processing,” Driscoll says.
Parallel processing comes to the fore with distributed data. Hadoop is one of the better-known examples of distributed, or parallelized, processing. Hadoop can do more than distributed processing: it can also conduct distributed queries, which have recently been a subject of interest for designers of massively parallel processing (MPP) databases, whose goal is to take a query and parallelize it across a set of nodes. Each node does partial work for the query in parallel, and those partial answers are then combined into a single unified answer, Driscoll explains. Parallelizing queries is not a simple affair. When something is being analyzed in a parallel stream, each new unit of data must be combined with an existing unit of data in order to produce an answer.
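The combine step is where the subtlety lives, and a toy example makes it concrete. A distributed average cannot be computed by averaging the nodes’ averages; each node must return a partial answer, here a (sum, count) pair, which the coordinator merges. The thread pool below is a stand-in for a set of nodes; this is a sketch of the technique, not any MPP vendor’s implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_aggregate(shard):
    """Each 'node' computes a partial answer for its shard of the
    data: a (sum, count) pair, which can be merged losslessly."""
    return (sum(shard), len(shard))

def parallel_average(shards):
    # Fan the query out across the shards in parallel, then combine
    # the partial answers into a single unified answer.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(partial_aggregate, shards))
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

shards = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(parallel_average(shards))  # 5.0
```

Note that naively averaging the three shard averages (2.0, 4.5, 7.5) would give the wrong result; choosing partial answers that merge correctly is precisely the design problem MPP query planners solve.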
Therefore, if a vendor is trying to sell you a solution for addressing big data at scale, their salespeople should be able to articulate their special secret sauce and strategy for parallelization.
“One of the most important features that current data warehouse vendors must offer is the ability to do parallel copy from Hadoop into their warehouse,” Driscoll says. “So, whether it’s EMC Greenplum offering the ability to do distributed parallel copy from Hadoop to Greenplum, Netezza or Oracle, the ability to parallelize data transfer is a critical feature.”
Hadoop has a massively distributed file system and can support distributed queries on top of it. But it does not inherently optimize those queries: running a parallel process on Hadoop without an algorithm for optimizing queries can significantly slow the process, taking minutes to return an answer. That is acceptable for some queries, but it won’t support real-time analytics in a big data world. The power and speed of that algorithm will be a determining factor in the robustness and cost of the solution, and it should be appropriately scaled to your needs, says Driscoll.
Summary indexing
Summary indexing is the process of creating a pre-calculated summary of data to speed up queries. The problem with summary indexing is that it requires you to plan in advance what kinds of queries you are going to run, so it is limiting. The most common form of summary indexing is the star schema used to support speedy searches in data warehouses. A star schema prioritizes one master dimension (such as location or product) in advance when building a multi-dimensional data cube, organizing subordinate dimensions in relation to the master. The technique works well, but has one huge problem: if you want to ask a new question, it takes a long time to reconfigure the schema and its associated data cubes and recompute them. When we begin to ask questions of all our data, not just the structured data, it is practically impossible to create a star schema to answer every possible question. The problem is not pre-processing; it’s the difficulty of reconfiguring the pre-processing as needed to ask new questions and get a speedy response.
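A toy example makes the limitation concrete. A summary index pre-aggregates along a dimension chosen in advance; queries along that dimension become instant lookups, but a new question along a different dimension forces a recompute over the raw facts. The data and field names below are illustrative, not any product’s schema.

```python
from collections import defaultdict

# Raw fact rows: (region, product, month, sales)
facts = [
    ("west", "widget", "2012-01", 100),
    ("west", "gadget", "2012-01", 150),
    ("east", "widget", "2012-02", 200),
]

def build_summary(facts, dimension):
    """Pre-compute a summary index keyed on one chosen dimension.
    Queries along that dimension become O(1) lookups, but a new
    question along another dimension needs a fresh summary."""
    index = defaultdict(int)
    for region, product, month, sales in facts:
        key = {"region": region, "product": product, "month": month}[dimension]
        index[key] += sales
    return dict(index)

by_region = build_summary(facts, "region")
print(by_region["west"])  # 250

# Asking a new question, by product instead of region, means
# going back to the raw facts and recomputing the summary:
by_product = build_summary(facts, "product")
print(by_product["widget"])  # 300
```

At data-warehouse scale that recompute is the expensive step, which is why agility in creating new summaries matters more than the speed of any single pre-built one.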
The ideal solution would easily adjust the summaries being created as new questions arose. With a quickly created, summarized form of the data, it would then be possible to use data-analysis tools such as QlikView, Tableau, or TIBCO Spotfire for exploration and analysis. But there is currently a gap in tools to make this summary creation easier, as many of them don’t reach down to machine-level data, says Driscoll. The result is that the IT department becomes involved in building a custom query.
Some help is on the way for this problem. Vendors such as Splunk have emerged with a solution based on their search language that makes creating summary indexes far faster than other approaches, like star schemas. The designers of technology like SAP HANA, 1010 Data, and Metamarkets recommend an in-memory approach that completely abandons summarizing, by keeping vast amounts of data in in-memory systems.
But data volumes are growing fast and the need for summarizing will never go away completely. For the short and medium term, vendors must have a strategy for agile creation of summary indexes.
Data evaluation environments
How does your vendor’s solution understand, or allow you to understand, the meaning of new datasets and incorporate them into your analysis?
For example, if your business has a retail store, and you are studying transactions, you could pick up the movements of people around the web site, or even around brick-and-mortar stores through opt-in GPS signals and cell-phone tracking. Once you acquire the data:
- How do you incorporate and understand what that data can tell you?
- How do you develop a model of the store that will help you analyze customer movements?
- How do you understand when those movements become events?
- How fast can you figure that out—before the customer leaves the store?
To answer these questions, your solution needs to be able to join disparate datasets. Very few vendors have a distinguished capability for joining datasets. And just as the most critical areas of a building are at its joints, the same is true of data architecture: where datasets interface, tremendous value can be unlocked, Driscoll says.
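A minimal sketch of what joining disparate feeds means in practice, with hypothetical field names: two datasets that share only a customer identifier are merged where both sides know about the customer.

```python
# Two disparate feeds keyed by a shared customer identifier
# (illustrative field names, not any vendor's schema).
online_clicks = {
    "cust1": {"ad_clicks": 3},
    "cust2": {"ad_clicks": 1},
}
store_visits = {
    "cust1": {"visits": 2},
    "cust3": {"visits": 5},
}

def join_datasets(left, right):
    """Inner-join two feeds on their shared key, merging the
    attributes where both sides know about the same customer."""
    joined = {}
    for key in left.keys() & right.keys():
        joined[key] = {**left[key], **right[key]}
    return joined

print(join_datasets(online_clicks, store_visits))
# {'cust1': {'ad_clicks': 3, 'visits': 2}}
```

The hard part in the real world is not the join operation itself but establishing that shared key across feeds that were never designed to line up, which is exactly the threading problem Driscoll describes below.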
“The holy grail for anyone in the online retail space is to understand the connection between online impression events that lead to actions, such as clicks, or some level of engagement that eventually lead to purchasing behaviors, which eventually lead
to long-term customer adoption,” Driscoll says. “Right now, all of these datasets live in different places. American Express knows where you’ve purchased your Starbucks coffee and Foursquare knows where you checked in at Starbucks, and Yahoo! knows when you clicked on an ad discount for a Starbucks latte on a hot summer day. And yet, people are struggling to thread these various data streams together.”
As enterprises begin to draw in disparate data feeds—particularly mobile data
feeds—it’s critical that a vendor has a solution for rapidly joining disparate datasets,
because the information they contain is critical for enterprises.
Visualization
There are two broad categories of visualization tools, according to Driscoll. Exploratory visualization describes tools that allow a decision-maker and an analyst to explore different axes of the data for relationships, which usually involves some kind of visual “mining for insights.” Tools such as Tableau and TIBCO Spotfire, and to a lesser extent QlikView, fit into this category, Driscoll says. Narrative visualizations are designed to examine a particular axis of the data in a particular way. For instance, say you want to look at a time-series visualization of sales broken down by geography: a format for that visualization can be pre-created, and the data can be played back month by month for every geography, sorted into a pre-cast format. Vendors such as Perceptive Pixel fit into this category.
In narrative visualizations, “Certain knobs are free to explore the data, but it’s not completely open to ask any question,” Driscoll says. “These visualizations are designed to tell a certain story about the data, just as certain pre-computations or reports are designed to tell a certain story. And some tools are better for the first, ad-hoc exploratory model, and others are better for the second, constrained narrative.”
Mind the Verticals
There are nearly as many types of decision-making needs in different verticals as there are ways to collect, process, and analyze data. Vendors should be aware that decision-makers, both within an organization and across verticals, are accustomed to seeing different kinds of visualizations.
“Any vendor that wants to serve the needs of those decision-makers ought to be
well aware of what those expected narratives are, because that will speed the adoption of that visualization tool,” Driscoll says, citing the preference for candlestick plots in the financial services industry.
Ecosystem strategy
The largest, most successful companies all spend tens of millions of dollars creating ecosystems around their products. These ecosystems are supported by product features and business models that allow the product to do its job, but also to work with other technologies and partners who extend the product or adapt it to special uses. If a product doesn’t have an ecosystem strategy, you may find that it is difficult to adapt to your needs, and that expertise to help with implementation and configuration is hard to come by.
This list of requirements for big data technology is not exhaustive, but it is a good start. Using these topics when evaluating big data technology will lead to a deeper understanding of both the solutions and the vendors behind them.
Dan Woods is chief technology officer and editor of CITO Research, a firm focused on the needs of CTOs and CIOs. He consults for many of the companies he writes about. For more stories about how CIOs and CTOs can grow visit