In a captivating talk at the recent Data Engineering Summit, Sudarshan Pakrashi, the Director of Data Engineering at Zeotap, addressed the elephant in the room for many data-driven organizations: the rising costs of managing and processing ever-expanding data sets. His solution was both simple and profound, reframing how organizations handle voluminous data.
Painting the Data Landscape
Pakrashi painted a vivid picture of the challenge with a relatable example. Suppose an organization like Zeotap tracks a million users’ impressions across 100,000 ads daily, with each user assigned a unique 64-bit hash key. The amassed data would be around 160 GB daily, ballooning to 50 terabytes monthly.
This scenario represented just one use case; real situations involve several analytic needs and a constantly expanding data repository. The underlying problem, he noted, is that storage and computation costs grow in proportion to the raw data size, placing a mounting financial burden on organizations.
Challenging the Convention
The novel approach Pakrashi proposed was based on challenging the need for absolute accuracy. In many operational contexts, like real-time alert systems or reporting dashboards, exact numbers aren’t necessary. Instead, a broad understanding of patterns and trends is often sufficient. This paradigm shift opened the door to new solutions that favor speed and scale over absolute precision.
Probabilistic Data Structures – The Solution
One such unconventional solution Pakrashi discussed was probabilistic data structures, specifically the Count-Min sketch. A Count-Min sketch estimates the frequency of elements in a data stream, such as impressions per user in this case, by hashing each element into a small, fixed-size table of counters. It trades a controlled loss of precision for dramatically reduced computation and storage needs.
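To make the structure concrete, here is a minimal Count-Min sketch in Python. It is an illustrative sketch only, not Zeotap's implementation; the salted blake2b hashing and the width and depth values are assumptions chosen for the example.

```python
import hashlib


class CountMinSketch:
    """A minimal Count-Min sketch: a depth x width grid of counters."""

    def __init__(self, width: int, depth: int):
        self.width = width    # counters per row; controls the size of the overcount
        self.depth = depth    # number of rows (hash functions); controls the confidence
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item: str, row: int) -> int:
        # Derive a row-specific hash by salting the item with the row number.
        digest = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.width

    def add(self, item: str, count: int = 1) -> None:
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item: str) -> int:
        # Collisions only ever inflate a counter, so the minimum across rows
        # is the tightest estimate and never undercounts.
        return min(self.table[row][self._index(item, row)] for row in range(self.depth))


# Example: counting ad impressions per user without one counter per (user, ad) pair.
sketch = CountMinSketch(width=2000, depth=5)
for user in ["u1", "u2", "u1", "u3", "u1"]:
    sketch.add(user)
print(sketch.estimate("u1"))  # at least 3, and exactly 3 with high probability
```

The key property is that the sketch's size is fixed by its width and depth, not by the number of distinct users or ads it has seen.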
Count-Min Sketch and Markov’s Inequality
The Count-Min sketch's error guarantee rests on a statistical result called Markov's inequality, which bounds the probability that a non-negative random variable exceeds a chosen multiple of its mean. By judiciously choosing that multiple, an upper limit can be placed on the sketch's error.
Applying this principle, Pakrashi explained that the chance of a counter in the sketch exceeding twice its expected value is at most 50%; equivalently, the probability of the count staying within twice the expected value is at least 50%. Using several independent hash rows and taking the minimum estimate then drives the failure probability down exponentially.
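For reference, the bound being invoked is Markov's inequality, stated here for a non-negative random variable X such as the over-count in a sketch cell:

```latex
\Pr[X \ge a] \;\le\; \frac{\mathbb{E}[X]}{a} \quad \text{for any } a > 0,
\qquad\text{so}\qquad
\Pr\bigl[X \ge 2\,\mathbb{E}[X]\bigr] \;\le\; \tfrac{1}{2}.
```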
Quantifying the Error
Applying this theory in practice starts with defining an acceptable error limit, such as 0.1% of the total count. From that margin and a desired confidence level, the sketch's dimensions can be chosen so that its estimates fall within the margin with the specified probability. The result is an efficient system providing near-real-time analytics within an acceptable error margin, at a fraction of the storage and computation costs.
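As a rough sketch of how that sizing works, the textbook Count-Min bounds (not the exact parameters quoted in the talk) tie the table's width to the error margin epsilon and its depth to the confidence 1 - delta:

```python
import math


def cms_dimensions(epsilon: float, delta: float) -> tuple[int, int]:
    """Textbook Count-Min sizing: estimates overshoot the true count by at most
    epsilon * N (N = total items added) with probability at least 1 - delta."""
    width = math.ceil(math.e / epsilon)     # wider table -> smaller overcount per query
    depth = math.ceil(math.log(1 / delta))  # more rows -> lower failure probability
    return width, depth


# Example: a 0.1% error margin (epsilon = 0.001) at 99.9% confidence (delta = 0.001).
width, depth = cms_dimensions(epsilon=0.001, delta=0.001)
counters = width * depth                    # 2719 * 7 = 19,033 counters
print(f"{width} x {depth} = {counters} counters, "
      f"~{counters * 8 / 1024:.0f} KiB at 8 bytes each")
```

At roughly 150 KiB, such a table is orders of magnitude smaller than the raw per-user counts it summarizes, which is where the storage and computation savings come from.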
The Impact
Pakrashi concluded his talk by sharing the transformational impact this approach had at Zeotap. Using probabilistic data structures, they managed to optimize efficiency and improve business outcomes significantly. The methodology promised to be a valuable addition to any data engineer’s toolbox, offering an innovative approach to handling and analyzing large datasets.
Conclusion
In an era where data continues to grow exponentially, Pakrashi’s talk at the Data Engineering Summit shed light on an inventive, cost-efficient strategy for managing large datasets. His insights provided food for thought for data engineers and leaders as they navigate the dynamic landscape of data management and analytics. The implementation of such cutting-edge methodologies promises to address pressing challenges and propel the industry forward.