From Precision to Efficiency: A New Perspective on Data Engineering

At the recent Data Engineering Summit, Sudarshan Pakrashi advocated for a groundbreaking approach to managing big data. His proposal of using probabilistic data structures could redefine industry norms, adding a fresh perspective to the discourse on data processing and storage.

In a captivating talk at the recent Data Engineering Summit, Sudarshan Pakrashi, the Director of Data Engineering at Zeotap, addressed the elephant in the room for many data-driven organizations: the rising cost of managing and processing ever-expanding data sets. His solution was both simple and profound, reframing how we handle voluminous data.

Painting the Data Landscape

Pakrashi painted a vivid picture of the challenge, demonstrating its magnitude with a relatable example. Suppose an organization like Zeotap tracks a million users’ impressions across 100,000 ads daily, with each user assigned a unique 64-bit hash key. The amassed data would be around 160 GB daily, ballooning to 50 terabytes monthly.

This scenario represented just one use case; real deployments involve multiple analytic needs running against a constantly expanding data repository. The underlying problem is that storage and computation costs grow in proportion to the raw data size, placing an ever-larger financial burden on organizations.

Challenging the Convention

The novel approach Pakrashi proposed was based on challenging the need for absolute accuracy. In many operational contexts, like real-time alert systems or reporting dashboards, exact numbers aren’t necessary. Instead, a broad understanding of patterns and trends is often sufficient. This paradigm shift opened the door to new solutions that favor speed and scale over absolute precision.

Probabilistic Data Structures – The Solution

The unconventional solution Pakrashi discussed was a family of probabilistic data structures, specifically the Count-Min sketch, which estimates the frequency of elements in a data stream, in this case impressions per user. These structures use hashing to approximate counts efficiently, trading a small loss of precision for dramatically reduced computation and storage needs.
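
The talk itself did not include code, but a minimal sketch along the following lines illustrates the mechanism. The class name, the hashing scheme, and the chosen dimensions are illustrative assumptions rather than Zeotap’s implementation: each item is hashed into one counter per row, and the minimum across rows serves as the estimate.

```python
import random

class CountMinSketch:
    """Minimal Count-Min sketch: `depth` rows of `width` counters each."""

    def __init__(self, width, depth, seed=42):
        self.width = width
        self.depth = depth
        rng = random.Random(seed)
        # One hash seed per row; a production system would use a proper
        # pairwise-independent hash family instead of Python's hash().
        self.seeds = [rng.getrandbits(64) for _ in range(depth)]
        self.rows = [[0] * width for _ in range(depth)]

    def _bucket(self, row, item):
        # Map the item to a counter index within the given row.
        return hash((self.seeds[row], item)) % self.width

    def add(self, item, count=1):
        # Record an occurrence by incrementing one counter per row.
        for r in range(self.depth):
            self.rows[r][self._bucket(r, item)] += count

    def estimate(self, item):
        # Collisions can only inflate counters, so every row over-counts;
        # the minimum across rows is the tightest available estimate.
        return min(self.rows[r][self._bucket(r, item)]
                   for r in range(self.depth))

# Illustrative usage: counting impressions for a hypothetical (user, ad) pair.
cms = CountMinSketch(width=2719, depth=7)
for _ in range(3):
    cms.add(("user_123", "ad_42"))
print(cms.estimate(("user_123", "ad_42")))  # at least 3, typically exactly 3
```

However many distinct (user, ad) pairs stream in, the memory footprint stays fixed at width times depth counters, which is where the storage savings come from.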

Count-Min Sketch and Markov’s Inequality

Count-Min sketches lean on a statistical result called Markov’s inequality, which bounds the probability that a non-negative quantity exceeds a chosen multiple of its average. By judiciously choosing that multiple, an upper limit can be placed on the estimation error.

Applying this principle, Pakrashi explained that the chance of a counter in the sketch exceeding twice its average value is at most 50%. Equivalently, the probability of the count being less than or equal to twice the average is at least 50%.
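
In symbols, this is the standard statement of Markov’s inequality for a non-negative quantity X (here, a counter’s value), together with the specialization used in the talk:

```latex
P(X \ge a) \le \frac{\mathbb{E}[X]}{a}
\quad\Longrightarrow\quad
P\!\left(X \ge 2\,\mathbb{E}[X]\right) \le \tfrac{1}{2},
\qquad
P\!\left(X \le 2\,\mathbb{E}[X]\right) \ge \tfrac{1}{2}.
```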

Quantifying the Error

Applying this theory to real-world scenarios starts with defining an acceptable error limit, such as 0.1%. Given that margin, the sketch’s width and depth can be chosen so that its counts fall within the margin at a desired confidence level. The result is an efficient system that delivers near-real-time analytics within an acceptable error margin at a fraction of the storage and computation cost.
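
In the standard Count-Min analysis, an error tolerance epsilon (expressed as a fraction of the total stream count) and a failure probability delta translate directly into the sketch’s dimensions. The helper below sketches that sizing; the 0.1% figure comes from the example above, while the 99.9% confidence level and the 8-byte counters are assumptions made purely for illustration.

```python
import math

def cms_dimensions(epsilon, delta):
    """Standard Count-Min sizing: the estimate overshoots the true count by
    at most epsilon * (total count) with probability at least 1 - delta."""
    width = math.ceil(math.e / epsilon)       # counters per row
    depth = math.ceil(math.log(1.0 / delta))  # number of independent rows
    return width, depth

# 0.1% error margin (from the example above) at an assumed 99.9% confidence.
width, depth = cms_dimensions(epsilon=0.001, delta=0.001)
memory_mb = width * depth * 8 / 1e6           # assuming 8-byte counters
print(width, depth, f"{memory_mb:.2f} MB")    # 2719 x 7 rows, about 0.15 MB
```

A fixed structure of a fraction of a megabyte can thus stand in for per-key counts that would otherwise occupy gigabytes, which is exactly the storage-for-precision trade described above.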

The Impact

Pakrashi concluded his talk by sharing the transformational impact this approach had at Zeotap. Using probabilistic data structures, they managed to optimize efficiency and improve business outcomes significantly. The methodology promised to be a valuable addition to any data engineer’s toolbox, offering an innovative approach to handling and analyzing large datasets.

Conclusion

In an era where data continues to grow exponentially, Pakrashi’s talk at the Data Engineering Summit shed light on an inventive, cost-efficient strategy for managing large datasets. His insights provided food for thought for data engineers and leaders as they navigate the dynamic landscape of data management and analytics. The implementation of such cutting-edge methodologies promises to address pressing challenges and propel the industry forward.
