Artificial Intelligence is no longer a futuristic concept—it’s now the backbone of digital transformation. As enterprises rush to integrate AI into their workflows, they’re faced with significant hurdles: fragmented tools, inefficient development cycles, and the inability to scale their applications. To address these challenges, DataStax has launched its AI Platform, built in collaboration with NVIDIA AI Enterprise.
By combining NVIDIA’s cutting-edge AI infrastructure with DataStax’s real-time data management tools and visual development capabilities, the platform promises to reduce AI development time by up to 60% while improving AI workload efficiency by 19x.
At AWS re:Invent 2024, Alejandro Cantarero, DataStax’s Field CTO, AI, and Jason McClelland, the company’s chief marketing officer, spoke with AIM Research to discuss the platform’s groundbreaking potential, real-world success stories, and its implications for enterprise AI.
The Rising Need for an End-to-End AI Platform
A key reason DataStax began developing its platform, which was announced in June and augmented by NVIDIA’s participation a few months later, was the realization, drawn from customer feedback and market discussions, that many enterprises were struggling with their AI tools. Companies face difficulties in determining which tools to use, how to integrate them, who needs to operate them, and how the components fit together into an end-to-end flow. Cantarero and McClelland noted that most tools available today are built for individual developers and work fine at a small scale. Applied to larger organizations, however, the complexity increases significantly: in an enterprise, different teams manage different stages of the lifecycle, making it essential to have tools that work together smoothly, regardless of which department is working on the project.
As McClelland explained:
“It breaks down when you’re part of a larger company. You need tools that integrate seamlessly so you don’t have to worry about handoffs.”
The 60% reduction in AI development time and the 19x performance improvement promised by the DataStax AI Platform are driven primarily by its integration and optimization of the surrounding tools and services. Without the platform, developers have to learn and integrate multiple APIs for ingestion, model providers, and other services, which not only consumes significant time but also requires constant adaptation to rapidly evolving systems. This is especially challenging for larger enterprises, where different teams manage different stages of the AI lifecycle. The platform streamlines this process by integrating those APIs and enabling seamless swapping of models and embedding partners, drastically reducing the time spent learning new systems.

The 19x performance boost is largely attributed to NVIDIA’s optimized hardware and services. NVIDIA’s model services, including embedding models and LLMs, are fine-tuned to run on specialized hardware, offering up to 2.5-4x improvements over regular GPUs. The DataStax AI Platform, Built with NVIDIA AI, eliminates this fragmentation: by integrating NVIDIA NeMo services for data ingestion, model training, and guardrails with tools like Langflow for workflow orchestration, DataStax provides a unified solution for enterprises to build and deploy AI applications faster.
“Before, it was a multi-week process. With Langflow, you can understand the logic chains and compare output much faster,” McClelland explained.
Why NVIDIA? Combining Performance and Precision
At the heart of the platform lies a strategic partnership with NVIDIA AI Enterprise, which offers optimized AI tools and services. NVIDIA’s NeMo suite provides prebuilt tools for guardrails, data ingest, model serving, model training, fine-tuning, and evaluation, which DataStax integrates into its unified workflow.
Cantarero highlighted why this partnership is pivotal:
“When we started our journey creating an AI platform as a service—focused on retrieval-augmented generation (RAG) patterns—we saw the potential to enable our customers to leverage their data to power apps like chatbots. NVIDIA provides a lot of the foundational pieces, like Lego building blocks, that can be used to build a broader set of genAI apps. Our goal in adding NVIDIA AI Enterprise to Langflow is to help enterprises add more advanced genAI capabilities to their applications and achieve better accuracy more quickly.”
Key NVIDIA integrations include:
- NVIDIA NeMo Retriever: Ingests and prepares unstructured data for use in generative AI applications.
- NeMo Guardrails: Programmable safety layers for hallucination protection, content moderation, and governance.
- NeMo Curator, Customizer, and Evaluator: Simplify fine-tuning and model evaluation, ensuring cost efficiency and precision for specific use cases.
- NVIDIA NIM: Optimized AI models ready to be integrated into application flows.
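To make the NIM integration concrete: NIM microservices are typically consumed through an OpenAI-style chat-completions API. The sketch below builds such a request body with the Python standard library only; the endpoint URL and model name are illustrative assumptions, not details from the article.

```python
import json

# Sketch: a NIM microservice exposes an OpenAI-compatible chat endpoint.
# The URL and model name below are hypothetical examples, not taken from
# the article or from DataStax documentation.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local NIM

def build_chat_request(prompt: str, model: str = "meta/llama-3.1-8b-instruct") -> str:
    """Serialize an OpenAI-style chat request body for a NIM endpoint."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
        "max_tokens": 256,
    }
    return json.dumps(body)

payload = build_chat_request("Summarize our Q3 support tickets.")
print(payload)
```

In a real flow this payload would be POSTed to the endpoint (for example with `urllib.request`), and the platform's value is that Langflow wires this call into the rest of the chain without hand-written glue.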
Key Components: Unified Tools for Speed and Accuracy
The DataStax AI Platform brings together critical components to address the two most pressing enterprise challenges: rapid deployment and accurate, contextual insights.
1. Langflow: Visual AI Workflow Orchestration
Langflow, DataStax’s visual development interface, allows enterprises to build, test, and deploy AI and agent workflows with minimal friction. By visually modeling logic chains, teams can iterate faster and compare outputs seamlessly.
Jason McClelland emphasized its value:
“What used to take weeks—juggling APIs for ingestion, vector embedding, and testing—can now be done visually, in days or hours.”
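The kind of logic chain Langflow lets teams model visually can be sketched in plain Python, with each stage as a swappable function. Every implementation below is a deliberately toy stand-in (length-based "embeddings", an echo "LLM"), not a DataStax or Langflow API; the point is the shape of the flow, not the components.

```python
# Sketch of a Langflow-style logic chain: each stage is a plain function,
# and the "flow" is their composition. All stage bodies are toy stand-ins.

def ingest(raw: str) -> list[str]:
    """Split a raw document into chunks (toy: split on sentences)."""
    return [s.strip() for s in raw.split(".") if s.strip()]

def embed(chunks: list[str]) -> list[tuple[str, int]]:
    """Toy embedding: pair each chunk with its length as a 1-D 'vector'."""
    return [(c, len(c)) for c in chunks]

def retrieve(index: list[tuple[str, int]], query: str, k: int = 1) -> list[str]:
    """Return the k chunks whose toy vectors are closest to the query's."""
    q = len(query)
    ranked = sorted(index, key=lambda cv: abs(cv[1] - q))
    return [c for c, _ in ranked[:k]]

def generate(context: list[str], query: str) -> str:
    """Stand-in for an LLM call: echo the retrieved context."""
    return f"Answer to {query!r} using context: {context}"

# Wiring the flow: any stage (e.g. the embedder) can be swapped without
# touching the others, which is the handoff problem the platform targets.
index = embed(ingest("Vectors power search. Guardrails add safety. NIM serves models."))
print(generate(retrieve(index, "Which part adds safety?"), "Which part adds safety?"))
```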
2. Multimodal AI for Unstructured Data
Enterprises often grapple with unstructured multimodal data—from PDFs and tables to images and audio. Leveraging partners like Unstructured and NVIDIA, the platform extracts insights from multimodal content and connects them to business models.
Alejandro shared a real-world example:
“A student can upload a PDF, take a picture of a math problem, or type a query—our platform processes it all seamlessly.”
Physics Wallah: Scaling AI for 20 Million Students
A prime example of the platform’s real-world impact comes from Physics Wallah, India’s largest edtech platform, serving over 20 million students. Facing a 50x surge in traffic, Physics Wallah needed a scalable AI solution for personalized, real-time learning.
Sandeep Varma, Head of AI at Physics Wallah, shared the results:
“The DataStax AI Platform, built with NVIDIA AI, enables us to manage a 50x surge in traffic with zero downtime. It’s helping us democratize education with GenAI-driven, personalized learning at scale.”
The platform’s real-time vector search and dynamic embedding capabilities ensured Physics Wallah could deliver contextual, AI-powered learning experiences without sacrificing performance.
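The retrieval idea behind that capability can be sketched with cosine similarity over hand-made toy vectors. A real deployment would use an embedding model and an indexed vector store rather than the brute-force scan shown here; the document names are invented for illustration.

```python
import math

# Minimal sketch of vector search: rank stored embeddings by cosine
# similarity to a query embedding. Vectors and document ids are toys.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(store: dict[str, list[float]], query_vec: list[float], k: int = 2) -> list[str]:
    """Return the ids of the k stored vectors most similar to the query."""
    ranked = sorted(store.items(), key=lambda kv: cosine(kv[1], query_vec), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

store = {
    "algebra_notes": [0.9, 0.1, 0.0],
    "physics_lab":   [0.1, 0.9, 0.2],
    "exam_tips":     [0.7, 0.3, 0.1],
}
print(search(store, [1.0, 0.0, 0.0], k=2))
```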
Flexibility Without Lock-In
Many customers express concerns about vendor lock-in, particularly when it comes to the high costs of some AI models. However, the platform addresses these concerns by offering alternatives that are more cost-effective while still maintaining competitive performance. Customers often ask how to compare different embedding providers and whether they need the full capabilities of a more expensive model. The flexibility to build bespoke models or switch between logic chains and model providers allows customers to avoid being locked into one expensive option, optimizing both cost and performance.
In addition to cost savings, customers have shown significant interest in fine-tuning and scaling models, and NVIDIA’s tooling enables much of this. Even though customers use NVIDIA’s tools to reach their desired solution, the resulting model is one they control—trained on their own data and not relying on a third-party service. This provides peace of mind to customers, knowing they have ownership of their models.
The platform’s ease of use is key to ensuring customers remain confident in their decisions. As McClelland explained, the interface provided by Langflow allows for seamless transitions: if a customer begins with NVIDIA’s solution and later decides to switch to another provider, such as OpenAI, the process is straightforward. The ability to make changes quickly and easily gives customers the flexibility they need, ensuring they can adopt the vendors that best fit their use case.
While the platform integrates deeply with NVIDIA AI, DataStax remains committed to flexibility. Enterprises can:
- Adopt pre-trained models (OpenAI, Anthropic) or train in-house models.
- Deploy across cloud providers like AWS, Azure, and Google Cloud, or self-managed environments.
- Use Langflow to swap between models and workflows seamlessly.
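That swap can be sketched as a small provider-agnostic interface: the flow depends only on the interface, and changing vendors is a one-line change at construction time. The provider classes below are hypothetical stand-ins, not real NVIDIA or OpenAI SDK clients.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface the rest of the flow is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class NvidiaNIMModel:
    """Stand-in for a NIM-backed client (not a real SDK)."""
    def complete(self, prompt: str) -> str:
        return f"[nim] {prompt}"

class OpenAIModel:
    """Stand-in for an OpenAI-backed client (not a real SDK)."""
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

def run_flow(model: ChatModel, prompt: str) -> str:
    # The pipeline never names a vendor, so swapping providers
    # doesn't touch this code.
    return model.complete(prompt)

print(run_flow(NvidiaNIMModel(), "hello"))
print(run_flow(OpenAIModel(), "hello"))
```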
Cantarero emphasized the platform’s adaptability:
“Vendor lock-in is definitely a concern for many of our prospects and customers. We operate both as a standalone platform and on top of major providers like Microsoft, Amazon, Google, and NVIDIA. We’ve also partnered with other large LLM and model providers, offering flexibility to our users. NVIDIA is our premier partner, but when it comes to ingestion, for example, we also work closely with Unstructured on ingestion optimization. Our goal is to provide what’s best for your business, regardless of the provider.”
From Production to Transformation
McClelland emphasized the company’s goal of being the first enterprise offering to market as new AI technologies emerge:
“We work closely with market leaders like OpenAI, NVIDIA, and innovative startups like Unstructured to stay ahead. For example, when RAG first emerged, we were the first to provide a commercial RAG app development solution. For large companies, particularly Fortune 100s, this approach has worked because they need something that is licensed, supported, and guaranteed to work.”
McClelland also acknowledged the risk of making the wrong bet:
“The challenge is when the market moves quickly, and by the time you build something, it’s already outdated. That’s why we continuously adjust how we approach the evolving AI landscape, while still supporting customers who are still catching up.”
Cantarero elaborated on the evolving GenAI landscape, noting that many companies are still in the early stages of adoption:
“Right now, a lot of companies are still experimenting with POCs, but next year, we’ll see most companies putting something into production. Next year will be the year of production, and the year after will be the year of transformation. By then, companies won’t think about AI as a separate thing like buying storage or computing power—it’ll just be an integrated part of their processes.”
The DataStax AI Platform, Built with NVIDIA AI, delivers a bold vision for enterprise AI. By simplifying development, integrating multimodal capabilities, and enabling real-time performance at scale, it redefines how businesses build and deploy AI solutions.
As Chet Kapoor, chairman and CEO of DataStax, puts it:
“We’re unlocking unmatched speed of development, helping enterprises innovate at scale. This platform changes the trajectory of enterprise AI.” For enterprises like Physics Wallah, the results are already transformative. As organizations increasingly adopt AI, the DataStax-NVIDIA collaboration offers a clear, unified path to faster, smarter, and more impactful innovation.