Chalk, a San Francisco-based startup building infrastructure for real-time AI inference, has raised $50 million in Series A funding at a $500 million valuation. The round, announced Wednesday, was led by Felicis, with participation from Triatomic Capital and existing investors General Catalyst, Unusual Ventures, and Xfund. As part of the deal, Felicis founder Aydin Senkut will join Chalk’s board.
While most of the AI infrastructure landscape remains focused on training data pipelines and precomputed feature stores, Chalk is betting that the future lies in inference, the critical moment when AI models turn data into decisions.
“AI compute is shifting from training to inference,” said Marc Freed-Finnegan, Chalk’s co-founder and CEO. “That creates entirely new infrastructure needs that existing platforms weren’t built to handle.”
Chalk’s pitch is straightforward: where incumbents like Databricks and Snowflake specialize in batch processing and low-latency access to cached data, Chalk offers millisecond-level performance using fresh data at the time of inference. That capability is proving essential in industries like fintech, identity verification, and clean energy, where decisions often need to be made in real time, not after a scheduled data refresh.
Chalk’s founding team of Freed-Finnegan, Elliot Marx, and Andrew Moreland brings a decade of experience solving large-scale data problems. Marx and Moreland met at Stanford before going on to lead data engineering teams at Palantir and Affirm. Together, they co-founded Haven Money, later acquired by Credit Karma. Freed-Finnegan launched Google Wallet and founded Index, acquired by Stripe and now known as Stripe Terminal.
What brought them back together was a shared frustration: real-time decision-making infrastructure didn’t exist.
“We saw this again and again—companies trying to make decisions in real time but stuck with batch pipelines and legacy feature stores,” said Freed-Finnegan. “No one was meeting the bar. So we built Chalk.”
Real-Time Inference, Not PowerPoint
Chalk’s real-time data platform enables engineers to write features in Python, which are then compiled into high-performance Rust and C++ pipelines. The company claims its Compute Engine can deliver predictions using fresh data in as little as five milliseconds—without the need for manual ETL or delayed feature materialization.
Customers seem to agree. Fintech platforms like MoneyLion use Chalk for real-time fraud detection and loan approvals. Healthcare staffing marketplace Medely relies on it for operational optimization. Identity verification firms like Socure and Doppel have integrated it into their AI pipelines.
“Chalk helps us deliver financial products that are more responsive, more personalized, and more secure,” said Meng Xin Loh, senior technical product manager at MoneyLion. “It’s a direct line from infrastructure to impact.”
For Doppel CTO Rahul Madduluri, Chalk made it possible to combine lightweight heuristics with more complex LLM-driven analysis in a single pipeline. “It lets us serve lightweight heuristics up front and rich LLM reasoning deeper in the stack,” he said. “We detect threats others miss without compromising speed or precision.”
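The tiered pattern Madduluri describes can be sketched as follows. This is a hypothetical illustration of the general idea, not Doppel’s system: the keyword list, thresholds, and the stubbed `llm_classify` function are all assumptions standing in for real scoring logic and a real model call.

```python
# Hypothetical tiered detection pipeline: a cheap heuristic handles confident
# cases fast, and only ambiguous inputs fall through to a slower model call.
# Keywords, thresholds, and llm_classify are illustrative assumptions.

def heuristic_score(domain: str) -> float:
    """Fast lexical check: more suspicious tokens push the score up."""
    suspicious = ("login", "verify", "secure", "paypa1")
    hits = sum(k in domain for k in suspicious)
    return min(hits / 2, 1.0)

def llm_classify(domain: str) -> bool:
    """Stand-in for a slower LLM call reserved for ambiguous cases."""
    return "verify" in domain or "paypa1" in domain  # placeholder logic

def detect(domain: str, lo: float = 0.2, hi: float = 0.8) -> tuple[bool, str]:
    """Return (is_threat, tier_that_decided)."""
    s = heuristic_score(domain)
    if s >= hi:
        return True, "heuristic"   # confident threat, no model call needed
    if s <= lo:
        return False, "heuristic"  # confident clean, no model call needed
    return llm_classify(domain), "llm"  # gray zone: escalate to the model

print(detect("paypa1-login.example"))  # (True, 'heuristic')
print(detect("verify.example"))        # decided by the 'llm' tier
```

The payoff is latency: most traffic is resolved by the cheap front tier, so the expensive reasoning step only runs where it actually changes the answer.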
Beyond performance, customers have noted a stark contrast in experience compared to larger platforms. In a recent LinkedIn post, co-founder Elliot Marx recalled a comment from a prospective customer: “With Databricks, we sat through multiple 1-hour presentations before seeing any actual code. The presenters couldn’t even answer basic product questions. The Chalk team wrote tailored examples in our first session.”
That hands-on approach is deliberate. Chalk embeds machine learning engineers directly into the sales cycle, allowing prospects to spin up custom sandboxes that mirror their actual production schema. “Our sales process feels more like a code review,” Marx wrote. “That’s how we win.”
Chalk enters a market dominated by multi-billion-dollar players. Databricks, last valued at $43 billion, is known for its lakehouse architecture and deep investments in ML lifecycle tooling like MLflow and MosaicML. Snowflake, with a $60 billion market cap, offers a cloud-native data platform for structured and unstructured data, with a growing portfolio of generative AI features under the Cortex umbrella.
But both platforms have their roots in batch analytics and training-oriented workflows. Chalk is betting that real-time inference will become just as foundational, and that its purpose-built architecture gives it an edge.
Infrastructure as a Competitive Advantage
Chalk’s LLM Toolchain, one of its newest additions, unifies structured and unstructured data, enabling use cases that combine raw inputs like screenshots, HTML, and URLs with model-driven predictions. The company also offers native vector search, automated evaluations, and integrations with leading LLM providers.
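As a rough illustration of vector search over unstructured inputs, the sketch below embeds text snippets and ranks them against a query by cosine similarity. The toy character-based `embed` function is a placeholder assumption; a production system like the one described would call a real embedding model and an optimized index rather than a linear scan.

```python
# Minimal vector-search sketch: unstructured text is embedded and ranked by
# cosine similarity alongside structured fields. The hash-like embed() is a
# toy stand-in for a real embedding model, not Chalk's implementation.
import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy deterministic embedding; a real system would call a model."""
    v = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        v[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

docs = [
    {"id": 1, "kind": "html", "text": "reset your password now"},
    {"id": 2, "kind": "url", "text": "quarterly clean energy report"},
]
index = [(d["id"], embed(d["text"])) for d in docs]

query = embed("password reset page")
ranked = sorted(index, key=lambda p: cosine(query, p[1]), reverse=True)
print(ranked[0][0])  # id of the document nearest to the query
```

With a linear scan this is O(n) per query; real deployments swap in an approximate nearest-neighbor index to keep lookups within a millisecond budget.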
Rather than rely on SaaS-style telemetry, Chalk treats its customer support system as its primary analytics engine. Every support request is analyzed for insight: slow responses become optimization targets, unclear documentation is flagged for revision, and regression reports serve as signals for missing test coverage.
“Customer support is our product analytics,” said Freed-Finnegan. “Since we often deploy into customer-controlled environments, we can’t just harvest telemetry. So we treat every Slack message as training data.”
That customer feedback loop also extends to engineering. Support transcripts are parsed by Chalk’s internal models to identify documentation gaps and generate answers to common questions. The team is also building internal tooling that reviews product docs for clarity and coverage.
The company plans to use the new capital to scale its team, expand in New York and San Francisco, and build out a general-purpose compute framework for inference workflows. Freed-Finnegan said the goal is to make real-time decision-making as accessible as batch processing once was.
“What Stripe did for payments, we want to do for inference,” he said. “It should be easy to go from idea to production in a matter of hours.”
For now, Chalk remains tightly focused on millisecond-level inference, customer-first engineering, and a platform designed for a new generation of AI workloads.
“We’re not trying to replace the entire stack,” said Freed-Finnegan. “We’re building the part that makes real-time intelligence actually work.”