From the microscopic intricacies of a tiny worm to the forefront of artificial intelligence, Liquid AI has quickly emerged as one of the most promising AI startups. In a significant milestone, the Cambridge-based company announced a $250 million Series A funding round, led by AMD Ventures, propelling its valuation to over $2 billion. This investment marks a critical step in Liquid AI’s journey as it aims to scale its innovative AI models and integrate them into real-world applications.
Founded in 2023 as a spin-off from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Liquid AI was established by Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus. Their inspiration came not from complex human neural systems but from the humble Caenorhabditis elegans, a one-millimeter worm whose brain, despite containing just 302 neurons and 8,000 synapses, exhibits an impressive capacity for adaptability and efficiency. Ramin Hasani, who now serves as the CEO, described his fascination with the organism: “It can move better than any robotic system that we have.” What started as curiosity led to the creation of liquid neural networks, a novel AI architecture that sets Liquid AI apart in an industry dominated by transformer-based models.
Liquid Neural Networks and LFMs
Liquid AI’s approach diverges sharply from the prevailing methods of artificial intelligence. Traditional transformer-based models, like GPT variants, rely on immense computational resources, hundreds of billions of parameters, and significant amounts of training data. In contrast, Liquid AI has developed Liquid Foundation Models (LFMs), inspired by the dynamic and probabilistic nature of biological neurons. These models can achieve similar or superior performance while operating more efficiently and requiring fewer resources.
The company’s LFMs feature a design grounded in dynamical systems, signal processing, and linear algebra, allowing them to process and adapt to information in real time. Hasani and his team have shown that even with far fewer parameters, liquid neural networks can focus on the key signals in their input, solving tasks with agility and precision. For instance, in tests involving self-driving cars, a liquid neural network with only 19 neurons successfully identified critical driving cues such as the horizon and road edges, outperforming larger traditional models that were often distracted by irrelevant features.
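To make the idea concrete, here is a toy sketch of a liquid time-constant style update in NumPy: the effective decay rate of each neuron depends on the current input, so the network’s dynamics themselves adapt to the signal. The function, the parameter names, and the exact update rule are illustrative assumptions, not Liquid AI’s actual implementation.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, bias, tau, A, dt=0.01):
    """One Euler step of a toy liquid time-constant (LTC) cell.

    The gate f depends on the input, so each neuron's effective
    time constant changes with the signal -- the "liquid" idea.
    """
    # Input-dependent gate (bounded nonlinearity)
    f = np.tanh(W_in @ inputs + W_rec @ x + bias)
    # State decays at a variable rate (1/tau + f) toward the target f * A
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

rng = np.random.default_rng(0)
n, m = 19, 4                      # 19 neurons, echoing the driving demo
x = np.zeros(n)
W_in = rng.standard_normal((n, m)) * 0.1
W_rec = rng.standard_normal((n, n)) * 0.1
bias = np.zeros(n)
tau = np.ones(n)                  # base time constants
A = np.ones(n)                    # steady-state targets

for _ in range(100):              # drive the cell with random inputs
    x = ltc_step(x, rng.standard_normal(m), W_in, W_rec, bias, tau, A)
print(x.shape)  # (19,)
```

Because the decay term is always non-negative and the gate is bounded, the state stays bounded as it integrates the input stream, which is what lets such a small network remain stable while adapting continuously.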
Liquid AI’s models also boast the ability to adapt to new tasks with minimal retraining, demonstrating remarkable generalizability and efficiency. This unique capability has unlocked significant advancements in edge computing and resource-constrained environments.
“I think Ramin Hasani’s approach represents a significant step towards more realistic AI,” noted Kanaka Rajan, a computational neuroscientist at Harvard Medical School. Rajan emphasized that liquid neural networks offer a closer approximation to biological systems, leading to smarter and more efficient learning processes.
The STAR Architecture and Model Efficiency
As part of their advancements, Liquid AI has introduced Scalable Transformer Alternative Representations (STAR), a model architecture aimed at further enhancing efficiency and scalability. STAR models utilize adaptive linear operators that adjust dynamically based on input, rather than relying on static weight matrices. They also incorporate techniques like weight sharing across depth groups and Mixture of Experts (MoE) layers for selective activation, ensuring that only the most relevant components of the model are engaged during inference.
This architectural innovation enables STAR models to achieve near-constant inference time and memory use, even for lengthy inputs. Whereas the memory footprint of conventional transformer-based systems grows linearly with input length, STAR models can handle a 32,000-token context window without a corresponding increase in resources. This makes them particularly effective for tasks like document analysis, audio recognition, and other long-context applications.
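The scaling difference can be illustrated with back-of-the-envelope arithmetic: a transformer’s key-value cache grows linearly with sequence length, while a fixed-size recurrent state does not. The layer counts, dimensions, and byte sizes below are generic assumptions chosen for illustration, not Liquid AI’s published figures.

```python
def kv_cache_bytes(seq_len, n_layers=32, n_heads=32, d_head=128, bytes_per=2):
    """Approximate KV-cache memory for a vanilla transformer.

    Keys and values are stored for every past token, so the
    footprint grows linearly with sequence length."""
    return seq_len * n_layers * n_heads * d_head * 2 * bytes_per  # K and V

def recurrent_state_bytes(state_dim=4096, n_layers=32, bytes_per=2):
    """A fixed-size recurrent state is independent of sequence length."""
    return state_dim * n_layers * bytes_per

for seq_len in (1_024, 32_000):
    kv_gb = kv_cache_bytes(seq_len) / 1e9
    state_mb = recurrent_state_bytes() / 1e6
    print(f"{seq_len:>6} tokens: ~{kv_gb:.1f} GB KV cache vs ~{state_mb:.2f} MB fixed state")
```

Under these assumed dimensions, the transformer’s cache grows roughly 31-fold when the context expands from 1,024 to 32,000 tokens, while the recurrent state stays constant.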
The three key model variants released under the STAR architecture include:
- STAR-1B: A 1.3 billion-parameter model optimized for resource-constrained environments. It outperforms similarly sized transformer-based models.
- STAR-3B: A 3.1 billion-parameter model that surpasses previous 3B, 7B, and 13B models and compares favorably to Phi-3.5-mini while being 18.4% smaller.
- STAR-40B: A 40.3 billion-parameter model featuring MoE layers. While it matches the performance of much larger models, it activates only 12 billion parameters during operation, ensuring high efficiency.
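The selective activation behind such MoE layers can be sketched in a few lines: a gating network scores the experts, and only the top-k are evaluated for each input, so most parameters stay idle during inference. This is a generic top-k MoE sketch under assumed names and shapes, not the routing scheme of the 40B model described above.

```python
import numpy as np

def moe_forward(x, gate_W, experts, k=2):
    """Toy Mixture-of-Experts layer with top-k routing.

    Only the k highest-scoring experts are evaluated, so the
    remaining expert weights contribute no compute for this input."""
    logits = gate_W @ x
    top = np.argsort(logits)[-k:]          # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts
    y = sum(w * (experts[i] @ x) for w, i in zip(weights, top))
    return y, top

rng = np.random.default_rng(1)
d, n_experts = 16, 8
x = rng.standard_normal(d)
gate_W = rng.standard_normal((n_experts, d))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]

y, active = moe_forward(x, gate_W, experts, k=2)
print(len(active), "of", n_experts, "experts active")  # 2 of 8 experts active
```

The same principle, applied at scale, is how a model can carry tens of billions of parameters while activating only a fraction of them per token.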
These innovations reflect Liquid AI’s commitment to challenging the status quo of AI architectures by offering models that are not only powerful but also computationally efficient.
Real-World Applications and Strategic Partnerships
The Liquid Engine, developed by Liquid AI, is designed to empower businesses with tailored and efficient AI solutions. At its core are Liquid Foundation Models (LFMs), which redefine memory efficiency, explainability, and scalability without compromising quality. By enabling custom AI model design and training, the Liquid Engine adapts to the unique needs of organizations, whether they require compact, resource-efficient models or complex systems for global-scale challenges. Its real-world applications range from autonomous drones detecting wildfires to genome-based patient analysis and anomaly detection in manufacturing. Supporting multimodal capabilities such as speech-to-text, vision, and DNA sequence processing, and offering models that scale from 1.3 billion to 40 billion parameters, the Liquid Engine bridges cutting-edge AI research and practical, cost-effective deployment across industries.
Liquid AI’s models are designed for versatility, catering to various industries and tasks. LFMs and STAR models excel at handling sequential data, making them suitable for applications such as video processing, time-series analysis, and natural language tasks. The company has already demonstrated its models in areas ranging from self-driving cars and drones to enterprise workflows.
With its latest funding, Liquid AI plans to accelerate the deployment of its technology across sectors like telecommunications, financial services, consumer electronics, e-commerce, and biotechnology. The company’s partnership with AMD, a leader in GPU and CPU technology, further highlights its readiness to scale. “Liquid AI’s unique approach to developing efficient AI models will push the boundaries of AI, making it far more accessible,” said Mathew Hein, Senior Vice President and Chief Strategy Officer of Corporate Development at AMD. “We are thrilled to collaborate with Liquid AI to train and deploy their AI models on AMD Instinct GPUs and support their growth through this latest funding round.”
Liquid AI’s roadmap includes scaling its compute infrastructure, enhancing product readiness for edge and on-premise deployments, and expanding its AI offerings across different data modalities. The company is also working on models that will address domain-specific challenges, including those in financial analytics and healthcare diagnostics.
A Valuation Reflecting Innovation and Momentum
The $250 million Series A funding, led by AMD Ventures alongside participation from OSS Capital and PagsGroup, signals strong confidence in Liquid AI’s unique vision and technological achievements. With its valuation surpassing $2 billion, the company now stands as one of the youngest unicorns in the AI sector.
Liquid AI’s success reflects a broader industry shift towards more efficient and explainable AI systems. Unlike the “black box” nature of many existing models, liquid neural networks offer better transparency in decision-making. This quality is especially important for enterprises and industries that require reliable, interpretable AI solutions. Hasani emphasized the company’s commitment to addressing these concerns: “We are getting into stages where these models can alleviate a lot of the socio-technical challenges of AI systems.”
Looking ahead, Liquid AI has ambitious goals. The Series A funding will enable the company to expand its team across research, engineering, and operations, ensuring it can deliver on its promise to scale LFMs and STAR models efficiently. Hasani and his team are also focused on making AI accessible to businesses of all sizes, integrating their products into mission-critical workflows.
As Liquid AI continues to grow, it faces the challenge of proving that its new approach can outperform and replace well-established transformer-based systems. While liquid neural networks excel at tasks involving temporal and sequential data, adapting them for other domains may require additional effort. Yet, with its science-driven foundation and growing list of industry partners, the company is well-positioned to showcase the real-world impact of its technology.
“At Liquid, our mission is to build the most capable and efficient AI system at every scale,” Hasani stated. “We are proud that our new industry-leading partners trust our mission; together, we plan to unlock sovereign AI experiences for businesses and users.”