Every visionary creation begins with a bold question. For Odyssey, a startup spearheaded by self-driving car veterans Oliver Cameron and Jeff Hawke, the question was this: What if creators could bring Hollywood-grade cinematic worlds to life with the power of generative AI? The answer lay not just in artificial intelligence but in a unique synthesis of groundbreaking technology, human exploration, and a new way of thinking about data.
A Journey from Self-Driving Cars to World-Building AI
Cameron and Hawke, armed with years of experience from companies like Cruise, Tesla, and Waymo, had spent their careers perfecting systems that navigated the chaos of city streets with superhuman precision. These systems relied on vast amounts of real-world data—collected by high-fidelity, multi-sensor arrays—to teach machines to safely maneuver through dynamic 3D spaces. This expertise became the foundation for their next venture: generative AI models capable of building 3D worlds from scratch.
But Odyssey’s ambitions stretch far beyond self-driving. Instead of teaching cars to navigate existing worlds, they aim to enable creators to generate entire worlds, complete with 3D control over scenery, characters, lighting, and motion. This bold vision demanded a completely new approach to data collection and model training.
Human-Powered Data Collection
While self-driving cars collect millions of data points during every journey, their scope is inherently limited to what wheels can access. Odyssey saw this as a challenge—and an opportunity. They needed to go beyond city streets, into forests, caves, beaches, glaciers, and architectural wonders that define the planet’s rich diversity.
To achieve this, the team developed an advanced data-capture system that is as mobile as the human body. Picture a lightweight backpack weighing just 25 pounds, equipped with six high-resolution cameras, two lidar sensors, and an inertial measurement unit. Designed in collaboration with optical imaging leader Mosaic, this device captures 360-degree views in stunning 13.5K resolution, complete with physics-accurate depth information.
Think of it as Google Street View—but for the untrodden paths of the world. From urban landscapes to natural wonders, this system allows human operators to venture where cars can’t, ensuring that every angle and fine detail needed for generative world-building is captured.
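For a sense of what a single synchronized capture from such a rig might contain, here is a minimal, hypothetical schema. The field names, sensor counts, and the fusion helper are assumptions based on the description above, not Odyssey's actual data format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureFrame:
    """One synchronized sample from a hypothetical backpack rig with six
    cameras, two lidar units, and an IMU (an illustrative schema only)."""
    timestamp_ns: int
    images: list[np.ndarray]        # six RGB frames, one per camera
    lidar_points: list[np.ndarray]  # two (N, 3) point clouds in sensor frame
    imu_accel: np.ndarray           # (3,) linear acceleration, m/s^2
    imu_gyro: np.ndarray            # (3,) angular velocity, rad/s

    def fused_point_cloud(self, extrinsics: list[np.ndarray]) -> np.ndarray:
        """Merge the lidar sweeps into a single world-frame cloud, given a
        4x4 sensor-to-world transform for each unit."""
        clouds = []
        for pts, transform in zip(self.lidar_points, extrinsics):
            homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
            clouds.append((homo @ transform.T)[:, :3])
        return np.vstack(clouds)
```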
Training Models Inspired by Real-World Complexity
Odyssey’s approach to generative AI is rooted in a lesson its founders learned from self-driving cars: synthetic data, no matter how abundant, cannot replicate the richness of real-world interactions. Just as autonomous vehicles rely on real-world datasets for safety and performance, generative AI models require real-world 3D data to create worlds that feel alive and authentic.
The startup builds on cutting-edge techniques like Neural Radiance Fields (NeRFs) and Gaussian Splatting, each of which tackles a different slice of the 3D modeling problem. NeRFs are excellent for photorealistic reconstruction, but they fall short on editability and relighting. Gaussian Splatting, on the other hand, excels at representing complex textures like fur but struggles with flat surfaces. Odyssey is pushing past both by developing a new 3D representation that merges the strengths of these techniques, creating models that are editable, scalable, and capable of stunning visual fidelity.
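To make that trade-off concrete, the core of a NeRF is a volume-rendering step that composites color along each camera ray. The sketch below implements that compositing in plain NumPy as a simplified illustration of the published NeRF formulation; it is not Odyssey's code.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray, following the standard
    NeRF volume-rendering equation (a simplified sketch).

    densities: (N,)   volume density sigma at each sample
    colors:    (N, 3) RGB predicted at each sample
    deltas:    (N,)   distance between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)     # opacity of each sample
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                       # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0) # final pixel color
```

Because the scene exists only in the densities and colors a trained network predicts along rays, there is no explicit object to move or light source to swap, which is exactly the editability and relighting gap described above.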
An $18 Million Push Toward the Future
Odyssey’s groundbreaking vision has not gone unnoticed. The startup recently secured an $18 million Series A funding round led by EQT Ventures, with participation from GV and Air Street Capital. This brings their total funding to $27 million—a milestone that will allow them to scale their operations in California and beyond.
The fresh capital fuels their ambition to expand data collection efforts, capturing diverse landscapes and architectural marvels that will serve as the training ground for their generative AI. The company plans to venture into other states and countries, creating a global repository of rich, multimodal 3D data.
Bridging the Gap Between Data and Creativity
The implications of Odyssey’s work are profound. Their generative models promise to revolutionize filmmaking, gaming, and more by giving creators unprecedented control over digital worlds. Imagine a filmmaker generating a lifelike alien planet or a game designer crafting an entire ecosystem, all with intuitive tools powered by Odyssey’s AI.
Odyssey believes that to unlock the full potential of generative AI, you need more than just algorithms—you need the world itself as a training ground. “We think it will be impossible for generative models to generate Hollywood-grade worlds that feel alive without training on a vast volume of rich, multimodal real-world 3D data,” the company stated in a recent blog post.
The startup is also redefining the very building blocks of 3D representation. Traditional polygon meshes are efficient to render but struggle with elements like vegetation and hair; NeRFs and splats, while innovative, are limited when it comes to editing and light modeling. Odyssey aims to bridge these gaps with a unified 3D representation that integrates seamlessly into existing tools, supports direct editing, and scales computationally for dynamic and static scenes alike.
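As a rough illustration of why an explicit representation is easier to edit than an implicit one, here is what a single Gaussian Splatting primitive typically stores. The fields loosely follow the original 3D Gaussian Splatting paper; Odyssey's unified representation has not been published, so none of this should be read as their design.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianPrimitive:
    """One splat in a 3D Gaussian Splatting scene (illustrative only).

    Because the scene is an explicit list of primitives, individual elements
    can be moved, deleted, or recolored directly, unlike a NeRF, whose scene
    lives implicitly in network weights.
    """
    mean: np.ndarray      # (3,) center of the Gaussian in world space
    scale: np.ndarray     # (3,) per-axis extent
    rotation: np.ndarray  # (4,) unit quaternion orienting the Gaussian
    color: np.ndarray     # (3,) RGB (or spherical-harmonic coefficients)
    opacity: float        # blending weight in [0, 1]
```

Editing then becomes a matter of modifying or deleting entries in a list of such primitives, whereas integrating them cleanly with meshes, relighting, and dynamic scenes is the harder problem a unified representation is meant to solve.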
A Future Built on Exploration
With California as their first stop, Odyssey’s journey has only just begun. The startup is actively hiring researchers in the Bay Area and London, inviting those passionate about pushing the boundaries of AI and creativity to join their mission.
Odyssey’s story is not just about funding or technology; it’s about reimagining the way we create. By blending human exploration with AI innovation, they are charting a course toward a future where imagination knows no bounds—and cinematic worlds come alive at the click of a button.