DEVA-3

Published by: The AI Frontier Reading Time: 6 minutes

The model hallucinated cars sliding, pedestrians walking cautiously, and brake lights flashing. It had never seen snow, but it had learned friction and low-traction behavior from dry roads: it generalized the concept of slipperiness.

Have you worked with video prediction models or world models? Let me know in the comments if you think DEVA-3 is overhyped or under-discussed.

Disclaimer: This blog post discusses a hypothetical or emerging model architecture for illustrative purposes, based on current research trends in world models (e.g., DreamerV3, UniSim, GAIA-1). No official "DEVA-3" product from a specific company is referenced.

Current AVs rely on predictive models that assume other drivers are rational. DEVA-3 simulates irrational behavior: it can predict the "jerk" who cuts across three lanes without a blinker because it has seen that episode 10,000 times in its training data. Wayve and Ghost Autonomy are rumored to be testing DEVA-3 variants on public roads in London right now.
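To make the contrast concrete, here is a minimal toy sketch of the idea, not DEVA-3's actual method (which is unpublished): instead of producing a single "rational" rollout for a neighboring car, the predictor samples many futures from a learned mixture of behavior modes, so rare erratic maneuvers appear in the predicted set at roughly their training-data frequency. All names, mode probabilities, and the lane geometry below are invented for illustration.

```python
import random

# Hypothetical behavior modes with frequencies a model might have
# learned from driving logs. "triple_cut" is the rare erratic driver.
BEHAVIOR_MODES = {
    "lane_keep":  {"prob": 0.90, "lane_shift_per_step": 0.0},
    "slow_merge": {"prob": 0.08, "lane_shift_per_step": 0.5},
    "triple_cut": {"prob": 0.02, "lane_shift_per_step": 1.5},  # the "jerk"
}

def sample_mode(rng):
    """Pick a behavior mode according to its learned frequency."""
    r, acc = rng.random(), 0.0
    for name, mode in BEHAVIOR_MODES.items():
        acc += mode["prob"]
        if r < acc:
            return name
    return "lane_keep"

def rollout(lane, steps, rng):
    """Simulate one sampled future: the neighbor's final lane index."""
    mode = BEHAVIOR_MODES[sample_mode(rng)]
    for _ in range(steps):
        lane += mode["lane_shift_per_step"]
    return min(round(lane), 3)  # clamp to a 4-lane road (lanes 0..3)

rng = random.Random(0)
futures = [rollout(lane=0, steps=2, rng=rng) for _ in range(1000)]

# Most sampled futures are lane-keeping, but the three-lane cut shows
# up in a small fraction, so a planner can hedge against it instead of
# assuming the single most-rational trajectory.
print(futures.count(3) / len(futures))
```

A single-rollout planner would only ever see the lane-keeping outcome; sampling the mixture surfaces the low-probability cut-across as a distinct future worth planning around.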

Imagine an NPC that doesn't follow a script. In a sandbox game, a DEVA-3-powered NPC could watch you build a fortress, predict that you will attack at dawn, and fortify its own walls accordingly, without a single line of explicit logic code.

The "Aha Moment" from the Research Paper

I spoke with a researcher on the team (who requested anonymity due to an upcoming IPO). He told me about their internal "Genesis Test."

It is called .

For the last decade, the holy grail of robotics and autonomous driving has been a simple question: How do we teach machines to predict the future?