LLMs Don't Know Anything
Here's the thing about ChatGPT, Claude, Gemini, and every other chatbot: they're autocomplete engines running on steroids. They've digested the internet and learned which words tend to follow which other words. The output sounds smart, but there's no model of the world behind it.
Ask an LLM to describe a ball falling off a table and you'll get a perfectly written paragraph. But the model has zero concept of gravity. It doesn't understand that unsupported objects fall. It just knows the sentence "the ball fell" shows up a lot after "the ball rolled off the edge." That gap between fluency and understanding is exactly what LeCun has been screaming about for years.
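To make the "autocomplete" point concrete, here is a deliberately toy sketch (not how a real LLM works internally, which uses learned neural weights rather than raw counts): a bigram model that "knows" only which word tends to follow which in its training text.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the internet".
corpus = "the ball rolled off the edge and the ball fell to the floor".split()

# Count, for each word, which words follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # the word most often seen after "the"
```

The model produces fluent-looking continuations of "the ball" without any concept of gravity, tables, or falling; it is pattern frequency all the way down, which is the gap the article is describing.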
JEPA: What AMI Is Actually Building
AMI's technical backbone is JEPA, Joint Embedding Predictive Architecture, a framework LeCun published in 2022. Instead of predicting the next word or pixel, JEPA predicts abstract representations of future states. Think of it as an AI that builds an internal physics engine rather than memorizing descriptions of physics.
By operating in embedding space instead of raw data, JEPA sidesteps the nightmare of pixel-level prediction and zeros in on what actually matters: cause and effect, spatial relationships, object behavior, structural reasoning. The kind of stuff you need if you want a robot to navigate a warehouse or an AI to evaluate whether an aircraft component design will hold up under stress.
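The core JEPA move, predicting in embedding space rather than raw-data space, can be sketched in a few lines. This is a hypothetical toy illustration of the idea, not AMI's implementation; the encoder, weights, and dimensions are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Project a raw observation into an abstract embedding."""
    return np.tanh(x @ W)

D_RAW, D_EMB = 64, 8
W_enc = rng.normal(0, 0.1, (D_RAW, D_EMB))   # shared encoder weights
W_pred = rng.normal(0, 0.1, (D_EMB, D_EMB))  # predictor weights

context = rng.normal(size=D_RAW)  # e.g. the scene at time t
future = rng.normal(size=D_RAW)   # e.g. the scene at time t+1

z_ctx = encoder(context, W_enc)
z_fut = encoder(future, W_enc)

# Key idea: predict the *embedding* of the future state, not its pixels.
z_hat = z_ctx @ W_pred
loss = np.mean((z_hat - z_fut) ** 2)  # error lives in embedding space
print(f"embedding-space prediction error: {loss:.4f}")
```

Because the loss is computed between 8-dimensional embeddings instead of 64-dimensional raw observations, the model is free to ignore unpredictable surface detail and spend its capacity on the structure that carries over from one state to the next.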
The Money and the People
The $1.03B round values AMI at $3.5 billion. Backers include Bezos Expeditions, Eric Schmidt, Tim Berners-Lee, Mark Cuban, NVIDIA, and Samsung. The team is stacked: Alexandre LeBrun (founder of medical AI startup Nabla) runs day-to-day operations as CEO, Saining Xie leads research as chief science officer, and LeCun himself chairs the board.
They're setting up shop across Paris, New York, Montreal, and Singapore.
Don't Expect a Product Anytime Soon
LeCun has been blunt: year one is pure research. No product roadmap, no quarterly demos. The target domains (manufacturing, aerospace, robotics, healthcare) are fields where understanding physics isn't optional and where language-only models hit a wall fast.
Whether JEPA-based world models actually work at scale is still an open question. But LeCun has a billion dollars and decades of being right about things other people dismissed. He called the deep learning revolution before anyone else took it seriously. Now he's saying the LLM era is a detour, not a destination.
This time, he's putting his money where his mouth is.