We are on track to a world with billions of robots
To get there we need robots to be imbued with an understanding of the physical world
They need to be able to adapt to the infinite combinations of object and environment states we see in the real world
@rhoda_ai_ did this by training their model on 100 million+ hours of video data
These are the video demonstrations you should be impressed by, not the videos of robots dancing by replaying motion capture
Robots are virtual intelligence embodied in physical form, but we will also have physical beings traversing into virtual worlds
Dario Amodei, CEO of Anthropic
"We do not see [AI] hitting a wall. This year will have a radical acceleration that surprises everyone." Exponentials catch people off guard. "We are at the precipice of something incredible. We need to manage it the right way."
On where markets are wrong: "It's already big and it will get 1 million times bigger."
On revenue scale: Anthropic was at ~$100M run rate 2 years ago. Now at $19B run rate.
It is inconsistent to believe AGI will happen and not believe there will be permanent, widespread labor displacement
Counterarguments are always rooted in reasoning by analogy, not first principles
There are a lot of smart people that are going to get Thanksgiving turkey'd

The last crypto bubbles were an extreme misallocation of capital, especially against the backdrop of the 4th Industrial Revolution enabled by AI
The bubbles were driven by excess liquidity seeking world-changing technology. That technology has now arrived in the form of AI and robotics, which will drive and capture real economic value. Given the effects on productivity, efficiency, and development speed, we will see a new normal of double-digit economic growth across both new companies and tangential incumbents
It is not too late to get involved, the 4th Industrial Revolution is just starting
1/ General-purpose robotics is the rare technological frontier where the US / China started at roughly the same time and there's no clear winner yet.
To better understand the landscape, @zoeytang_1007, @intelchentwo, @vishnuman0 and I spent the last ~8 weeks creating a deep dive
2025 was an insane year for robotics research
Long-standing model architecture and training challenges were solved, and major progress was made on data collection techniques, understanding data quality, and data recipes. This gives Physical AI companies the confidence to finally start investing in large-scale data collection.
You saw companies like Figure, Dyna, and PI reach >99% success rates in real-life deployments across diverse settings by leveraging RL innovations. Many frameworks were developed for self-improving and self-recovering robot models. Researchers figured out how to prevent overfitting in VLA fine-tuning while retaining generalist capabilities, which means we can build toward generalist models by merging specialist models.
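One simple way to picture "merging specialist models into a generalist" is parameter averaging of fine-tuned checkpoints (a "model soup" style merge). This is a minimal illustrative sketch, not the method of any specific VLA; the state dicts and layer names here are toy stand-ins.

```python
# Hedged sketch: build a generalist checkpoint by (weighted) averaging
# the parameters of specialist checkpoints fine-tuned from one base model.
# Keys and values are toy stand-ins for real tensor state dicts.

def merge_state_dicts(specialists, weights=None):
    """Average parameter dicts from specialist checkpoints, key by key."""
    n = len(specialists)
    weights = weights or [1.0 / n] * n  # default: uniform average
    merged = {}
    for key in specialists[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, specialists))
    return merged

# Toy example: two "specialists" with scalar parameters
pick = {"layer.w": 1.0, "layer.b": 0.0}
place = {"layer.w": 3.0, "layer.b": 2.0}
generalist = merge_state_dicts([pick, place])
```

In practice this only works when the specialists share a common base model so their weights live in a compatible loss basin; that caveat is why the overfitting-prevention result matters.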
Robots can also move much more agilely thanks to methods like action chunking and FAST tokenization. We now see robots exhibit smooth full-body control at human speeds rather than slow, choppy movement.
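The core idea behind action chunking: instead of one model call per control step, the policy predicts a chunk of several future actions that are executed before re-planning, cutting inference frequency and smoothing motion. A minimal sketch, with a stub policy standing in for a real VLA:

```python
# Hedged sketch of action chunking. The "policy" here is a toy stub that
# ramps a single joint position; a real system would call a learned model.

CHUNK_SIZE = 8  # actions predicted per inference call

def policy(observation):
    """Stub policy: returns a chunk of CHUNK_SIZE future actions."""
    base = observation["joint_pos"]
    return [base + 0.01 * (t + 1) for t in range(CHUNK_SIZE)]

def run_episode(steps=24):
    obs = {"joint_pos": 0.0}
    executed, chunk = [], []
    for _ in range(steps):
        if not chunk:                   # re-plan only when the chunk is spent
            chunk = policy(obs)
        action = chunk.pop(0)
        obs = {"joint_pos": action}     # pretend the robot tracked the action
        executed.append(action)
    return executed

traj = run_episode()
```

With 24 control steps the policy is invoked only 3 times instead of 24, which is the latency win that makes human-speed motion feasible.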
Roboticists showed how to effectively fuse multi-modal sensor data for huge policy improvements. Integrating vision, language, and tactile data was challenging, but doing so opens the door to many contact-rich tasks that require a granular sense of force. Force awareness also solves common issues like visual occlusions.
System 1/System 2 architectures were hardened to handle long-horizon planning and task orchestration, enabling robots to perform jobs that require a series of tasks. Gemini Robotics-ER 1.5 introduced Chain-of-Thought reasoning to physical agents, allowing them to parse constraints and evaluate semantic safety.
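The System 1/System 2 split can be sketched as a slow deliberative planner that decomposes a long-horizon job into subtasks, with a fast reactive controller executing each one. Everything below is illustrative (the job names and plan are hardcoded for the demo), not any vendor's actual architecture:

```python
# Illustrative System 1 / System 2 split for long-horizon jobs.

def system2_plan(job):
    """Slow deliberative planner: job -> ordered subtasks (hardcoded demo)."""
    plans = {
        "set the table": ["fetch plates", "place plates",
                          "fetch cutlery", "place cutlery"],
    }
    return plans.get(job, [job])

def system1_execute(subtask):
    """Fast reactive controller: stand-in for a low-level visuomotor policy."""
    return True  # report success for the demo

def run_job(job):
    log = []
    for subtask in system2_plan(job):
        ok = system1_execute(subtask)
        log.append((subtask, ok))
        if not ok:
            break  # a real system would re-plan here
    return log

log = run_job("set the table")
```

The design point is the interface: System 2 reasons at subtask granularity and only re-plans on failure, while System 1 runs at control frequency.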
Memory advancements allowed robots to maintain long-term spatio-temporal reasoning, breaking the "memory wall" with brain-inspired algorithms. NVIDIA's ReMEmber used memory-based navigation, while Titans + MIRAS enabled test-time memorization for sustained performance.
Advancements in the foundation model space also continue to compound progress in robotics. Better VLMs mean VLAs with better spatial understanding, plus data labeling and processing pipelines whose throughput can increase massively. World models are starting to show promise in data augmentation and policy evaluation.
2025 gave us a small taste of what data scale could do. A glimpse into the future with robots exhibiting emergent intelligence such as zero-shot affordance mapping, visual force sensitivity, and all sorts of general physical reasoning
In 2026 we get to experience physical AI at 100x the data scale
It is an imperative for America to start “overproducing” robots ASAP
We are quickly reaching the point where the AI is good enough, at which point demand for robots jumps near-instantly from tens of thousands to hundreds of millions
But scaling robot production isn’t as instantaneous as scaling LLM instances. It will take years to scale production capacity to the level of just millions.
We desperately need more infrastructure for US robot manufacturing. The time to finance this is NOW
Generalized Autonomy is the end game for robotics, but before we get there we will have a phase of remote immigration
Full disclosure. I just bought some Aster today, using my own money, on @Binance.
I am not a trader. I buy and hold.
Robot models don't have the benefit of internet scale data that LLMs do
Tesla, Google, Meta are all massively ramping up creation of proprietary robot training data
Token incentives could potentially create the largest open source robot training data set
Cool work by @virtuals_io
Robotics strategy is a major long term priority for every major tech CEO in the world right now
Imo maybe only Google will be able to pull off the robot foundation model component because they’ve spent a decade+ working on it with significant resources
None have the DNA for commercializing complex hardware. The innovation and growth on this end will come from new companies like @Apptronik like you see in this video