Why We Must Stop Building Fake Warehouses
For years, we have treated robots like toddlers in a dark room. We build massive, expensive cardboard cities just to teach a mechanical arm how to grab a soda can. To fix this, we need to stop obsessing over the physical and start perfecting the digital. Antioch, a new startup in New York, believes the answer lies in simulation.
They want to give engineers a “cursor” for the real world.
By creating virtual spaces that feel exactly like reality, we can train machines to act before they ever touch a floor.
This vision for digital-first development is now attracting significant financial backing. On April 16, 2026, Antioch announced an $8.5 million seed round to solve the “sim-to-real” gap. This gap is the reason your vacuum cleaner still gets stuck on a rug. Computer models are often too clean and too perfect.
In the real world, floors are slippery, light flickers, and dust interferes with sensors.
The round, led by venture firm A* and Category Ventures, values the company at $60 million.
The team includes experts from Meta Reality Labs and Google DeepMind.
This investment is directed at overcoming a critical bottleneck: the scarcity of high-quality training scenarios. We are stuck in a data drought. To train a self-driving car, you usually have to drive millions of miles and hope something interesting happens.
That is a recipe for failure.
Instead, we should be using “world models” like the ones Waymo uses to test its vehicles.
Antioch wants to make this tech available to everyone, not just the giants.
If a small company can simulate a million crashes in an hour, they can build a safer robot by lunch.
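To make that "million crashes by lunch" idea concrete, here is a minimal, purely illustrative Python sketch of domain randomization: instead of driving real miles, you sample huge batches of slightly different virtual worlds (slick floors, flickering lights, noisy sensors) and count how often a policy fails. Every name in it (Scenario, simulate_crash) is a made-up placeholder, not Antioch's or Waymo's actual API.

```python
# Hypothetical sketch of domain randomization over many simulated scenarios.
# Names are invented for illustration; this is not any vendor's real API.
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    floor_friction: float    # slippery vs. grippy floors
    light_flicker_hz: float  # flickering lights confuse cameras
    sensor_noise: float      # dust and grime on the lidar or camera

def random_scenario() -> Scenario:
    """Sample one messy, imperfect version of the world."""
    return Scenario(
        floor_friction=random.uniform(0.2, 1.0),
        light_flicker_hz=random.uniform(0.0, 120.0),
        sensor_noise=random.uniform(0.0, 0.05),
    )

def simulate_crash(scenario: Scenario) -> bool:
    """Placeholder physics rollout: returns True if the virtual robot failed."""
    # A real simulator would roll out full dynamics here; this stub just makes
    # failure more likely when the floor is slick and the sensors are noisy.
    risk = (1.0 - scenario.floor_friction) * 0.5 + scenario.sensor_noise * 10.0
    return random.random() < risk

# "A million crashes before lunch": run a huge batch of randomized worlds
# and count how often the policy falls over.
scenarios = [random_scenario() for _ in range(1_000_000)]
failures = sum(simulate_crash(s) for s in scenarios)
print(f"failure rate: {failures / len(scenarios):.2%}")
```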
The Secret Math Inside the Machine
To bypass the data drought, the technology must evolve beyond simple graphics to master the intricacies of physical interaction. Behind the scenes, Antioch is working on something called neural rendering. Most old simulations look like cartoons because they use simple polygons.
Antioch uses tech similar to 3D Gaussian Splatting to turn photos of real rooms into 3D spaces that a robot can understand.
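For the curious, here is a toy sketch of the kind of primitive that sits underneath 3D Gaussian Splatting: the scene is stored as millions of small, blurry, colored Gaussians fitted to real photos, and a renderer projects and blends them instead of drawing polygons. The snippet below only evaluates one Gaussian's falloff at a 3D point; it is a simplified illustration, not Antioch's implementation.

```python
# Toy 3D Gaussian "splat": position, covariance (shape/orientation), color,
# and opacity. A real renderer projects these to the image plane and
# alpha-composites them front to back; here we only query one blob's density.
import numpy as np
from dataclasses import dataclass

@dataclass
class GaussianSplat:
    mean: np.ndarray   # (3,) center of the blob in world space
    cov: np.ndarray    # (3, 3) covariance matrix: size and orientation
    color: np.ndarray  # (3,) RGB
    opacity: float     # how strongly this blob occludes what's behind it

def density_at(splat: GaussianSplat, point: np.ndarray) -> float:
    """Unnormalized Gaussian falloff of one splat at a query point."""
    d = point - splat.mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(splat.cov) @ d))

# One splat roughly the size of a soda can, queried at two points.
can = GaussianSplat(
    mean=np.array([0.0, 0.0, 0.1]),
    cov=np.diag([0.03, 0.03, 0.12]) ** 2,
    color=np.array([0.8, 0.1, 0.1]),
    opacity=0.9,
)
print(density_at(can, np.array([0.0, 0.0, 0.1])))  # at the center: ~1.0
print(density_at(can, np.array([0.5, 0.0, 0.1])))  # far away: ~0.0
```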
The company is also focusing on “haptic simulation,” which tries to model exactly how it feels to touch different surfaces.
They want to make sure a robot knows the difference between a glass vase and a plastic cup before it ever picks one up.
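As a rough illustration of what haptic simulation buys you, the sketch below looks up assumed material properties (friction, stiffness, crush force) and picks a grip force that holds an object without shattering it. The property table and the heuristic are invented for this example; they are not Antioch's model.

```python
# Hypothetical material table and grip heuristic: squeeze hard enough that the
# object doesn't slip, but stay well below the force that would crush it.
from dataclasses import dataclass

@dataclass
class Material:
    friction: float             # how much the surface resists slipping
    stiffness_n_per_mm: float   # how hard it pushes back when squeezed
    crush_force_n: float        # force at which it cracks or crumples

MATERIALS = {
    "glass_vase":  Material(friction=0.4, stiffness_n_per_mm=900.0, crush_force_n=40.0),
    "plastic_cup": Material(friction=0.5, stiffness_n_per_mm=60.0,  crush_force_n=120.0),
}

def grip_force(name: str, object_weight_n: float, safety_margin: float = 0.5) -> float:
    """Enough force to beat gravity and friction, capped far below the crush limit."""
    m = MATERIALS[name]
    needed = object_weight_n / m.friction      # minimum force so the object doesn't slip
    ceiling = m.crush_force_n * safety_margin  # never exceed half the crush force
    return min(max(needed, 1.0), ceiling)

print(grip_force("glass_vase", object_weight_n=6.0))   # 15.0 N, under the 20 N ceiling
print(grip_force("plastic_cup", object_weight_n=2.0))  # 4.0 N, far from its crush limit
```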
What Is Coming Next for Antioch
These advancements in neural rendering are paving the way for a more integrated approach to robotics. Expect to see Antioch’s tools integrated directly into the hardware design process. Instead of building a robot and then training it, engineers will use Antioch to test a thousand different robot shapes in a virtual wind tunnel.
By 2027, we might see “zero-shot” deployment, where a robot moves from a digital file to a factory floor and works perfectly on day one. This will collapse the time it takes to automate a warehouse from months to minutes.
The Argument for Keeping It Real
However, the transition from digital training to physical execution faces significant skepticism from those who value the unpredictability of the real world. Some critics argue that simulation will never be enough. They claim that the “long tail” of reality—the weird, one-in-a-billion events—cannot be predicted by code. If a cat jumps on a robot while it is holding a chainsaw, a simulation might not have a plan for that. These skeptics believe we are creating a generation of “fragile” AIs that look brilliant in a lab but crumble in a messy kitchen.
Relying too heavily on synthetic data could also lead to "model collapse," where an AI keeps training on its own output until its grip on reality degrades and it becomes useless.
The Debate Over Digital Labor and Surveillance
The debate over digital accuracy also brings to light the human costs of harvesting data for these simulations. To make simulations realistic, companies are surveilling factory workers to capture how they move. Is it right to watch workers just to replace them? And who owns the "movement data" of a human body?
- Fact: Research from the University of Oxford suggests that over 40% of jobs could be automated, but only if we have enough data to train the machines.
- Fact: The IEEE is already debating standards for “human-in-the-loop” data collection to prevent exploitation.
- Debate: If a robot learns to weld by watching a master welder, does the welder deserve a royalty for every hour that robot works?
We are essentially downloading human skill into a cloud.
It is a heist of the hands.
The Reality Check for Digital Dreamers
These ethical and technical questions ultimately converge on a set of broader, philosophical uncertainties regarding the future of AI. If we live in a simulation, who is the one clicking the mouse?
- If a robot learns to walk in a world with no gravity, is it still a robot?
(Answer: No, it is just a very expensive paperweight.)
- If we can simulate a human brain, do we have to pay it a salary?
(Answer: Only if it starts a union in the cloud.)
- What happens when the simulation is more realistic than the real world?
(Answer: We probably won’t notice until we try to walk through a digital wall.)
Further Reading for the Brave
- On the limits of physics: Nature Portfolio
- On the ethics of automation: The Brookings Institution
- On the future of synthetic data: MIT Technology Review
