Your Robots Are Still Blind — And It's Costing You

The question isn't whether vision-based automation can work; it's whether manufacturers can afford to keep competing without it.

An Inbolt and Fanuc integration allows robots to operate with real-time 3D vision and adaptive trajectory correction, even with part variation or imperfect environments.

For years, the phrase "computer vision" in manufacturing drew more eyerolls than excitement. In theory, it meant robots would be able to see and respond like humans. In practice, early systems struggled with basic factory realities — shifting light, reflective parts, and the fact that no two pallets are ever exactly the same. It's not that manufacturers didn't want it to work, but that early 2D vision systems couldn't keep up with the chaos of a production line.

Early 2D vision systems quickly showed their limits. They captured only two dimensions, so they struggled to detect certain rotations and misalignments in parts, precisely the kind of detail factories depend on. They were also highly sensitive to changes in lighting and reflections, which meant performance could collapse under real factory conditions. And even when they did work, setting them up demanded advanced computer vision coding skills from the end user. The result was expensive technology investments that often failed exactly when precision mattered most.
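To make the failure mode concrete, here is a minimal sketch, with hypothetical geometry and numbers rather than any vendor's pipeline, of why a 2D system can miss an out-of-plane rotation: a flat part tilted 25 degrees projects to nearly the same 2D centroid, so a 2D localizer would report the part in position while a gripper aimed at it would miss.

```python
import numpy as np

# Hypothetical illustration: a flat square part, 100 mm on a side,
# sitting 1 m below a downward-looking camera.
corners = np.array([[-50, -50, 0], [50, -50, 0],
                    [50, 50, 0], [-50, 50, 0]], dtype=float)

def tilt_x(points, degrees):
    """Rotate points about the x-axis (an out-of-plane tilt)."""
    t = np.radians(degrees)
    R = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t), np.cos(t)]])
    return points @ R.T

def project(points, depth=1000.0, focal=1000.0):
    """Pinhole projection to 2D pixels; the camera only sees x and y."""
    z = depth + points[:, 2]
    return focal * points[:, :2] / z[:, None]

flat = project(corners)
tilted = project(tilt_x(corners, 25))

# A 2D system that localizes by centroid sees almost no change...
print("centroid shift (px):", np.linalg.norm(flat.mean(0) - tilted.mean(0)))
# ...even though the part is now tilted 25 degrees out of plane.
```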

That reality has been transformed over the past decade. The pairing of 3D vision with artificial intelligence has moved computer vision from risky experiment to one of the most important catalysts of autonomous manufacturing.

The 3D Vision Turning Point

The jump from 2D to 3D vision surpassed expectations. Robots could now perceive depth and gained spatial awareness, letting them perform tasks in ways that had once been impossible.

Early 3D systems built on structured light or stereo vision performed well only in highly controlled environments; in a traditional, real-world factory setting, performance could still be problematic. Once AI was combined with modern vision systems, however, their capabilities advanced to new levels. Robots could not only see in three dimensions but also interpret and adapt to what they were seeing. The ability to register changes in their environment and adjust as needed, just as a person would, was a monumental shift.
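Under the hood, the 3D side of this problem is pose estimation: recovering a part's rotation and translation from measured 3D points. The sketch below shows the classic Kabsch/SVD solution assuming point correspondences are already known; it is a generic textbook building block, not Inbolt's or FANUC's actual algorithm, and real systems must also find the correspondences, for example through ICP iterations or learned matching.

```python
import numpy as np

def rigid_pose(model_pts, scene_pts):
    """Best-fit rotation R and translation t mapping model -> scene
    (Kabsch/SVD). Assumes correspondences are known; real systems
    find them with feature matching or ICP iterations."""
    mc, sc = model_pts.mean(0), scene_pts.mean(0)
    H = (model_pts - mc).T @ (scene_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, sc - R @ mc

# Toy check: a known rotation and translation are recovered exactly.
rng = np.random.default_rng(0)
model = rng.uniform(-1, 1, (100, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta), np.cos(theta), 0],
                   [0, 0, 1]])
scene = model @ R_true.T + np.array([0.2, -0.1, 0.5])
R, t = rigid_pose(model, scene)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 0.2 -0.1  0.5]
```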

Breaking Free from Rigid Automation

Before robots could "see," factories had to be built around them. Every change meant new tooling, exact fixtures, and complicated reprogramming—because industrial robots were designed for precision, not perception. Even a small misalignment—a part slightly off-center on a conveyor—could lead to errors, stoppages, and expensive rework.

The introduction of vision-guided robotics changes that equation. With near-real-time perception, robots can, for the first time, adapt to the factory rather than forcing the factory to adapt to them. There is no longer any need to replace fixtures or tear out equipment: 3D vision systems can be retrained in minutes without any physical changes. And because the vision system travels with the robot, it responds to whatever is happening in its environment, from changing light to shifting parts to worn racks, without halting production.
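The arithmetic behind "adapt to the factory" is transform composition: teach the grasp once in the part's own frame, then recompose the robot target each cycle against the pose the camera actually measures. A minimal sketch with hypothetical poses:

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Grasp pose taught once, expressed in the PART's frame (hypothetical values).
T_part_grasp = se3(np.eye(3), [0.0, 0.05, 0.10])

# Nominal part pose the cell was programmed around, in the robot base frame.
T_base_part_nominal = se3(np.eye(3), [0.80, 0.00, 0.20])

# Pose the vision system actually measures this cycle: the part has
# slid 30 mm along y and yawed 10 degrees on its rack.
yaw = np.radians(10)
R_meas = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw), np.cos(yaw), 0],
                   [0, 0, 1]])
T_base_part_measured = se3(R_meas, [0.80, 0.03, 0.20])

# Fixed automation sends the robot here and misses:
T_grasp_blind = T_base_part_nominal @ T_part_grasp
# Vision-guided automation recomposes the target every cycle and doesn't:
T_grasp_guided = T_base_part_measured @ T_part_grasp

print("blind target (m): ", np.round(T_grasp_blind[:3, 3], 3))
print("guided target (m):", np.round(T_grasp_guided[:3, 3], 3))
```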

On the Scene at Stellantis Detroit

The Stellantis Body Side Outer Aperture line in Detroit had a headache of its own: aging racks. Over time, they had lost their precise fit, and parts no longer nested quite right, causing frequent breakdowns and production delays.

Instead of replacing the racks, which would have been expensive and disruptive, the team retrofitted a FANUC cell with a vision-guided system. The results were swift: pick errors disappeared, downtime dropped by 97%, and the system paid for itself in just three months.

Vision as the Foundation for "Dark Factories"

The term "dark factory" conjures images of vast, silent facilities running in total darkness, fully automated and requiring no human presence. This might sound like something you see in a science fiction TV show, but there are instances in which manufacturers are already moving in this direction. 

To make it work, though, autonomy isn't just about robots; it's about perception. The ability to perceive and respond to unexpected changes is crucial if even the most advanced automated environments are to reach their full potential.

For example, consider a misaligned rack. If no humans are present to fix it, production stops, unless the robot can detect the issue and adapt in real time. Computer vision provides that safeguard: it lets robots make the small, continuous corrections that keep production running smoothly and avoid burdensome stops and starts. This is the difference between lights-out automation that works for a few hours and a truly autonomous factory that can run 24/7.
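The decision logic can be as simple as a tolerance ladder: correct small drift silently, log moderate drift as a maintenance signal, and stop gracefully when drift exceeds a safe envelope. A sketch with purely hypothetical thresholds:

```python
import numpy as np

# Hypothetical tolerances: correct silently up to 20 mm of drift,
# flag for attention beyond 60 mm rather than pick blindly.
CORRECT_MM, ALARM_MM = 20.0, 60.0

def next_action(expected_xyz, measured_xyz):
    """Decide what a vision-guided cell does with one measurement (meters in, mm compared)."""
    drift = np.linalg.norm(np.asarray(measured_xyz) - np.asarray(expected_xyz)) * 1000
    if drift <= CORRECT_MM:
        return "pick with corrected target"           # small drift: adapt, keep running
    if drift <= ALARM_MM:
        return "pick, then log rack for maintenance"  # wear trend worth recording
    return "pause cell and alert"                     # beyond safe envelope: stop gracefully

print(next_action([0.80, 0.00, 0.20], [0.80, 0.012, 0.20]))  # 12 mm -> corrected pick
print(next_action([0.80, 0.00, 0.20], [0.80, 0.045, 0.20]))  # 45 mm -> pick + log
print(next_action([0.80, 0.00, 0.20], [0.80, 0.090, 0.20]))  # 90 mm -> alert
```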

Cost and Complexity Myths Busted

Many manufacturers still believe vision systems are too expensive and complex to be worth the investment. That was once true; it isn't anymore.

Today, 3D vision can be retrofitted onto existing robot cells from brands like ABB and KUKA. Engineers don't need special coding skills; they can train the AI on a part's CAD file or 3D model in 30 minutes or less.
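The model-side preparation can be as light as sampling the CAD surface into a point cloud for a matcher to compare against live scans. Vendors wrap this in their own tooling; here is one way to do the equivalent step with the open-source trimesh library, with a hypothetical file name:

```python
import trimesh  # open-source mesh library; one of several ways to do this

# Load a part's CAD export (hypothetical file) and sample its surface
# into a point cloud, the kind of model-side data a 3D matcher
# compares against live sensor scans.
mesh = trimesh.load("bracket.stl")
points, face_ids = trimesh.sample.sample_surface(mesh, 20000)

# Normalize to the part's centroid so poses are estimated about a
# consistent reference frame.
points = points - mesh.centroid
print(points.shape)  # (20000, 3)
```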

The cost savings are twofold: capital expenditure drops because expensive fixtures aren't needed, and operational costs fall thanks to reduced downtime and fewer rejected parts (up to 80% fewer in some cases). Many organizations that have taken the leap see a return on their investment within months, where previously it would have taken years.
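The payback math itself is simple. A back-of-envelope sketch, with every figure hypothetical:

```python
# Back-of-envelope payback period; substitute your own line's figures.
system_cost = 120_000           # vision retrofit, installed ($, hypothetical)
downtime_hours_saved = 30       # per month, after retrofit (hypothetical)
cost_per_downtime_hour = 2_500  # lost output plus labor ($, hypothetical)
scrap_savings = 10_000          # per month, from fewer rejected parts ($, hypothetical)

monthly_savings = downtime_hours_saved * cost_per_downtime_hour + scrap_savings
payback_months = system_cost / monthly_savings
print(f"payback: {payback_months:.1f} months")  # ~1.4 months with these numbers
```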

The Competitive Imperative

The COVID-19 pandemic disrupted everything from daily life to work to manufacturing, and the landscape has been turbulent ever since. Supply chains remain unpredictable, product cycles have grown shorter, and demand has become harder to forecast. In this environment, agility isn't just nice to have; it's critical for survival.

Albane Dersy is the co-founder and COO of Inbolt, a company combining AI, vision, and robotics to turn factories into adaptive systems that run smarter and faster.

Computer vision can be a valuable tool for manufacturers looking to increase agility in this environment. It can help production lines adapt more quickly to new products and conditions without a significant investment of time and resources. In the past, that would have required a massive retooling or programming overhaul. With computer vision, that's no longer the case.

The reality is that manufacturers investing in vision are building resilience into their operations. They're the ones who will be able to respond when the next disruption hits. For manufacturers still on the fence, the question isn't whether vision-based automation can work; it's whether they can afford to keep competing without it.
