Why 3D Inspections Need True 3D Vision


Real-world manufacturing applications involve 3D objects, so it’s no surprise that solutions developed in true 3D space, rather than on 2.5D height map projections of those objects, take less time to develop and deliver better results.

Since Harper’s Weekly published the first optical illusion in 1892 – a sketch that could be seen as either a rabbit or a duck – millions of people have challenged their minds by trying to solve optical illusions such as the Ames room or the Zöllner illusion. But while these brain teasers can be fun ways to pass the time, for machine vision designers, trying to solve 3D problems with only 2.5D technology can crush productivity.

The term 2.5D in computer vision derives from the work of David Marr, a neuroscientist at MIT. Marr’s book, Vision: A Computational Investigation into Human Representation and Processing of Visual Information, published in 1982, presents three representations of gradually increasing sophistication for deriving shape information from images:

  • Primal sketch: Makes explicit the intensity changes in the image and their geometric distribution and organization, such as edges, blobs, and boundaries.
  • 2.5D sketch: Makes explicit the orientation and rough depth of visible surfaces and contours of discontinuities in these quantities in an imager-centered coordinate frame.
  • 3D model: Describes shapes and their spatial organization in an object-centered coordinate frame.

Marr’s ideas led to the Marr-Poggio-Grimson approach to computational stereo vision.

Point Clouds versus Height Maps

Whether it is measuring the alignment of two adjacent car panels, measuring the position of electronic components on a printed circuit board, or identifying, locating, and verifying food in packaging, the list of manufacturing applications that can benefit from a simple-to-use 3D vision system is virtually limitless. However, despite being pre-calibrated at the factory, most 3D vision systems are neither easy to use nor highly accurate. Suppliers of 3D solutions — much like the makers of color cameras before them — have had to make trade-offs to reduce system complexity and cost, usually at the expense of precision and speed.

For example, most 3D vision suppliers generate 2.5D height maps rather than traditional 2D images or true 3D point clouds. A 3D point cloud produces a complete 3D model that can be turned, flipped, and rotated along each dimension (height, width, depth), helping the operator intuitively connect the object in the real world to the features and defects on the display. A 2.5D height map, by contrast, simply translates height information into color.
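To make the distinction concrete, the sketch below converts a 2.5D height map (a grid of Z values) into an unordered 3D point cloud. It is a minimal illustration in Python using NumPy only; the function name height_map_to_point_cloud, the fixed pixel pitch, and the use of NaN for missing measurements are assumptions made for the example, not any particular vendor’s API.

```python
import numpy as np

def height_map_to_point_cloud(height_map, pixel_pitch_mm=0.1):
    """Convert a 2.5D height map (one Z value per pixel) into an N x 3 point cloud.

    Illustrative assumptions: the sensor is pre-calibrated so that each pixel
    spans `pixel_pitch_mm` in X and Y, and invalid pixels are stored as NaN.
    """
    rows, cols = height_map.shape
    ys, xs = np.mgrid[0:rows, 0:cols]        # pixel grid indices
    pts = np.column_stack((
        xs.ravel() * pixel_pitch_mm,         # X in mm
        ys.ravel() * pixel_pitch_mm,         # Y in mm
        height_map.ravel(),                  # Z already in mm
    ))
    return pts[~np.isnan(pts[:, 2])]         # drop missing measurements

# Example: a synthetic height map of a flat plate with a 5 mm raised square
hm = np.zeros((200, 200))
hm[80:120, 80:120] = 5.0
cloud = height_map_to_point_cloud(hm)
print(cloud.shape)                           # (40000, 3)
```

The resulting X, Y, Z points can be rotated and inspected from any direction, whereas the original height map is fixed to the sensor’s viewing direction.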

2.5D height map (left) and 3D point cloud (right)

As the example above shows, displaying height information by color can quickly become confusing for the operator. (Is blue high or low?) Finding small defects on a real-world object by reading false-color computer displays can lead to slower operation, increased waste, or both. Additionally, applying traditional 2D image processing algorithms, such as edge detection and linear measurement, to height maps of 3D objects is problematic because those algorithms were not designed to measure 3D shapes.
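One way to see why display-oriented processing is fragile: a false-color render typically normalizes heights to the scene’s own minimum and maximum before mapping them to colors, so the same physical step produces different display gradients in different scenes. The sketch below is a simplified illustration assuming a linear 0–255 display normalization and NumPy arrays of heights in millimeters; it is not a model of any specific display pipeline.

```python
import numpy as np

def display_gradient(height_map):
    # Gradient of the *displayed* image: heights normalized to 0..255
    # for false-color rendering (assumed linear normalization).
    lo, hi = np.nanmin(height_map), np.nanmax(height_map)
    display = 255.0 * (height_map - lo) / (hi - lo)
    return np.abs(np.diff(display, axis=1))   # intensity step between columns

def metric_gradient(height_map):
    # Gradient of the raw heights, in millimeters per pixel.
    return np.abs(np.diff(height_map, axis=1))

# The same 2 mm step edge in two scenes; scene_b also contains a 20 mm feature
# that stretches the display's color scale.
scene_a = np.zeros((1, 100)); scene_a[:, 50:] = 2.0
scene_b = scene_a.copy();     scene_b[0, 0] = 20.0

step = 49                                     # column where the edge sits
print(metric_gradient(scene_a)[0, step],  metric_gradient(scene_b)[0, step])   # 2.0 and 2.0
print(display_gradient(scene_a)[0, step], display_gradient(scene_b)[0, step])  # 255.0 vs ~25.5
```

A threshold expressed in millimeters behaves identically in both scenes; an edge detector tuned to display intensity does not.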

2D and 3D point cloud object scan
 
Consider the true 3D point cloud scan of a half-sphere, cylinder, triangle, and two cubes shown above. Using true 3D point clouds, the system can easily locate the top center edge of the triangle. If the algorithm were trying to find that 3D edge using only color gradients on a height map, identifying a clean edge at the top of the triangle would be challenging. And accuracy suffers further when a volume measurement has to be derived from an estimated center edge height.
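As a rough illustration of measuring directly on the cloud, the sketch below builds a synthetic triangular prism as a point cloud, extracts the points along its top ridge, and estimates the volume above the base plane. The prism dimensions, the ridge tolerance, and the assumption that points are sampled on a regular 0.1 mm XY grid are all invented for the example; a real system would use the calibrated point spacing reported by the sensor.

```python
import numpy as np

def top_edge_points(cloud, tol_mm=0.2):
    # Return points lying within tol_mm of the highest surface,
    # i.e. the top (ridge) edge of a prism-like part.
    z = cloud[:, 2]
    return cloud[z >= z.max() - tol_mm]

def volume_above_plane(cloud, base_z=0.0, point_spacing_mm=0.1):
    # Estimate volume above a base plane for a cloud sampled on a regular
    # XY grid: each point represents a square cell of side point_spacing_mm.
    heights = np.clip(cloud[:, 2] - base_z, 0.0, None)
    return heights.sum() * point_spacing_mm ** 2      # mm^3

# Synthetic triangular prism: cross-section rises linearly to a 10 mm ridge,
# 20 mm wide and 50 mm long, sampled every 0.1 mm.
xs, ys = np.meshgrid(np.arange(0, 20, 0.1), np.arange(0, 50, 0.1))
zs = 10.0 * (1.0 - np.abs(xs - 10.0) / 10.0)
cloud = np.column_stack((xs.ravel(), ys.ravel(), zs.ravel()))

ridge = top_edge_points(cloud)
print(round(float(ridge[:, 2].mean()), 1))   # ~9.9: close to the true 10 mm ridge height
print(round(volume_above_plane(cloud), 0))   # ~5000 mm^3 (1/2 * 20 mm * 10 mm * 50 mm)
```

Because both measurements operate on metric X, Y, Z coordinates, no intermediate color-gradient interpretation is needed.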
