How Deep Learning Differs from Traditional Machine Vision
At a fundamental level, machine vision systems rely on digital sensors protected inside industrial cameras with specialized optics to acquire images. Those images are then fed to a PC so specialized software can process, analyze, and measure various characteristics for decision making.
Within a factory automation environment, however, these systems are rigid and narrow in their application. Traditional machine vision systems perform reliably with consistent, well-manufactured parts, operating via step-by-step filtering and rules-based algorithms that are more cost-effective than human inspection.
On a production line, a rules-based machine vision system can inspect hundreds, or even thousands, of parts per minute. But the output of that visual data is still based on a programmatic, rules-based approach to solving inspection problems, which makes machine vision good for:
- Guidance: Locate the position and orientation of a part, compare it to a specified tolerance, and ensure it’s at the correct angle to verify proper assembly. Guidance can also locate key features on a part as inputs to other machine vision tools.
- Identification: Read barcodes (1D), data matrix codes (2D), direct part marks (DPM), and characters printed on parts, labels, and packages. Also identify items based on color, shape, or size.
- Gauging: Calculate the distances between two or more points or geometrical locations on an object and determine whether these measurements meet specifications.
- Inspection: Find flaws or other irregularities in products, such as verifying that labels are correctly adhered or that safety seals, caps, etc. are present.
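To make the rules-based approach concrete, here is a minimal sketch of a gauging check of the kind described above. The feature coordinates, nominal dimension, and tolerance are hypothetical placeholders; in a real system the point locations would come from an upstream locating tool.

```python
import math

# Hypothetical rules-based gauging check: measure the distance between two
# located feature points on a part and compare it to a specified tolerance.
def gauge_distance(p1, p2, nominal_mm, tolerance_mm):
    """Return (measured distance, pass/fail) for a two-point distance gauge."""
    measured = math.dist(p1, p2)
    ok = abs(measured - nominal_mm) <= tolerance_mm
    return measured, ok

# Example: two hole centers expected 50 mm apart, +/- 0.5 mm.
measured, ok = gauge_distance((10.0, 20.0), (10.0, 70.2),
                              nominal_mm=50.0, tolerance_mm=0.5)
```

The rule is explicit and deterministic: the same inputs always yield the same pass/fail decision, which is exactly what makes this approach fast and repeatable for well-controlled parts.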
Deep learning uses an example-based approach, rather than a rule-based one, to solve certain factory automation challenges. By leveraging neural networks that learn what a good image looks like from a set of labeled examples, deep learning can analyze defects, locate and classify objects, and read printed markings.
In the real world, that means a company might inspect electronic device screens for scratches, chips, or other defects. Those defects vary in size, scope, and location, and may appear on screens with different backgrounds. With deep learning it’s possible to tell the difference between a good part and a defective one while accounting for those expected variations. Plus, training the network on a new target, like a different kind of screen, is as easy as taking a new set of reference pictures.
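The example-based idea can be sketched in a few lines. A real system would train a neural network on labeled images; here a toy nearest-neighbor classifier over made-up feature vectors stands in for the concept, purely to illustrate that the "program" is the set of labeled examples rather than hand-coded rules.

```python
# Toy illustration of the example-based approach: label reference samples
# "good" or "defect" and classify new samples by similarity, instead of
# hand-coding rules. (A stand-in for a trained neural network.)
def classify(sample, labeled_examples):
    """Label a sample by its closest labeled reference example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(((lbl, sq_dist(sample, feat)) for feat, lbl in labeled_examples),
                   key=lambda t: t[1])
    return label

# Hypothetical feature vectors (e.g. brightness, edge density, contrast):
references = [
    ((0.90, 0.10, 0.80), "good"),
    ((0.88, 0.12, 0.79), "good"),
    ((0.40, 0.70, 0.30), "defect"),
]
print(classify((0.85, 0.15, 0.75), references))  # → good
```

Retraining on a new target, like a different kind of screen, amounts to supplying a new set of labeled references, mirroring the "take a new set of reference pictures" workflow described above.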
That makes deep learning particularly well suited to:
- Solve vision applications too difficult to program with rules-based algorithms
- Handle confusing backgrounds and variations in part appearance
- Maintain applications and re-train with new image data on the factory floor
- Adapt to new examples without re-programming core networks
Deep learning is now being used in applications where inspection has typically been done manually, like final assembly check. These tasks were once considered too difficult to automate. With a tool like deep learning, those tasks can now be done by a vision system more consistently, more reliably, and faster, right on the production line.
Humans are good at categorizing things that are similar but not identical. We can grasp in mere seconds the variation within a set of objects. In this sense, deep learning tools combine the benefits of a human’s adaptable intelligence with the consistency, repeatability, and scalability of traditional rules-based machine vision.
Understanding those differences will be key for any company embarking on a factory automation journey. Because those differences are key to determining when it makes sense to leverage one or the other in a factory automation application.
While traditional machine vision systems perform reliably with consistent, well-manufactured parts, the algorithms become challenging to program as exceptions and defect libraries grow. In other words, at a certain point some applications needed for factory automation will not be best served by relying on rules-based machine vision.
Complex surface textures and variations in part appearance introduce serious inspection challenges. Rules-based machine vision systems struggle to tolerate variability and deviation between very visually similar parts. “Functional” anomalies, which affect a part’s utility, are almost always cause for rejection, while cosmetic anomalies may not be, depending upon the manufacturer’s needs and preferences. Most problematically, a traditional machine vision system has difficulty distinguishing between these two types of defect.
Certain traditional machine vision inspections, such as defect detection, are notoriously difficult to program because of variables that can be hard for a machine to isolate, such as lighting, changes in color, curvature, or field of view.
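A tiny sketch shows why a hard-coded rule is brittle under one of these variables. The threshold, pixel values, and brightness shift below are invented for illustration: a fixed brightness rule that passes a good part under one lighting condition falsely rejects the same part when the lighting dims.

```python
# Toy illustration of rules-based brittleness: a fixed brightness threshold
# ("pixels darker than this are defects") misfires when lighting changes.
DEFECT_THRESHOLD = 100  # hypothetical tuned value for the original lighting

def count_defect_pixels(image, threshold=DEFECT_THRESHOLD):
    """Count pixels the rule flags as defective (too dark)."""
    return sum(1 for px in image if px < threshold)

good_part = [180, 175, 182, 178, 181, 179]      # well-lit, defect-free part
same_part_dim = [px - 90 for px in good_part]   # same part, dimmer lighting

print(count_defect_pixels(good_part))      # → 0 (passes)
print(count_defect_pixels(same_part_dim))  # → 6 (falsely rejected)
```

Each such variable forces another exception into the rule set, which is how defect libraries grow unwieldy; an example-based system instead absorbs this variation through its training images.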
This is not a problem in and of itself, but it becomes one when companies attempt to solve applications with machine vision even though more appropriate tools are available to them.
While deep learning is transforming factory automation as we know it, it’s still just another tool that operators can employ to get the job done. Traditional rule-based machine vision is an effective tool for specific job types. And for those complex situations that need human-like vision with the speed and reliability of a computer, deep learning will prove to be a truly game-changing option.
To learn more about deep learning technologies for manufacturing, download our free eBook, Deep Learning vs Machine Vision.