Understanding Red High Detail Mode for Defect Segmentation
Cognex VisionPro Deep Learning software includes Red High Detail Mode, a new setting for machine vision inspections that require extremely high precision.
This advanced detection mode addresses highly specific use cases that require additional scrutiny when segmenting defects. No matter the industry or type of defect, Red High Detail Mode delivers precise segmentation, faster processing times, and higher-quality products.
Deep Learning Tool Overview
Understanding Red High Detail Mode starts with a review of the four primary tools in VisionPro Deep Learning. These tools use deep neural networks to recognize patterns, components, and anomalies in images captured by Cognex machine vision cameras:
- Red Analyze: Detects anomalies and defects in images. Red High Detail Mode is an architecture setting within this tool.
- Green Classify: Classifies an image or parts of an image into any number of classes. One common example is classifying defects by type (e.g. blemishes, cracks, and scratches).
- Blue Locate: Locates parts or components in an image.
- Blue Read: Performs sophisticated optical character recognition in images.
VisionPro Deep Learning developers can use any or all of these tools together. This is especially true for the Red Analyze and Green Classify tools. For instance, the Red Analyze Tool can identify an anomaly and the Green Classify Tool can determine the defect type.
High detail mode has been a part of the Green Classify tool for some time, so this blog will focus on the new version available in the Red Analyze tool.
How the Red Analyze Tool Works
When a Cognex camera takes an image to inspect a part in production, VisionPro Deep Learning must determine whether the image passes or fails the inspection.
The Red Analyze tool enables these inspections by scanning for features, objects, or components within an image. VisionPro Deep Learning uses a neural network to give these images a passing or failing grade, based on a collection of preset training images.
The process works as follows: The machine vision application developer feeds a series of images into the neural network. Typically, half of these are training images, and the other half are validation images. During tool training, the neural network learns from the training images, and its predictions are checked against the “ground truth” labels in the validation images. In this way, the network learns to distinguish passing images from failing ones.
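The split described above can be sketched in a few lines of generic Python. This is only an illustration of the concept, not the VisionPro Deep Learning API; the file names are hypothetical stand-ins for captured inspection images.

```python
import random

# Hypothetical file names standing in for images captured by the camera.
images = [f"part_{i:03d}.png" for i in range(100)]

random.seed(42)       # make the split reproducible
random.shuffle(images)

# The even split described above: half training, half validation.
midpoint = len(images) // 2
training_set = images[:midpoint]    # images the network learns from
validation_set = images[midpoint:]  # held-out "ground truth" used to check learning

print(len(training_set), len(validation_set))  # 50 50
```

Keeping the two sets disjoint is the important point: the validation images act as ground truth precisely because the network never trains on them.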
The Red Analyze tool works in two modes:
- Unsupervised: Uses defect-free images to train the neural network. Any details in the image that deviate from the definition of "good" are flagged as anomalies.
- Supervised: Requires developers to identify specific segments within an image to train the neural network. The neural network scans the image looking for these specific defects.
Supervised mode uses two sub-modes:
- Focused, which offers high performance and quick training times.
- High Detail, which offers best-in-class accuracy thanks to its more exhaustive algorithm.
Thus, Red High Detail Mode is a feature of Supervised mode in the Red Analyze tool.
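The unsupervised idea above can be made concrete with a toy sketch: pixels that deviate too far from the learned "good" appearance get flagged as anomalies. A real tool learns a rich model of normal appearance; the per-pixel mean and fixed threshold below are simplified stand-ins for that model, not Cognex's algorithm.

```python
# Rows of grayscale values from hypothetical defect-free parts.
good_samples = [
    [10, 12, 11, 10],
    [11, 10, 12, 11],
]

# "Learn" normal appearance as the per-pixel mean of the good samples.
mean_pixel = [sum(col) / len(col) for col in zip(*good_samples)]

def flag_anomalies(image, threshold=5.0):
    """Return indices of pixels deviating from normal by more than threshold."""
    return [i for i, (p, m) in enumerate(zip(image, mean_pixel))
            if abs(p - m) > threshold]

test_image = [10, 40, 11, 10]      # one pixel is far brighter than normal
print(flag_anomalies(test_image))  # [1]
```

Supervised mode inverts this logic: instead of flagging anything unusual, the network is shown labeled defect segments and learns to find those specific patterns.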
Deciding When to Use High Detail Segmentation
Focused mode segmentation in VisionPro Deep Learning is both accurate and agile: new images can be added to the neural network and begin producing results within minutes. Focused mode is well suited to simpler applications, but for more challenging jobs, High Detail mode is a better fit.
High detail segmentation requires a more complex neural network architecture, which introduces a trade-off between time and precision. With Red High Detail Mode, training the network might take a couple of hours, so application developers need to be strategic about where they spend that extra processing time.
Saving Time with the Red High Detail Mode
Labeling images and segments is typically one of the most time-intensive parts of developing a machine vision application in VisionPro Deep Learning. Developers may need to label multiple segments within dozens of images. Backgrounds may need to be masked out of an image, and objects may need to be imaged from several angles to capture everything the neural network needs to see.
Fortunately, the labeling task only needs to be done once. Labels can simply be copied when evaluating different modes, such as Focused or High Detail, for an application. Without the need to re-label images, the developer saves time and can deploy the application more quickly.
Moreover, developers can mix and match Focused and High Detail mode within an application — reserving high detail segmentation only for the instances where it does the most good. Red High Detail Mode often works well in a two-tiered inspection model, where production parts that fail the high-detail test get sent to human inspectors who make the final pass-fail judgment.
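A tiered pipeline like the one described above can be sketched as follows. The `focused_score` and `high_detail_score` functions are hypothetical placeholders; in a real application those scores would come from the trained Focused and High Detail tools, and the thresholds would be tuned per application.

```python
def focused_score(image):
    # Hypothetical: quick screening score in [0, 1]; higher = more defective.
    return image["coarse_defect"]

def high_detail_score(image):
    # Hypothetical: slower, pixel-precise score in [0, 1].
    return image["fine_defect"]

def inspect(image, screen_at=0.3, fail_at=0.7):
    """Run the fast check first; escalate only suspicious parts."""
    if focused_score(image) < screen_at:
        return "pass"              # clearly clean: skip the costly check
    if high_detail_score(image) < fail_at:
        return "pass"              # high-detail check cleared it
    return "human review"          # failed: final judgment by a person

clean = {"coarse_defect": 0.1, "fine_defect": 0.1}
suspect = {"coarse_defect": 0.6, "fine_defect": 0.9}
print(inspect(clean))    # pass
print(inspect(suspect))  # human review
```

Reserving the slower high-detail check for parts the fast screen flags keeps throughput high, while routing only the hardest judgments to human inspectors.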
Detecting Subtle Defects and Achieving Precise, Predictive Analysis
Machine vision developers are accustomed to building applications with high accuracy and tight tolerance. Red High Detail Mode helps these developers achieve the level of precision required for the most demanding applications.
Red High Detail Mode is a good fit for applications that call for finding challenging defects and accurately predicting their shape and size. This pixel-level precision matters in major industries like consumer electronics, semiconductors, and automotive. For instance, a semiconductor manufacturer might need to detect tiny defects that could cause a microprocessor to overheat, or a food processing plant might need to scan for the earliest signs of mold or spoilage in refrigerated products.
Deep learning algorithms, like those that power Red High Detail Mode, extrapolate meaning from immense quantities of pixel patterns. Moreover, parts in production often have subtle differences that do not affect quality, performance, or durability. Machine learning applications can be tuned to account for these differences.
However, a network should not be trained with so much precision that it rejects everything it inspects. Rather, the goal is for the neural network to make nuanced judgments from a large data set with many subtle variations — just as people do.
Solving Complex Diagnostic Applications with VisionPro Deep Learning
With VisionPro Deep Learning software, developers can leverage a wide tool set, including the Red Analyze, Green Classify, Blue Locate, and Blue Read tools, to meet even the most stringent application requirements. Of these, Red High Detail Mode, a feature of the Red Analyze tool, offers pixel-level defect segmentation to detect and measure difficult defects, like blemishes, cracks, scratches, and other imperfections on manufactured products. The tool accurately learns the appearance of defects and predicts them in untrained images with pixel-level precision. Along with its counterparts, Red High Detail Mode brings the power of deep inspection to manufacturers across all industries, enabling their inspection systems to make human-like decisions at the speed of a machine. The end result? Faster processing times and higher quality.