Cognex Blogs

Starting a Deep Learning Project in Manufacturing – Part 3: Optimization


Once a company has identified goals and obtained data and ground truth for its deep learning project, the next step involves optimization of the vision system and image database.

During this phase, the human inspector and vision system classify production parts as good or bad. Then an internal expert reviews any unclear parts and correctly labels them. The corrected ambiguous parts plus samples of good and bad parts go into the database to improve the deep learning model. Developers must be sure to add complicated images, such as those showing parts with unusual defects or lighting reflection, to the training set.

Augment Your Data

Deep learning-based software typically provides training sets and tools for optimization, but teams must take additional steps, such as cross-validation. Once enough data has been acquired, a team should train on one section of the dataset and validate against the rest. No matter which sections are chosen, results should be consistent: if one section performs differently from the others, there may be labeling problems, or some defect types may be underrepresented.
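The consistency check above can be sketched as a simple k-fold split, where each section of the dataset takes a turn as the validation set. This is an illustrative, dependency-free sketch, not a Cognex API; the `train_model`/`evaluate` calls in the comments are hypothetical placeholders.

```python
# Sketch: k-fold cross-validation to check that results are consistent
# across sections of the dataset.
import random

def k_fold_splits(samples, k=5, seed=42):
    """Shuffle samples and yield (train, validate) pairs, one per fold."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    folds = [samples[i::k] for i in range(k)]
    for i in range(k):
        validate = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, validate

# Example: 1000 labeled images split into 5 folds.
images = [f"part_{n:04d}.png" for n in range(1000)]
for train, validate in k_fold_splits(images):
    assert len(train) + len(validate) == 1000
    # train_model(train); score = evaluate(validate)  # hypothetical calls
    # A fold scoring well below the others hints at labeling problems
    # or underrepresented defect types in that section.
```

If one fold's score stands out from the rest, that section of the dataset deserves a second look before further training.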

Another step involves isolating each key variable in the production process and optimizing for it. If multiple manufacturing lines exist, teams should take images from each line for training while verifying that the data and results from each line are adequate. Additionally, if a company uses different inspection methods, teams should optimize for the different product versions and defect types, keeping them organized through file-naming conventions or folder structure.

Teams should also feed the training set as many defective-part images as possible. For instance, with 500 images of good parts but only 282 images of bad parts, every bad-part image should be used to teach the system what to look for, so that it works more effectively during production.

Deal With Different Defect Types 

In deep learning, there are several ways to look at defects in images. A system might produce a measurement based on an entire image to determine whether a part is good or bad, or it might use a defect-based approach that identifies specific defects in parts. The latter is useful when an additional classification step is added for process control, but it may also require secondary processing to merge or separate defects. 

Alternatively, the deep learning system might isolate specific pixels on each defect in an image and provide measurements of the defect area. This method also usually requires secondary image processing, with the manipulation of defect regions to produce a perimeter or bounding box and also to measure the defect and classify it as good or bad. Individual applications require different approaches, so developers should understand defect metrics and how to optimize for them.
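The secondary processing described above can be sketched as connected-component grouping: neighboring defect pixels are merged into regions, and each region is reduced to an area and a bounding box. This is a pure-Python stand-in for a blob tool, for illustration only:

```python
# Sketch: turn a per-pixel defect mask into defect regions, each with an
# area (pixel count) and a bounding box (min_row, min_col, max_row, max_col).
def defect_regions(mask):
    """mask: 2D list of 0/1 values. Returns [(area, bounding_box), ...]."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill the 4-connected region starting at (r, c).
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                regions.append((len(pixels),
                                (min(ys), min(xs), max(ys), max(xs))))
    return regions

mask = [[0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]
print(defect_regions(mask))  # one 3-pixel region and one 1-pixel region
```

With pixel areas converted to physical units via the camera calibration, each region can then be measured and classified against the quality specification.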

Understanding different types of defects and setting quality specifications also allows teams to perform end-to-end optimization to further improve the model. If a team specifies that one defect over 10 mm² or any two defects over 5 mm² represents a bad part, the deep learning system on its own does not necessarily provide pixel-accurate measurements. In these cases, a blob analysis tool can help obtain a more accurate measurement of the defects, allowing the team to use these images to refine the model. However, if a team plans to use a blob tool for additional optimization and analysis, the developer on the team should bias the deep learning system to report even borderline cases as defects, to be safe.


Deep learning segmentation tool (left) and blob analysis tool (right) used in combination to refine the defect detection region
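A quality specification like the one above (one defect over 10 mm², or any two defects over 5 mm², means a bad part) reduces to a small decision rule once the blob tool has produced physical defect areas. A minimal sketch:

```python
# Sketch of the quality specification: a part is bad if any single defect
# exceeds 10 mm^2, or if two or more defects each exceed 5 mm^2.
def part_is_bad(defect_areas_mm2):
    large = [a for a in defect_areas_mm2 if a > 10.0]
    medium = [a for a in defect_areas_mm2 if a > 5.0]
    return bool(large) or len(medium) >= 2

print(part_is_bad([12.0]))      # True:  one defect over 10 mm^2
print(part_is_bad([6.0, 7.5]))  # True:  two defects over 5 mm^2
print(part_is_bad([6.0, 4.0]))  # False: only one defect over 5 mm^2
```

Keeping the rule explicit like this makes it easy to adjust thresholds as the quality specification evolves.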

Keep Up with Key Metrics

Key deep learning metrics must be kept in mind during this phase. Not everyone on the team needs to understand all aspects of deep learning, but the developer needs to understand all the key metrics and how to optimize them. Examples include the difference between overkill and underkill versus precision and recall, F1 scores, and area under the curve (AUC) statistics.

Additionally, teams should look at cost functions, balancing the cost of scrap (overkill) against the cost of escapes (underkill), to determine the value of a solution. Developers should not concern themselves too much with cost functions when initially setting up a deep learning project but will want to drive the project toward high F1 scores over time.
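The metrics and cost function above can be sketched from confusion counts, treating "defect" as the positive class: false positives are overkill (good parts scrapped) and false negatives are underkill (escapes). The cost figures below are illustrative placeholders, not Cognex values.

```python
# Sketch: precision, recall, and F1 from confusion counts.
def inspection_metrics(tp, fp, fn):
    precision = tp / (tp + fp)  # of parts flagged bad, how many truly are
    recall = tp / (tp + fn)     # share of true defects the system catches
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def total_cost(fp, fn, scrap_cost, escape_cost):
    """Cost function: overkill scraps good parts, underkill ships bad ones."""
    return fp * scrap_cost + fn * escape_cost

p, r, f1 = inspection_metrics(tp=90, fp=10, fn=5)
# precision 0.90, recall about 0.947, F1 about 0.923
cost = total_cost(fp=10, fn=5, scrap_cost=1.0, escape_cost=20.0)
# With escapes 20x more expensive than scrap, 5 escapes dominate the cost.
```

Because escapes usually cost far more than scrap, the same F1 score can correspond to very different business outcomes, which is why the cost function matters once the model matures.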

In part 4, we will look at factory acceptance testing. 
