Starting a Deep Learning Project in Manufacturing – Part 2: Collecting Data and Establishing Ground Truth
Absolute and Relative Data
Two types of data must be gathered during this phase: image (absolute) data and process (relative) data. Image data collected by the deep learning team helps optimize and train the neural network on defects and pass/fail determinations. Reliable image capture involves — among other things — identifying a camera with appropriate resolution and selecting and configuring a proper lighting setup.
Process data allows a company developing a deep learning-based system to perform advanced optimization. This may include data on the unit cost of escapes versus scrap, the frequency of pass versus fail, and the frequency of different defect types. The deep learning team must look at the performance of the deep learning system against ground truth, as well as the performance of an existing solution, such as manual inspection, against ground truth.
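To make this comparison concrete, the sketch below (a minimal illustration in Python, with hypothetical function names, costs, and data) scores a set of pass/fail predictions against ground-truth labels and weights the two error types by the unit cost of escapes versus scrap:

```python
# Hypothetical sketch: scoring an inspection method against ground truth
# and weighting errors by the unit cost of escapes versus scrap.

def score_against_ground_truth(predictions, ground_truth,
                               escape_cost=50.0, scrap_cost=5.0):
    """Compare pass/fail predictions (True = pass) with ground-truth labels.

    An escape is a bad part predicted as good; false scrap is a good part
    predicted as bad. The costs are illustrative placeholders.
    """
    escapes = sum(1 for p, g in zip(predictions, ground_truth) if p and not g)
    false_scrap = sum(1 for p, g in zip(predictions, ground_truth) if not p and g)
    correct = sum(1 for p, g in zip(predictions, ground_truth) if p == g)
    return {
        "accuracy": correct / len(ground_truth),
        "escapes": escapes,
        "false_scrap": false_scrap,
        "error_cost": escapes * escape_cost + false_scrap * scrap_cost,
    }

# Example: compare manual inspection and the deep learning model on the
# same parts, against the same ground truth.
truth  = [True, True, False, False, True, False]   # ground-truth pass/fail
manual = [True, True, True,  False, True, False]   # one escape
model  = [True, False, False, False, True, False]  # one false scrap

print(score_against_ground_truth(manual, truth))
print(score_against_ground_truth(model, truth))
```

Because escapes typically cost far more than scrap, two methods with identical accuracy can carry very different error costs, which is why both methods are scored against the same ground truth rather than against each other.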
Maintaining a Continuous Process
All phases of a deep learning project typically must be carried out on a continual basis. This work includes gathering image and process data, training the model, and keeping the data labeling current.
Companies need workers who can consistently and reliably label defects in images so that the deep learning model trains on quality data. Keeping the training process continuous allows teams to streamline the collection and logging of accurate data.
To avoid statistical anomalies, teams must capture and track product variations, component changes, equipment drift, and tool wear. In conjunction, all image labeling must be consistent and unbiased, with independent measurements and clear definitions. When product specifications change, new products are added, or obsolete products are removed, teams must update image labels. Teams must also establish a process to continually capture information over time, so that when a problem occurs, the team can react and correct the issue.
A deep learning team should avoid using fake defects in training. Fake defects such as markings, cracks, or scratches added to a part can be unrepresentative of real defects and can negatively impact the training process. For example, if someone on a team manually adds scratches in the middle of a part for testing, the system may learn to look for defects only in that area.
Getting to Ground Truth
Teams have several options when it comes to getting ground truth, including the use of manual factory inspection results. In this method, the data is readily available and accepted. This may be the only option for parts that require special handling, such as tilting, for inspection. On the other hand, results may vary over time or between inspectors, and some stakeholders may have a vested interest in the system currently in place. This method should be used only as a starting point, as companies must invest in data collection and curation to determine a more accurate baseline.
Knapp tests can help companies grade human quality inspectors by running several known parts — good and bad — past the same group of inspectors multiple times. In Knapp testing, individual inspectors check control parts mixed in with production parts several times, and the results from each person are compiled to reach a consensus pass/fail result. While this method lets companies see which defect types are caught consistently and which inspectors perform best, it is limited to small datasets. It may also produce unrepresentative results, since defect appearance may be unrealistic or artificial and the defect distribution is always unrealistic. Companies should assess individual inspectors for accuracy and repeatability and create initial labeled datasets for neural network training using images with realistic defects.
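As a minimal sketch of the Knapp procedure (with a hypothetical data layout and function names), the code below pools repeated pass/fail calls into a per-part consensus, then grades each inspector for accuracy against that consensus and for repeatability across their own repeated runs:

```python
# Hypothetical sketch of Knapp-style grading: each inspector judges the
# same control parts several times; results are pooled into a consensus
# and each inspector is scored for accuracy and repeatability.
from collections import Counter

# trials[inspector][part] = pass/fail calls across repeated runs
trials = {
    "inspector_a": {"part1": [True, True, True], "part2": [False, False, True]},
    "inspector_b": {"part1": [True, False, True], "part2": [False, False, False]},
}

def consensus(trials):
    """Majority pass/fail per part across all inspectors and repetitions."""
    votes = {}
    for calls_by_part in trials.values():
        for part, calls in calls_by_part.items():
            votes.setdefault(part, []).extend(calls)
    return {part: Counter(calls).most_common(1)[0][0]
            for part, calls in votes.items()}

def grade(trials):
    truth = consensus(trials)
    report = {}
    for name, calls_by_part in trials.items():
        all_calls = [(c, truth[p]) for p, cs in calls_by_part.items() for c in cs]
        accuracy = sum(c == t for c, t in all_calls) / len(all_calls)
        # Repeatability: fraction of parts where the inspector never
        # changed their call across repetitions.
        repeat = (sum(len(set(cs)) == 1 for cs in calls_by_part.values())
                  / len(calls_by_part))
        report[name] = {"accuracy": accuracy, "repeatability": repeat}
    return report

report = grade(trials)
# In this toy data, both inspectors are equally accurate, but each flips
# their call on one of the two parts, so repeatability is 0.5 for both.
```

Real Knapp studies add controls this sketch omits (blinding, mixing control parts into production flow), but the scoring reduces to the same two quantities: agreement with consensus and self-consistency.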
Lastly, a company must have at least one trusted expert with intimate knowledge of the company’s quality standards to obtain ground truth. First, teams record images and inspection results during production using both manual and automated inspection. The expert then confirms whether a pass/fail determination can be reliably made from the image and helps set an image quality standard for the labeling team, ensuring that only accurate data is fed into the deep learning model.
In this example, a trusted expert is used to establish ground truth in a spot welding inspection application.
Manual and automated visual inspection results can then be compared. If the results align, the team can assume that the decisions are correct, and the images can be added to the dataset. If the results differ, the expert reviews them and decides which result, if either, is correct. The expert helps establish a reliable ground truth image database with images based on real-world samples under realistic conditions. Additionally, the expert helps create reliable performance statistics, including defect distribution and manual and automated inspection performance data, while also improving inspection processes. The expert also provides data that can be reused for future automation projects. Note that when parts must be manipulated or handled to find defects, this method will give poor results. Another drawback is that it relies on a single decision-maker.
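The comparison-and-review flow above can be sketched as a simple triage step (hypothetical record format and names): images where manual and automated results agree enter the ground-truth dataset with the agreed label, while disagreements are queued for the expert:

```python
# Hypothetical sketch of the review flow: agreements go straight into the
# ground-truth dataset; disagreements are routed to the trusted expert.

def triage(records):
    """records: list of (image_id, manual_pass, automated_pass) tuples."""
    dataset, expert_queue = [], []
    for image_id, manual, automated in records:
        if manual == automated:
            dataset.append((image_id, manual))   # agreed label, keep it
        else:
            expert_queue.append(image_id)        # needs expert review
    return dataset, expert_queue

records = [
    ("img001", True, True),    # both pass: add to dataset
    ("img002", False, True),   # disagreement: send to expert
    ("img003", False, False),  # both fail: add to dataset
]
dataset, queue = triage(records)
```

Over time the expert queue itself becomes a useful process metric: a growing disagreement rate can signal equipment drift, a product change, or a labeling standard that needs revisiting.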
In part 3, we will look at the optimization phase.