Easy-to-use Machine Learning Solves a Range of Classification Applications
In every industry, line and automation engineers are aware of the benefits of using sophisticated machine vision to replace human inspectors, increase inspection speeds, and improve accuracy, while managing changes in products and materials. They have also heard quite a bit about machine learning, deep learning, AI, and other concepts that seem somewhat remote from the day-to-day responsibilities of their jobs.
While fast and accurate, traditional machine vision requires extensive programming, including the ability to manipulate image tools so that the output of one tool becomes the input to the next until the final desired result is produced, a process called chaining.
The In-Sight 2800 vision system with “edge learning” technology eliminates such cumbersome processes and brings the power of both deep learning and sophisticated machine vision to the factory floor, all without requiring knowledge of either deep learning or machine vision tools.
The power of deep learning without the complexity
Deep learning at the edge, more informally referred to as “edge learning,” is a subset of machine learning in which processing takes place directly on-device using a set of pre-trained algorithms. Edge learning is useful in a wide range of industrial applications that currently either use traditional vision cameras or still rely on human inspection. The In-Sight 2800 deploys this technology to identify and classify subtle yet significant defects that have previously proven to be beyond the capabilities of even sophisticated traditional machine vision tools.
Categorizing in this way has an additional, longer-term benefit in process improvement. Since edge learning tools can be trained to classify product defects into any number of categories, they can show which error types are becoming more common, which may indicate that a machine on the line is slowly drifting out of spec. That machine can then be adjusted or rotated out long before it starts creating serious errors or stops working altogether.
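The drift-monitoring idea above can be sketched in a few lines. This is an illustrative stand-in, not Cognex software: the category names, window size, and alarm threshold are all assumptions chosen for the example.

```python
from collections import Counter

def defect_rates(labels, window):
    """Per-category defect rates over the most recent `window` inspections."""
    recent = labels[-window:]
    counts = Counter(recent)
    return {cat: counts[cat] / len(recent) for cat in counts}

def drifting(labels, category, window=100, threshold=0.05):
    """Flag a defect category whose recent rate exceeds `threshold`."""
    rates = defect_rates(labels, window)
    return rates.get(category, 0.0) > threshold

# Hypothetical history: mostly OK parts, with a growing share of "flash" defects
history = ["OK"] * 95 + ["flash"] * 5
print(drifting(history, "flash", window=100, threshold=0.03))  # → True
```

Watching these per-category rates over time is what lets maintenance act on a slowly drifting machine before it produces scrap.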
Injection-molded electrical connectors are everywhere in modern motor vehicles, carrying power and signals to a wide range of components. Connectors simplify wiring and make it easier to separate and remove components during maintenance and repairs.
Connectors must be snapped together completely and accurately to guarantee a long-term electrical connection. The connection must also be confirmed before the part or vehicle moves on to the next step in production. This process is easier said than done. Connectors have a wide range of clips, snaps, and other ways of joining. Plus, they are composed of black or dark plastic, which makes it difficult to see details, and they often present to the inspection camera at various angles.
The In-Sight 2800 vision system with edge learning technology can be trained on small sets of labeled images of both good and bad connections, after which it will quickly classify connectors as either OK or NG. If a new connector design is introduced, it is easy to retrain the edge learning tools with a few examples of the new design, right on the line.
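To make the train-on-a-few-examples workflow concrete, here is a minimal sketch of few-shot classification using a nearest-centroid rule over simple feature vectors. It is not the In-Sight 2800's actual algorithm, which is proprietary; the two-dimensional features and example values are hypothetical.

```python
def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(ok_examples, ng_examples):
    """'Train' by storing one centroid per class."""
    return {"OK": centroid(ok_examples), "NG": centroid(ng_examples)}

def classify(model, features):
    """Assign the label of the nearest class centroid."""
    return min(model, key=lambda label: dist2(model[label], features))

# Hypothetical 2-D features (e.g. latch-edge strength, gap width), a few per class
ok = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]
ng = [[0.3, 0.7], [0.2, 0.8], [0.25, 0.75]]
model = train(ok, ng)
print(classify(model, [0.82, 0.18]))  # → OK
```

Because only the class centroids are stored, retraining for a new connector design amounts to supplying a handful of new labeled examples, which mirrors the on-the-line retraining described above.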
Many printed circuit boards (PCBs) include LED indicator lights to show status. In one application example, it might be necessary to identify which indicators show a power on (PWR) condition, a transmit (TX) condition, or an off condition. Given the dimness of the LEDs, their close spacing, and the cluttered background against which they appear, traditional machine vision sometimes has trouble distinguishing between indicator states.
With traditional machine vision, the typical way to make these determinations is with a pixel count tool. This requires setting thresholds for brightness at specific locations for each condition, an involved process that requires advanced machine vision programming experience.
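A pixel count tool of the kind described can be sketched as follows. This is a simplified illustration, not Cognex tooling: the ROI coordinates, brightness threshold, and minimum lit-pixel count are all assumed values that, in practice, must be tuned per fixture, which is exactly the hand-engineering burden the paragraph above describes.

```python
def pixel_count(image, roi, threshold):
    """Count pixels brighter than `threshold` inside a rectangular ROI.

    image: 2-D list of grayscale values (0-255)
    roi:   (row_start, row_end, col_start, col_end), end-exclusive
    """
    r0, r1, c0, c1 = roi
    return sum(1 for row in image[r0:r1] for px in row[c0:c1] if px > threshold)

def led_state(image, pwr_roi, tx_roi, threshold=128, min_lit=4):
    """Classify PWR / TX / OFF from bright-pixel counts in two ROIs."""
    if pixel_count(image, pwr_roi, threshold) >= min_lit:
        return "PWR"
    if pixel_count(image, tx_roi, threshold) >= min_lit:
        return "TX"
    return "OFF"

# Synthetic 4x8 frame: a bright 2x2 patch in the PWR region, TX region dark
frame = [[0] * 8 for _ in range(4)]
for r in (1, 2):
    for c in (1, 2):
        frame[r][c] = 200
print(led_state(frame, pwr_roi=(0, 4, 0, 4), tx_roi=(0, 4, 4, 8)))  # → PWR
```

Every new board layout or lighting change forces these thresholds and ROIs to be re-derived by hand, which is why a trained classifier is attractive for this task.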
Edge learning tools, like those embedded in the In-Sight 2800, can be trained on small sets of labeled images of the OFF, PWR, and TX conditions, or directly through the camera if desired. After this brief training, the tools will reliably classify and sort the PCBs into the three different states.
In some medical and pharmaceutical applications, glass vials are automatically filled with medication to a predetermined level. Before they are capped, the fill level must be confirmed to be within proper tolerances. The transparent and reflective nature of both the glass vial and its contents makes it difficult for traditional machine vision to consistently detect the level.
Edge learning can discern the key parts of the image that indicate the fill level, ignoring the confusion caused by reflections, refraction, or other distracting variations in the image. Fills that are too high or too low are rejected, while only those within tolerance are passed.
On the production line, bottles of soft drinks and juices are filled and sealed with a screw cap or closure. If the rotary capper misthreads the cap, or the cap is damaged during capping, the result can be a gap that may lead to contamination or leakage.
Bottle filling and capping lines move at high speeds. Correctly sealed caps are easy to confirm, but there are many subtle ways that a cap may be inadequately screwed on. Both the speed and the wide range of ways in which a cap may be almost, but not quite, sealed make this a challenging application for traditional machine vision.
The In-Sight 2800’s edge learning tools can be shown a set of images labeled as good, and a set of images that show caps with slight gaps that are almost imperceptible to the human eye. The tools can then categorize fully sealed caps as OK and all other caps as NG, at line speeds. Using this technology significantly decreases the rate of passed defects, while also being both inexpensive and easy to use.
An easy solution to difficult factory automation problems
The In-Sight 2800 vision system with edge learning is designed with challenging factory automation problems in mind. Capable and easy to use, it quickly becomes an essential tool on any line.