How to Set Up Your Vision Application in Four Easy Steps
A machine vision application has a simple goal: identifying an object on a production line and figuring out what to do with it. Making machine vision work, however, is anything but simple: a specialized industrial-automation camera has to photograph objects in motion. After that, software has to determine whether the objects should proceed on the production line.
At Cognex, we’ve devoted decades of research to making it easier to create machine vision applications. You don’t need a degree in computer science or industrial engineering to run our machine vision software. It’s just a matter of taking the time to get familiar with the software and figuring out what’s most important.
Every machine vision application has three pillars:
- Devices. Digital cameras or sensors scan objects. Communication networks connect cameras and sensors to computers. Everything that matters in conventional photography — shutter speed, lens selection, lighting, aperture, etc. — plays a key role.
- Software. Applications are configured to inspect objects and determine whether they have passed inspection. Cognex’s software automates and simplifies this process.
- Objects to be inspected. Software can assess objects of almost any shape or size in two and three dimensions.
Cognex’s EasyBuilder interface streamlines the process of setting up a machine vision application by breaking it down into four steps: Start; Set Up Tools; Configure Results; Finish.
Let’s take a quick look at each of these steps. (We have in-depth tutorials for power users; this article is a concise overview for beginners.)
Part 1: Start
The Start interface has two sections:
Get Connected. Your first task is to connect the software to a camera or sensor. You might also want to scan an existing digital image. Either way, the point here is to find the camera or digital file and pull it into the EasyBuilder interface.
Set Up Image. You turn on the camera and bring a part for inspection into the field of view. You can view this live and adjust the camera’s lens to bring the image into focus. You may want to tweak the aperture setting to improve the depth of field if you’re photographing a three-dimensional object and inspecting components at various depths on the object.
You’re also setting up a trigger that puts the whole image inspection process in motion. You may also need to calibrate the image to ensure that the software measures every item in the image consistently. For example, 12 millimeters on the scanned object must equal 12 millimeters in the digital image. Once the software knows this, it can adjust all measurements automatically.
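The calibration idea above can be sketched in a few lines. This is an illustrative example, not EasyBuilder's actual interface: once the software knows how many pixels correspond to a known real-world distance, it can convert every measurement in the image to physical units.

```python
# Hypothetical sketch of image calibration: a known physical length
# and its measured pixel span yield a scale factor that converts all
# subsequent pixel measurements to millimeters.

def calibrate(known_length_mm: float, measured_length_px: float) -> float:
    """Return a scale factor in millimeters per pixel."""
    return known_length_mm / measured_length_px

def px_to_mm(length_px: float, mm_per_px: float) -> float:
    """Convert a pixel measurement to millimeters."""
    return length_px * mm_per_px

# A 12 mm feature on the object spans 240 pixels in the scanned image:
scale = calibrate(known_length_mm=12.0, measured_length_px=240.0)
print(px_to_mm(240.0, scale))  # 12.0 -- the image now agrees with the object
```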
Part 2: Set Up Tools
Now it’s time to start creating tools to identify objects, inspect them and establish their pass/fail status.
Locate Part. You start by picking a location tool in EasyBuilder to identify components within the scanned object. You may want to scan for a pattern, like a brand name on a label, a serial number or components that sit next to each other. Other options include scanning for an edge or a specific shape. You’re creating a model and telling EasyBuilder to “scan everything within this model.”
Most scans will require multiple tools to identify all the relevant areas on the object that you want to test or reject.
Inspect Part. In the Inspect Part phase, you string together all of your tools, enabling the software to separate the good parts from the bad parts. You can create logic with if/then statements to give separate instructions if some tools pass and other tools fail. Combining all of the tools here establishes the parameters for sending passed objects through the production system and rerouting failed objects to another area for repair.
Part 3: Configure Results
When your application can issue pass/fail judgments, you can start defining what to do with this information. You’re configuring the application to communicate pass/fail data to devices such as:
- PLCs (programmable logic controllers), which control parts of a production line based on directions from a computer. For instance, your vision application could instruct a PLC to reroute failed parts away from the production line.
- HMIs (human machine interfaces), which allow archiving, data collection and analytics.
- Robots that align components and move parts to new locations.
These functions all flow from a hierarchy of tool properties in the EasyBuilder interface.
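To make the PLC handoff concrete, here is a minimal sketch of encoding a result for a downstream device. The frame format and field layout are hypothetical; real installations use an industrial protocol (EtherNet/IP, PROFINET, and so on) configured in the vision software rather than a hand-rolled message.

```python
# Illustrative only: encode a pass/fail result as a simple ASCII frame
# that a PLC program could parse. The format is a made-up example, not
# a real industrial protocol.

def build_plc_message(part_id: int, passed: bool) -> bytes:
    """Encode a result as '<zero-padded part id>,<PASS|FAIL>\\n'."""
    status = "PASS" if passed else "FAIL"
    return f"{part_id:06d},{status}\n".encode("ascii")

print(build_plc_message(42, False))  # b'000042,FAIL\n'
```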
Part 4: Finish
Now it’s time to wrap up the design and get your application into production. Here, you can troubleshoot images and double-check that everything is working properly.
One of the more powerful features in EasyBuilder is the results queue, which allows the user to capture results based on specific conditions. This makes it easier to diagnose errors that crop up only occasionally. A filmstrip option visualizes all of the objects under inspection in real time.
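The idea behind conditional capture can be sketched briefly. This is not EasyBuilder's implementation, just an illustration: a queue that keeps only the inspections matching a condition (here, failures) makes intermittent errors easy to review later. The record fields are hypothetical.

```python
# Illustrative sketch of a results queue: capture only the inspection
# records that match a condition, with a bounded history.

from collections import deque

class ResultsQueue:
    def __init__(self, condition, maxlen=100):
        self.condition = condition            # predicate deciding what to keep
        self.captured = deque(maxlen=maxlen)  # bounded history of matches

    def record(self, result: dict) -> None:
        if self.condition(result):
            self.captured.append(result)

# Capture only failed parts from a stream of inspections:
queue = ResultsQueue(condition=lambda r: not r["passed"])
for i, ok in enumerate([True, True, False, True, False]):
    queue.record({"part": i, "passed": ok})
print([r["part"] for r in queue.captured])  # [2, 4]
```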
The Value of a Point-and-Click Interface for Machine Vision
Cognex designed EasyBuilder to provide an intuitive point-and-click interface for crafting machine vision applications on standard computing devices without advanced training. That said, machine vision in a production environment has hundreds of variables. Just as professional photographers use an armada of cameras, lenses, lights, computers and software, machine vision application designers need a wealth of options to build the best application for their needs. We created EasyBuilder to satisfy these needs.
Every industrial automation project has unique parameters because of variables like ambient light and the dimensions of objects in production. Because mastering the interplay of these features is time-consuming, we have developed a series of tutorials to give EasyBuilder users the knowledge they need to excel at building machine vision applications.
Find out more:
A Quick Look at Color Lighting and Filters for Vision Sensors
Cognex Online and Classroom Training