How to Train Edge Learning


Training edge learning is similar to training a new employee on the line. What the user needs to know is not how machine vision or artificial intelligence (AI) works, but what problem they need to solve.
 
For instance, if the application is straightforward, such as classifying acceptable and unacceptable parts as OK/NG, the user needs to know which parts are acceptable and which are not. In this use case, edge learning is particularly effective at determining which variations in the part are significant, and which variations are purely cosmetic and do not affect functionality.
 
Edge learning is not limited to binary classification, either. If parts need to be sorted into three, four, or even more categories, that application can be deployed just as easily. Edge learning is also capable of analyzing multiple regions of interest (ROIs) in the same image. And, of course, multiple ROIs and multiple categories can be handled together, making the technology both extremely capable and easy to use.
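To make this concrete, here is a minimal, hypothetical Python sketch of how results from several ROIs, each sorted into one of several classes, might be rolled up into a single part-level decision. The RoiResult structure, class names, and confidence threshold are illustrative assumptions, not the Cognex API.

```python
# Hypothetical sketch: aggregating multi-ROI, multi-class results into one
# part-level decision. RoiResult, the class names, and the threshold are
# illustrative, not the Cognex API.
from dataclasses import dataclass

@dataclass
class RoiResult:
    roi_name: str        # which region of interest was inspected
    predicted: str       # e.g. "OK", "NG", "no_part"
    confidence: float    # 0.0 .. 1.0

ACCEPTABLE = {"OK"}      # classes that count as a good part

def part_passes(results: list[RoiResult], min_confidence: float = 0.8) -> bool:
    """A part passes only if every ROI predicts an acceptable class
    with sufficient confidence."""
    return all(
        r.predicted in ACCEPTABLE and r.confidence >= min_confidence
        for r in results
    )

# Example: two ROIs on the same image, one of which finds a defect.
results = [
    RoiResult("cap_seal", "OK", 0.97),
    RoiResult("label", "NG", 0.91),
]
print(part_passes(results))  # False: the label ROI predicted NG
```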
 
Watch a step-by-step tutorial on how to train edge learning below and see how you can leverage this technology in your next factory automation deployment.

Transcript:

Hi, my name is Tyler Ducharme and I'm a technical marketing specialist working with the In-Sight product team here at Cognex. Today, I'd like to walk through how to set up an In-Sight 2800 application with our brand new ViDi EL classification tool.

In terms of my setup, I have my In-Sight 2800 smart camera, equipped with an integrated light and lens, connected to 24 volt power. With our new ViDi EL classification tool, combined with the In-Sight 2800 smart camera, I'll show you how we've made vision simple. Okay, so this is the In-Sight Vision Suite software that we use to set up our In-Sight applications. As you can see, I'm already connected to the In-Sight 2800 smart camera.

First, we'll select our source; in this case, we're just going to use the camera itself. All we need to do is set up our lighting and focus with two simple clicks of a button. So all I have to do is click Optimize Lighting to get a nice bright image, and then click Focus to bring the image into focus. The first step in training the classification tool is setting the region of interest that we want to inspect. Once I've configured the region, I just press OK, and the tool shows me the two default classes, which are OK and NG, meaning not good. The tool will automatically assign the OK label to the first image, so I have my OK part under the camera now. One other thing I want to mention: what if there's no part in the image at all? In that case, the beauty of our tool is that we can add another class and call it "no part." We'll bring that class in and train it as we go.

Let's start by collecting some images. This is our good part. I can move it around the image a bit, trigger the camera, and this is still an OK part. So all I have to do is press the OK button, and you'll see that my number of images increments here, meaning I've trained on that image. What you'll notice is that we have a green ring around a yellow circle: the yellow circle indicates which class is being predicted, and the green ring shows the level of confidence. In this case, because I just trained on this image, the confidence score is at 100%.
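For readers following along, here is a minimal sketch of the relationship the demo describes between per-class scores, the predicted class (the yellow circle), and the confidence (the green ring). The score values and dictionary layout are illustrative assumptions, not output from the tool.

```python
# Hypothetical sketch of what the yellow circle and green ring convey:
# the predicted class is the highest-scoring class, and the confidence
# is that class's score. Score values are illustrative.
scores = {"OK": 1.00, "NG": 0.00, "no_part": 0.00}  # just after training an OK image

predicted = max(scores, key=scores.get)   # class shown in the yellow circle
confidence = scores[predicted]            # level filled into the green ring

print(f"{predicted}: {confidence:.0%}")   # OK: 100%
```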

Alright, let's bring in some bad parts with a broken seal. Trigger the camera, and since we haven't trained any bad parts yet, the tool has some trouble predicting which class this belongs to; it has no confidence in any of the classes we have so far. Once we start to train, you'll see that our confidence shoots up to 88%. Lastly, we'll do the no part class, which is pretty straightforward: take an image with no part, train it, and that's looking pretty good to me. I do want to mention the model health metric here at the bottom. We use this metric to decide when our application is ready to be deployed. A good rule of thumb: when model health is over 80% and stabilizing as we add more images, it's time to deploy your app.

Okay, I want to add a couple more images here, so I'm going to go into the edit classes window. I've actually collected a sample of images beforehand, so I can import them into the tool by selecting the folder. Great. We brought in a total of 10 images, and now you can see that each of them has a prediction with the green ring. Let's select all of these. It's really simple to label all of them at once: all I have to do is click and drag them into the class we want to train them on. These all seem to be OK, so again we can just select them all and click and drag them right into the class.

Okay, let's get back out of edit classes. As you can see, our model health is now up at 99%. We've trained eight images for NG, two images for no part, and nine images for OK. I think this application is ready to deploy, so let's go test it. I'm going to put the device online. We have no part under the camera now. Let's place a part under the camera: we've got OK, and I can move it around the image; you can see that our confidence is very high. You can rotate the part, and it still looks great. Now let's put NG under there. Again, we can rotate the part and move it around. That's great. Maybe add some more variation. We're still properly classifying, and I think that looks great.
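The "over 80% and stabilizing" rule of thumb can be expressed as a simple check. Below is a hypothetical Python sketch; the stability window and swing threshold are assumptions chosen for illustration, not values from the software.

```python
# Hypothetical sketch of the deploy rule of thumb described above:
# deploy once model health is over 80% and has stabilized as images
# were added. "Stabilized" is approximated here as the last few
# readings staying within a small band; the thresholds are illustrative.
def ready_to_deploy(health_history: list[float],
                    threshold: float = 0.80,
                    window: int = 3,
                    max_swing: float = 0.05) -> bool:
    if len(health_history) < window:
        return False
    recent = health_history[-window:]
    stable = max(recent) - min(recent) <= max_swing
    return min(recent) > threshold and stable

# Model health recorded after each new training image:
history = [0.55, 0.72, 0.86, 0.97, 0.99, 0.99]
print(ready_to_deploy(history))  # True: over 80% and stabilizing
```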

The last thing I want to show you is creating a runtime HMI display for users who might be on the factory floor monitoring this device. Let's go into the HMI step. Luckily for us, a lot of the information a user might want to know is already shown by default. We have the region of interest where we're inspecting the part, and the predicted class both on the image itself and in this gray box here. But let's say a user wants to know how much confidence the tool has in its prediction. All we have to do is go into this drop-down, where we have all the different properties of the tool, and find the predicted class score, which is the percentage. We can just bring that right into our display, then go back online showing the OK part. Now we have our predicted class confidence score as well. Here's NG. Great. And I think that's about it for setting up the HMI display.

To recap today's tutorial: first, we connected to our device in In-Sight Vision Suite. We then set up our image in two simple clicks. We brought in our ViDi EL classification tool and created three classes: OK, NG, and no part. We took multiple images of each type of part and assigned the proper images to each class. Then, once our model health was over 80% and stable, we ran our job and made sure the parts were classified properly.

Lastly, we created a very simple HMI display that exposes both the predicted class and its confidence percentage. From here, we could further build out our HMI with information from other tools or more data to help line users track device performance. We can also set up communications to interface with factory PLCs or other devices on the factory floor.
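As a rough illustration of what that downstream monitoring could look like, here is a hypothetical Python sketch that logs results arriving as lines of text over a plain TCP socket. The address, port, and message format are invented for illustration; a real deployment would use the communication protocols configured in the software, such as EtherNet/IP or PROFINET.

```python
# Hypothetical sketch of factory-floor logging: assume each inspection
# result is published as one line of text over a plain TCP connection
# (e.g. "OK,0.97"). This is NOT the camera's actual protocol; real PLC
# integration uses the protocols configured in the software.
import socket

HOST, PORT = "192.168.1.10", 5000  # placeholder address for the device

with socket.create_connection((HOST, PORT)) as sock:
    for line in sock.makefile("r"):
        predicted, score = line.strip().split(",")
        print(f"part classified {predicted} at {float(score):.0%} confidence")
```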

Thanks again for watching.

 
