Industry:
Life Sciences

Location:
Munich, Germany

Customer Objectives:
• Rapidly image centrifuged blood tubes in racks within tight space constraints
• Convert captured images into actionable height information for blood testing aspiration
• Identify and classify the blood plasma layer after separating blood samples
• Address challenging industry conditions, including staffing shortages and continually evolving technologies and processes

Key Results:
• Overcame a major challenge for an automated blood fractionation test set, a key component of RNA and DNA testing and disease diagnosis: inspection of the buffy coat layer
• Automated identification and classification of the buffy coat layer despite variability in plasma, tube size, tube placement, and cap type
• Combined traditional programming with deep learning algorithms to correctly identify blood samples that fall outside normal conditions

Cognex Solution:

      Automation technologies such as machine vision, robotics, and motion control have remade the industrial world, but what about the human condition?

Today, machine vision guides robots through repetitive, dangerous tasks and verifies the quality of finished goods to protect workers and customers. But what if the object is not an automobile but someone’s aunt? Not a flat screen but a father, mother, or sister? Medical lab technicians have never been under greater pressure, facing staffing shortages, constantly changing technology and processes, and an already demanding job during a global pandemic. Until recently, automation technologies struggled to master the complex skill sets needed to help these technicians heal thousands, even millions, of people around the world.

Working with medical instrumentation experts at PerkinElmer, Cognex engineers demonstrated how deep learning technology, a subset of AI, combined with compact embedded industrial cameras can overcome one of the last remaining challenges for an automated blood fractionation test set, a key component of RNA and DNA testing and disease diagnosis. The solution is helping doctors quickly diagnose a wide variety of conditions, including COVID-19, which has turned the modern world upside down.

      Blood Fractionation: Breaking Down the Problem

Founded in 1937 and headquartered in Waltham, MA, global medical technology conglomerate PerkinElmer brought the challenge of detecting the buffy coat in fractionated blood samples to Cognex five years ago, following its acquisition of Chemagen. Chemagen was leading the way in the field of nucleic acid isolation for large-scale pathogen detection in blood banking, autoimmune and tissue analysis, and disease detection.

“There has been an exponential increase in interest around plasma-derived circulating-free DNA [cfDNA] as a potential disease biomarker,” says James Atwood, General Manager of Applied Genomics at PerkinElmer. “And the use of buffy coat as a source of blood-based genomic DNA [gDNA] has experienced rapid adoption due to the attractive economics.”

      At the core of the JANUS system is a multiwavelength color vision system that captures images of centrifuged blood samples. The biggest challenge for the PerkinElmer team was how to rapidly image centrifuged blood tubes and robustly convert those images to actionable height information for blood testing, Atwood says.

Blood testing analyzers, like the JANUS system, depend on accurately prepared samples and test setups. In practice, clinicians first draw the blood samples. Tubes containing the samples are placed in a centrifuge to separate the blood into its three constituent parts: red blood cells, white blood cells (buffy coat), and plasma. While the red blood cells and white blood cells are fairly easy to differentiate, the plasma can vary considerably in color, from a light yellow to dark orange or red, depending on hemolysis, lipidity, and other factors in the blood sample.

      PerkinElmer plasma samples

“The variability of the buffy coat, blood plasma, tube size, placement, cap type, and other conditions requires a complex automated solution that can’t be solved robustly and repeatably using only a traditional deterministic machine vision software approach,” explains Dr. Joerg Vandenhirtz, senior AI expert of the life sciences team at Cognex. “We could correctly identify roughly 80% of the blood sample layers using traditional methods, but to capture the remaining 20% of blood samples that fall outside normal conditions, combining traditional programming with deep learning algorithms proved to be the best approach to solve this application with extremely high accuracy and repeatability.”
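To make the hybrid idea concrete, the sketch below shows one way a deterministic rule can handle the straightforward majority of samples while deferring ambiguous ones to a trained model. It is illustrative only: the brightness-profile heuristic, the confidence threshold, and the dl_classify_layers callable are assumptions, not PerkinElmer's or Cognex's actual implementation.

```python
# Illustrative sketch only -- not PerkinElmer's or Cognex's actual code.
# Try a deterministic rule first, and fall back to a learned model when
# the rule is not confident. `dl_classify_layers` is a hypothetical
# stand-in for a trained deep learning model; thresholds are arbitrary.

import numpy as np

def rule_based_layers(column_profile: np.ndarray) -> tuple[dict | None, float]:
    """Estimate layer boundaries from a 1-D brightness profile along the tube.

    Returns (boundaries, confidence). Works only when the layer contrast is
    strong, i.e. roughly the "normal" 80% of samples.
    """
    gradient = np.abs(np.diff(column_profile.astype(float)))
    peaks = np.argsort(gradient)[-2:]             # two strongest transitions
    confidence = float(gradient[peaks].min() / (gradient.mean() + 1e-6))
    if confidence < 3.0:                          # weak contrast: not trustworthy
        return None, confidence
    top, bottom = sorted(int(p) for p in peaks)
    return {"plasma_top": top, "buffy_coat": bottom}, confidence

def find_layers(column_profile: np.ndarray, dl_classify_layers) -> dict:
    """Hybrid strategy: deterministic rule first, deep learning fallback."""
    boundaries, _confidence = rule_based_layers(column_profile)
    if boundaries is not None:
        return boundaries
    # Atypical sample (hemolysis, unusual plasma color, odd tube): defer to
    # the trained model instead of forcing a brittle rule.
    return dl_classify_layers(column_profile)
```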

      Combining Traditional Machine Vision and Deep Learning

The job of the JANUS G3 system is ultimately to aspirate the buffy coat layer of a centrifuged blood sample for further RNA/DNA extraction. To do this, the system needs to know the exact position of the buffy coat in real-world coordinates so it can automatically align the pipette tips with the buffy coat layer and aspirate it. Blood separation and the presence or absence of labels and caps are also important factors in quality assessment, which is crucial for robust workflows in highly automated labs: while the system must confirm that the cap is removed for the pipetting process, a missing cap during tube transport could contaminate the machine or nearby samples, halting the system’s operation. All of these degrees of freedom can vary in appearance based on the specific tubes and caps used to hold the samples, how samples are loaded and oriented in the rack, and other factors. Because there are so many judgment-based factors, this inspection typically falls to humans in laboratory environments.
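As a rough illustration of the real-world coordinate step, the sketch below converts a detected buffy coat image row into a deck-relative aspiration height using a simple linear calibration. The calibration values and field names are hypothetical, not taken from the JANUS G3 system.

```python
# Minimal sketch of the pixel-to-world conversion step described above.
# The calibration constants are illustrative, not values from the JANUS G3.

from dataclasses import dataclass

@dataclass
class TubeCalibration:
    mm_per_pixel: float        # vertical scale from camera calibration
    tube_bottom_row: int       # image row corresponding to the tube bottom
    rack_z_offset_mm: float    # height of the tube bottom above the deck

def buffy_coat_height_mm(buffy_coat_row: int, cal: TubeCalibration) -> float:
    """Convert the detected buffy-coat image row into a deck-relative height
    the liquid handler can use to position the pipette tip."""
    pixels_above_bottom = cal.tube_bottom_row - buffy_coat_row
    return cal.rack_z_offset_mm + pixels_above_bottom * cal.mm_per_pixel

# Example: a buffy coat detected at image row 612
cal = TubeCalibration(mm_per_pixel=0.05, tube_bottom_row=900, rack_z_offset_mm=12.0)
print(f"aspiration height: {buffy_coat_height_mm(612, cal):.2f} mm")
```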

      “When it comes to reading labels and barcodes, traditional machine vision systems are the best approach,” notes Vandenhirtz. “But when it comes to measuring the sample levels and colors inside the tube and identifying caps that can be of any color or shape, deep learning algorithms can accommodate the biological variability without negatively affecting the accuracy of inspection results.”

Cognex’s VisionPro Deep Learning software platform combines commercial-grade deterministic machine vision algorithms and functions with deep learning software tools that can run on embedded or traditional PCs, depending on the application. Deep learning analyzes images that have been tagged as “good” or “bad” by quality experts. By analyzing dozens, hundreds, or even thousands of sample images, the deep learning software “learns” what is good and bad by example, much like a human child, rather than from rules set by a programmer.
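The toy example below illustrates that learn-by-example workflow with generic PyTorch: a small classifier is fitted to images tagged good or bad rather than programmed with explicit rules. It is not the VisionPro Deep Learning toolchain, and the random tensors merely stand in for expert-labeled tube images.

```python
# A toy illustration of learning "good"/"bad" by example rather than by rules.
# Generic PyTorch, not Cognex's toolchain; random tensors stand in for
# labeled sample images.

import torch
from torch import nn

model = nn.Sequential(                       # tiny image classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                         # two classes: good / bad
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 3, 64, 64)          # stand-in for tagged tube images
labels = torch.randint(0, 2, (32,))          # 0 = good, 1 = bad, set by experts

for epoch in range(5):                       # learn the mapping from examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```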

      “The best rule to remember for deciding if you should explore a traditional machine vision or deep learning machine vision solution is that traditional machine vision solutions work best when the product’s appearance and potential defects are predictable,” explains Vandenhirtz. “Deep learning is better for evaluating images of objects with unpredictable features. In the case of the JANUS G3 Blood iQ system, the tubes, caps, labels, and constituent fluids can all be different in size, shape, color, position, and more. For this type of complex application, the combination of traditional machine vision and deep learning proved much more effective at measuring the features of the tube and the samples inside.”

      Analyzing centrifuged blood samples with deep learning

Once the operator loads a rack of tube samples into the JANUS G3 Blood iQ, the analysis begins with a single row of tubes passing between two Cognex Advantage vision cameras: one Advantage 102 Color and one Advantage 100 monochrome OEM camera. Each Advantage vision camera includes an onboard AE3 vision engine module, allowing the ultracompact vision system to run Cognex’s In-Sight embedded image processing algorithms while connected to a nearby embedded PC that runs deep learning software for advanced image analysis.

While the Advantage 100 uses edge detection and Cognex’s proprietary IDMax algorithm to read the barcode label that identifies each tube, the Advantage 102 Color camera acquires two images of each tube under two differently colored lights – first white, then blue.
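A minimal sketch of that per-tube acquisition sequence is shown below. The camera, light, and barcode-reading interfaces are hypothetical placeholders rather than the Advantage or In-Sight APIs; the point is simply the ordering of barcode read, white-light capture, and blue-light capture.

```python
# Sketch of the per-tube acquisition sequence described above. The camera and
# light objects are hypothetical placeholders, not the Cognex Advantage or
# In-Sight APIs.

from typing import Protocol

class Camera(Protocol):
    def acquire(self) -> object: ...         # returns an image

class Light(Protocol):
    def on(self) -> None: ...
    def off(self) -> None: ...

def image_tube(mono_cam: Camera, color_cam: Camera,
               white: Light, blue: Light, read_barcode) -> dict:
    """For one tube: read its barcode, then capture a white-light and a
    blue-light image for downstream layer analysis."""
    tube_id = read_barcode(mono_cam.acquire())   # barcode from the mono camera

    white.on()
    white_img = color_cam.acquire()              # plasma height, cap, tube top
    white.off()

    blue.on()
    blue_img = color_cam.acquire()               # highlights the buffy coat
    blue.off()

    return {"tube_id": tube_id, "white": white_img, "blue": blue_img}
```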

      PerkinElmer plasma height

The first, “white light” image is passed from the Advantage 102 Color camera to the VisionPro Deep Learning software running on a host PC inside the JANUS G3 Blood iQ instrument, which uses one of its four basic toolsets – classify – to determine whether the image is a white-light or blue-light image. White light provides a broadband source for detecting the blood plasma height – regardless of plasma color – at the top of the sample, as well as the tube top and cap, which can be any color, while blue light highlights the buffy coat layer created by the white blood cells. Cognex’s In-Sight software running on the Advantage 102 Color converts the deep learning layer findings into real-world measurements. The JANUS G3 Blood iQ machine then uses the buffy coat layer depth to ensure that the pipette is in the right position for buffy coat aspiration for final analysis and diagnosis.
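The sketch below mimics that classify-then-measure flow in plain NumPy: decide which illumination an image was captured under, then route it to the appropriate measurement. The channel-balance heuristic and the measurement callables are stand-ins for the deep learning classify tool and the downstream analysis, not the actual JANUS G3 Blood iQ code.

```python
# A sketch of the classify-then-measure flow described above, using plain
# NumPy in place of the VisionPro Deep Learning "classify" tool. The
# measurement callables are hypothetical stand-ins.

import numpy as np

def classify_illumination(img: np.ndarray) -> str:
    """Very rough proxy for the classify step: decide whether an RGB image
    was taken under white or blue light from its channel balance."""
    r, g, b = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
    return "blue" if b > 1.5 * max(r, g) else "white"

def analyze(img: np.ndarray, measure_plasma, measure_buffy_coat) -> dict:
    if classify_illumination(img) == "white":
        # White light: plasma height, tube top, and cap (any color)
        return {"kind": "white", **measure_plasma(img)}
    # Blue light: buffy coat layer position and thickness
    return {"kind": "blue", **measure_buffy_coat(img)}
```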

      Deep learning shortens product development

Deep learning programs are created in two steps. The first step is training, in which the program analyzes tagged images to learn what is good versus what is bad. The second step is inference, deploying the trained program to do its job. Because it analyzes images in detail as it learns new inspection routines, the training portion of deep learning software is more computationally intensive than traditional machine vision algorithms. For this reason, developing a deep learning solution is typically done on a workstation, where computing resources are plentiful. Once the deep learning neural network is trained and performing to the designer’s requirements, the program can be run on a standard PC or an embedded, edge-computing device.
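The generic PyTorch sketch below illustrates that train-then-deploy split: weights produced on a training workstation are saved, then loaded for inference-only use on the runtime PC or edge device. The model architecture and file name are placeholders, not part of Cognex's toolchain.

```python
# Sketch of the train-then-deploy split described above, using generic
# PyTorch rather than Cognex's toolchain. Training happens on a workstation;
# only the saved weights are shipped to the runtime PC or edge device.

import torch
from torch import nn

def build_model() -> nn.Module:
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 2))

# --- training workstation ---
model = build_model()
# ... training loop over tagged images would run here ...
torch.save(model.state_dict(), "layer_classifier.pt")    # export trained weights

# --- runtime PC / edge device ---
deployed = build_model()
deployed.load_state_dict(torch.load("layer_classifier.pt"))
deployed.eval()                                           # inference only
with torch.no_grad():
    scores = deployed(torch.randn(1, 3, 64, 64))          # stand-in image
```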

The deep learning software running inside the JANUS G3 Blood iQ first determines whether an image was taken under white or blue light. If white, the software identifies the type of plasma layer in the tube, which is then combined with the buffy coat layer position and thickness and the tube dimensions before being passed to the JANUS host.
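As an illustration of that hand-off, the sketch below bundles the plasma classification, buffy coat measurements, and tube dimensions into one record and serializes it for the host. The field names and message format are assumptions, not the actual JANUS interface.

```python
# Sketch of the combined result record described above. Field names are
# illustrative, not the actual JANUS G3 Blood iQ message format.

import json
from dataclasses import dataclass, asdict

@dataclass
class TubeResult:
    tube_id: str
    plasma_type: str            # e.g. "normal", "hemolyzed", "lipemic"
    plasma_top_mm: float
    buffy_coat_top_mm: float
    buffy_coat_thickness_mm: float
    tube_height_mm: float
    cap_present: bool

def to_host_message(result: TubeResult) -> str:
    """Serialize the concatenated measurements for hand-off to the JANUS host."""
    return json.dumps(asdict(result))

print(to_host_message(TubeResult("BC-000123", "normal", 62.4, 41.8, 0.9, 100.0, False)))
```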

“Cognex VisionPro Deep Learning software is designed so that if you can operate Microsoft Office, then you can program a deep learning machine vision solution,” explains Vandenhirtz. “Our integrated software environment means that customers like PerkinElmer can build a deep learning solution using hundreds of images rather than thousands or tens of thousands of images, which reduces the time-to-market burden on the OEM customer. And unlike other deep learning solutions, Cognex’s uses four basic tools – locate, analyze, classify, and read – which makes it easier to debug the solution as you develop it. With open-source or end-to-end solutions, designers have no recourse but to add more images to train the neural network and hope that improves the software’s performance. This ‘black box’ character of open-source, end-to-end solutions is why developers in the medical space are still reluctant to use these new technologies. With VisionPro Deep Learning, designers can break complex problems down into smaller tasks that can be optimized individually and thus are much easier to understand and maintain.”
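The fragment below sketches that modular idea: a pipeline of small, named stages whose intermediate outputs remain inspectable, so an underperforming stage can be retrained or tuned on its own rather than retraining an opaque end-to-end model. The stage names and functions are hypothetical.

```python
# Sketch of the "small, individually optimizable tasks" idea from the quote
# above: a pipeline of named stages whose intermediate outputs can be
# inspected and debugged one at a time, unlike a single end-to-end model.
# The stage functions are hypothetical placeholders.

from typing import Any, Callable

Stage = tuple[str, Callable[[Any], Any]]

def run_pipeline(image: Any, stages: list[Stage]) -> dict[str, Any]:
    """Run each stage on the previous stage's output and keep every
    intermediate result, so a failing stage can be fixed in isolation."""
    results: dict[str, Any] = {}
    value = image
    for name, stage in stages:
        value = stage(value)
        results[name] = value
    return results

# e.g. run_pipeline(img, [("locate", locate_tube), ("classify", classify_light),
#                         ("analyze", measure_layers), ("read", read_barcode)])
```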

Introduced in January of this year, the JANUS G3 Blood iQ system has been well received by the laboratory and clinical community. “Earlier this year, our team launched the JANUS G3 Blood iQ workstation for research use only,” says PerkinElmer’s Atwood. “The project arose out of a need to support intelligent pipetting of fractionated blood, as a front-end liquid handling platform for our industry-leading chemagen™ nucleic acid extraction technology. The Cognex Life Sciences OEM team stepped up to support us. We collaborated with Cognex from the beginning of the project to the successful launch, and their expertise in image-based deep learning and machine vision was invaluable.”
