Quality control has become synonymous with manufacturing. Nowadays, companies strive to achieve maximum production capacities while adhering to the highest quality standards.
Conventionally, manufacturers employ large numbers of industrial workers who manually inspect each item coming off the assembly line. This method has several obvious drawbacks. Despite in-depth guides and sufficient training, humans still judge certain situations subjectively and make mistakes. Moreover, production capacity will always be capped by the number of inspection specialists, which hinders scalability.
This is why computer vision software has made its way into manufacturing plants. With automated visual inspection (AVI) systems in place, manufacturers can reach maximum production capacity while ensuring regulatory compliance.
The first step of moving from manual to automated inspection was taken with automated optical inspection (AOI) systems.
Although AOI and AVI systems are very similar, they have some key differences. AOI systems are equipped with high-resolution optical cameras that capture visual information. This data is then compared to the template (the image of a particular non-defective item in a perfect condition) to identify defects. The core technology behind AOI is machine vision.
Although AOI has entirely transformed quality control in many manufacturing facilities, it has certain drawbacks.
In a nutshell, an AOI application can be considered a rule-based system. It matches an inspected object against a perfect one and answers the question: “Do the captured visuals match the original?” If the answer is “no,” the product is flagged as flawed.
Of course, with statistical pattern matching in place, developers can train these models to classify defects. However, if even a minor change in the environment causes a defect to look different, machine vision will have a hard time classifying it. Unsurprisingly, if a previously unseen defect type appears, AOI has no chance of classifying it.
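The rule-based logic described above can be sketched in a few lines. This is an illustrative toy, not a real AOI pipeline: images are tiny 2D lists of grayscale values, and the tolerance and mismatch-ratio parameters are assumptions chosen for the example. Real systems work on high-resolution camera frames with robust alignment and far more sophisticated comparison metrics.

```python
# Minimal sketch of rule-based, template-matching inspection (illustrative only).
# An item "passes" if its pixels stay close enough to the golden template.

def inspect(template, captured, pixel_tolerance=10, max_mismatch_ratio=0.02):
    """Flag the item as defective if too many pixels deviate from the template."""
    total = 0
    mismatched = 0
    for t_row, c_row in zip(template, captured):
        for t_px, c_px in zip(t_row, c_row):
            total += 1
            if abs(t_px - c_px) > pixel_tolerance:
                mismatched += 1
    return (mismatched / total) <= max_mismatch_ratio  # True -> item passes

# A near-perfect copy passes; a bright "scratch" row fails the rule-based check.
template = [[100] * 8 for _ in range(8)]
good = [[102] * 8 for _ in range(8)]        # within pixel tolerance everywhere
scratched = [row[:] for row in template]
scratched[3] = [250] * 8                    # one defective row
print(inspect(template, good))       # True
print(inspect(template, scratched))  # False
```

The weakness the article points out is visible here: any lighting shift that moves pixel values past the tolerance trips the rule, whether or not a real defect is present.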
By augmenting existing AOI systems with AI, the aforementioned challenges can be solved.
By automated visual inspection we mean an inspection method that uses a combination of computer vision and deep learning to make decisions. In the simplest terms, AVI systems are designed to simulate the capabilities of a perfectly attentive, error-free, constantly learning and evolving inspection specialist.
Unlike machine vision-based AOI applications that need to be reconfigured each time a new variable is added, computer vision-based AVI systems can source all the relevant information from a large dataset and autonomously classify a defect.
Tolerance to variation and environmental anomalies is one of the most important differences between AOI and AVI. Computer vision and deep learning allow automated visual inspection systems to make decisions by understanding the contents of a captured image rather than by comparing it to the reference. This significantly expands the range of defects that an AVI system can inspect, increasing accuracy and reliability.
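One way to picture the difference is the decision layer that sits on top of a trained model. The sketch below is hypothetical: the class names, raw scores, and confidence threshold are all assumptions for illustration, and the "model" is replaced by hard-coded scores. The point is that an AVI system classifies what it sees and can defer uncertain cases to a human, rather than pass/fail against a fixed reference.

```python
# Illustrative sketch (not a real model): a decision layer consuming class
# scores that a trained deep learning classifier would produce.
import math

DEFECT_CLASSES = ["no_defect", "scratch", "dent", "discoloration"]  # assumed

def softmax(scores):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(raw_scores, confidence_threshold=0.8):
    """Classify the item, escalating uncertain cases to a human inspector."""
    probs = softmax(raw_scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < confidence_threshold:
        return "human_review"  # the system defers instead of guessing
    return DEFECT_CLASSES[best]

print(decide([0.1, 4.0, 0.2, 0.3]))  # confident -> "scratch"
print(decide([1.0, 1.1, 0.9, 1.0]))  # ambiguous -> "human_review"
```

The escalation path matters in practice: flagged items become new labeled training examples, which is how such systems keep improving over time.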
In a nutshell, automated visual inspection systems for defect identification are applicable in the following industries, where adaptability, scalability, and precision are paramount.
Unsurprisingly, when it comes to quality control, aerospace is one of the most regulated industries in the world. However, it’s not just a matter of compliance but also of astronomical maintenance costs and the human lives at stake. With various automated visual inspection systems in place, many aircraft inspection and maintenance processes can be optimized, ensuring safety and increasing efficiency.
For example, AutoInspect is a robot-based optical system developed by 3D.aero, which uses an AI-powered inspection system and industrial robotics for smart defect classification. Besides increased accuracy and reliability, 3D.aero reports that a full aero-engine combustion chamber can be examined in less than four hours, which is 80% faster than manual inspection. Interestingly enough, 3D.aero doesn’t let its AI algorithm improve itself automatically, as safety is of utmost importance in an industry with stakes as high as aerospace. Currently, new training data can be added only after approval from a human expert.
However, checking structural integrity is required even after an aircraft leaves the manufacturing facility. Under the Corrosion Prevention and Control Program (CPCP), airline companies must ensure that aircraft are free from functional defects and perfectly clean before every take-off. Contaminating substances such as oil and grease can be detrimental to aircraft skin, causing corrosion and degradation of components.
In the majority of cases, aircraft surface inspection is still done by humans, which has proven to be ineffective, time-consuming, and unsafe. While detecting surface stains in controlled environments is an easy task for modern computer vision applications, it’s rather unfeasible to do so when an aircraft is fully assembled.
This is why a group of researchers from Singapore, Mexico, Pakistan, and India has developed a reconfigurable climbing robot called Kiropter, which uses an enhanced deep learning algorithm. The robot is flexible enough to move around complex areas of an aircraft and can effectively detect unwanted surface defects and stains. The researchers report an average detection accuracy of 96%, varying by defect type.
Kiropter is yet to be mass-produced, but such initiatives prove that with innovative hardware design, automated visual inspection systems can be applied even in uncontrolled environments.
In such a large-scale industry as automotive, even the slightest improvements in efficiency can result in a significant competitive advantage.
For example, in 2020 Volvo Cars started using the computer vision-based Atlas inspection system developed by UVeye. At the end of the assembly line, each vehicle is inspected by more than 20 computer vision-powered cameras installed on an aluminum tunnel. Each camera takes hundreds of images per second, allowing the AI algorithm to assess the surface quality in detail.
Given that captured images take up to 10 gigabytes of storage per car, the UVeye system sends this data to a cloud business intelligence system. Unsurprisingly, the algorithm is also able to evolve based on historical data, which helps with identifying anomaly trends. David Oren, Chief Strategy Officer at UVeye, claims that the system is more efficient and accurate than conventional manual inspection methods, detecting from 10% to 40% more defects including scratches, dents, and component alignment anomalies. The system can detect even the tiniest defects measuring 0.2 millimeters in size.
Despite the growing interest in interior inspection among vehicle manufacturers all over the world, Oren admits that this is currently not possible with this technology.
In healthcare, computer vision-based automation is mostly driven by regulations.
For example, both in the US and Europe, the respective regulatory institutions require the use of a Unique Device Identifier (UDI) to track medical devices through the supply chain. UDI marks contain all the critical information, including the serial number and device expiration date. In many cases, conventional machine vision-based optical character recognition technology is powerful enough to ensure that this data is correct. However, some medical equipment is marked with chemically processed direct part marking (DPM) text, which can only be deciphered by deep learning-enabled automated visual inspection systems.
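Once OCR has read the mark, the downstream step is mundane string parsing. As a hedged sketch, the snippet below splits the human-readable form of a GS1-style UDI into its fields using the standard application identifiers (01 = device identifier/GTIN, 17 = expiration date, 10 = lot, 21 = serial number). The label value itself is made up; a production parser must follow the full GS1 General Specifications, including variable-length fields and separator characters.

```python
# Hedged sketch: parsing the human-readable form of a GS1-style UDI string
# after OCR has extracted it from a device label. The example value is
# fabricated for illustration.
import re

def parse_udi(udi_text):
    """Split '(AI)value' pairs into a dict keyed by application identifier."""
    return dict(re.findall(r"\((\d{2,4})\)([^(]+)", udi_text))

label = "(01)00844588003288(17)261231(10)A213B1(21)SN001"
fields = parse_udi(label)
print(fields["01"])  # device identifier (GTIN): 00844588003288
print(fields["17"])  # expiration date (YYMMDD): 261231
print(fields["21"])  # serial number: SN001
```

In an AVI pipeline, a parse failure or an expired date found here would flag the item just as a visual defect would.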
Regardless of the regulatory burden, the quality of medical device production can be a matter of life and death. Sterile, contaminant-free packaging is fundamental to medical device safety. Given the great variance in packaging issues, including underseals, foreign material presence, and punctures, complex automated visual inspection solutions with deep learning at their core are well suited to ensuring package sterility and integrity.
Pharmaceutical manufacturers can also implement AVI systems for pill inspection. Before being packaged, pills are always manually inspected by human experts to ensure that the product is free from surface defects and has the correct labeling, color, and shape. Machine vision-based automated inspection systems have proven to be easily tricked by pills’ reflective surfaces, which often make them look damaged. This is why the industry is turning to complex AVI systems that can accurately identify and classify defects that are not present in initial training sets.
In general, an AVI system consists of sensing devices and processing software. While it may seem that implementation is pretty straightforward, there are a few important factors that manufacturers need to consider.
The most crucial prerequisite for automated visual inspection success is data. It’s critical for the underlying AI algorithm to learn as many examples of defect types as possible. Training images also need to be captured in real-life lighting conditions. Moreover, images need to be properly labeled by a human expert, which means that AVI implementation starts long before hardware installation and software configuration. In addition, gathering relevant data should be a continuous and standardized process, as every new product or product version implies new possible defect types.
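A common early step in the data work described above is a sanity check on the labeled dataset before training begins. The sketch below is illustrative: the defect class names and the minimum-examples threshold are assumptions for the example (real projects need orders of magnitude more images per class).

```python
# Illustrative sketch: a pre-training sanity check on a labeled dataset,
# verifying every known defect type has enough examples. Class names and
# the threshold are assumed for the example.
from collections import Counter

KNOWN_DEFECTS = {"scratch", "dent", "underseal", "no_defect"}
MIN_EXAMPLES_PER_CLASS = 3  # toy threshold; real projects need far more

def coverage_gaps(labels):
    """Return defect classes that are missing or under-represented."""
    counts = Counter(labels)
    return sorted(c for c in KNOWN_DEFECTS if counts[c] < MIN_EXAMPLES_PER_CLASS)

labels = ["scratch"] * 5 + ["dent"] * 4 + ["no_defect"] * 10 + ["underseal"]
print(coverage_gaps(labels))  # ['underseal'] -> collect more data first
```

Running a check like this each time a new product version is introduced is one way to keep data gathering the continuous, standardized process the paragraph above calls for.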
Next, it’s critical to decide if machine vision alone is sufficient or a deep learning-based model is required. As we’ve discussed earlier, machine vision-driven inspection systems are perfect for controlled productions with minimum variability, such as PCB assembly. In fact, AOI has become so synonymous with the PCB industry, that some vendors are offering prebuilt solutions for manufacturers.
On the other hand, especially if a manufacturer produces custom products, computer vision-based AVI would most likely be a better choice. However, no such decision is straightforward. To determine which type of inspection system would be optimal, lighting conditions, defect classification complexity, object size, and other factors need to be thoroughly considered. Each manufacturing plant has unique requirements, which should be assessed by the same specialists who will carry out the AVI implementation.
Then, it’s important to assess both monetary and indirect ROI. Quite often, deep-learning-based solutions may appear too expensive at first sight. However, they can also enable continuous improvement and provide data analytics capabilities, which can greatly contribute to the overall digital transformation success.
Apart from that, as with most other AI initiatives, you need to start small. In a nutshell, opt for an inspection issue that can’t be solved with a rule-based machine vision system and is not resource-intensive. Typically, in most manufacturing plants, this is end-of-line inspection or in-line assembly verification of a particular component.
The implementation of automated visual inspection systems in manufacturing is a no-brainer in the majority of cases. Decision-making mostly comes down to the technology selection and the choice of the assembly-line stage.
Humans will always be inherently worse at such routine, tedious, error-prone, and uninspiring tasks as quality inspection. Machines, on the other hand, are more than capable of 24/7 inspection while achieving speed and accuracy impossible for humans. In many ways, AI-powered quality control facilitates healthy competition between manufacturing organizations and drives the economy forward by creating more products without compromising their quality.