Making sure there are no defects or out-of-range values is vital, but when quality control involves checking hundreds, if not thousands, of different parts, it can seem impossible to keep up with demand. Manual inspection at this scale is not only impractical, but also unreliable, imprecise, costly, and slow.

From the automotive industry to manufacturing and more, quality is an important factor for success. According to the American Society for Quality, organizations can lose up to 20% of annual sales revenue due to poor quality control.

This is where automated visual inspection comes in. Computer vision plays a big part in these solutions, making it possible to process the large volumes of data that automated capture produces.

In this article, we’ll explore the importance of automating visual inspection, its challenges, use cases, and more, including:

  • Important use cases of computer vision for automating visual inspection
  • Business applications seeing value from automated inspections
  • What are the implementation challenges?
  • Recent customer projects at Levatas
  • Tips and recommendations from Levatas
  • Final thoughts

Important use cases of computer vision for automating visual inspection

A decision-based model uses deep learning and machine learning to analyze real-time data and automatically flag issues. It processes images, text, video, and more, reading and analyzing the data and comparing it against the data sets the model was trained on.

Computer vision provides image-based automatic inspection using lighting, processors, software, output devices and, of course, cameras or other similar capture devices. With those in place, it’s possible to:

1. Batch post-process visual data

A drone, robot, or fixed camera repeatedly performs a mission: it goes out on a route, captures thousands of images, and stores them centrally. The automated visual inspection solution then processes the data after the fact, looking for anomalies such as broken equipment or thermal hot spots. Essentially, post-processing happens once the mission is complete.
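
As a rough illustration of what batch post-processing can look like in code, the sketch below scores every stored mission image with a trained anomaly classifier and collects anything above an alert threshold. The model file, image folder, input size, and threshold are placeholder assumptions, not a specific Levatas implementation.

    # Batch post-processing sketch: score stored mission images with a trained
    # anomaly classifier and collect anything above an alert threshold.
    # The model file, folder, input size, and threshold are illustrative.
    from pathlib import Path

    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("anomaly_classifier.keras")  # hypothetical trained model
    ALERT_THRESHOLD = 0.8                                           # tune per use case

    flagged = []
    for image_path in sorted(Path("mission_images").glob("*.jpg")):
        image = tf.keras.utils.load_img(image_path, target_size=(224, 224))
        batch = np.expand_dims(tf.keras.utils.img_to_array(image), axis=0)
        # Any pixel rescaling is assumed to be baked into the saved model.
        score = float(model.predict(batch, verbose=0)[0][0])        # probability of "anomaly"
        if score >= ALERT_THRESHOLD:
            flagged.append((image_path.name, score))

    for name, score in sorted(flagged, key=lambda item: -item[1]):
        print(f"Review {name}: anomaly score {score:.2f}")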

2. Route plan and optimize

With route planning and optimization, the goal is to create an automated route for the robot or drone to make sure it’s capturing the right footage. Computer vision analyzes the footage, either in real time or after the fact, and feeds adjustments back into the route to optimize it.

3. Real-time mission optimization

Instead of a static route or a static set of cameras capturing data, running models locally and in real time on the capture device lets you adjust and optimize the mission at any time. From having drones zoom in to get a closer image of a part that needs to be inspected to having a Spot robot change its route due to obstacles, computer vision powers real-time decisions.

But how do you automate your visual inspection processes? Broadly, it comes down to five steps; a short code sketch follows them.

Step 1: Stating the problem. What do you need the system to detect?

Step 2: Gathering and preparing your data. You’ll need high-quality data to train a model.

Step 3: Developing the deep learning model. You can use a development service, a pre-trained model, or even build it from scratch.

Step 4: Training and evaluating the model. How accurate is your model?

Step 5: Deploying and improving. Once in production, accuracy keeps improving as new data and feedback come in, making the model smarter.
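
To make steps 2 through 5 a little more concrete, here is a minimal sketch that fine-tunes a pre-trained backbone on a folder of labelled images and saves the result for deployment. The folder layout, image size, and epoch count are assumptions for illustration only.

    # Minimal sketch of steps 2-5: load labelled images, fine-tune a pre-trained
    # model, evaluate it, and save it for deployment. Folder layout is assumed:
    # dataset/ok/*.jpg and dataset/defect/*.jpg
    import tensorflow as tf

    # Step 2: gather and prepare the data (20% held out for validation).
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset", validation_split=0.2, subset="training", seed=42,
        image_size=(224, 224), batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset", validation_split=0.2, subset="validation", seed=42,
        image_size=(224, 224), batch_size=32)

    # Step 3: start from a pre-trained backbone instead of building from scratch.
    base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),     # defect probability
    ])

    # Step 4: train and evaluate.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    model.fit(train_ds, validation_data=val_ds, epochs=5)
    print(model.evaluate(val_ds, return_dict=True))

    # Step 5: save for deployment; keep improving with new labelled data.
    model.save("anomaly_classifier.keras")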

Business applications seeing value from automated inspections

Reading analog gauges, example use cases:

  • Detection in inventory management
  • Monitoring and recording of gauge readings throughout an industrial environment (eliminating the need for manual review)
  • Can be combined with an anomaly model to detect anomalous readings

Being able to detect gauges and read them on the fly lets organizations digitize a part of their infrastructure that has historically required manual checks. Some organizations have dozens, sometimes thousands, of analog gauges that can’t always be replaced with digital ones, because running power to them alone can cost hundreds of dollars per linear foot.
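
One common approach to reading a gauge (a generic technique, not necessarily how Levatas does it) is to find the needle as the strongest line in a cropped gauge image and map its angle onto the gauge’s scale. The calibration angles and value range below are placeholders that would need to be set per gauge.

    # Sketch: estimate an analog gauge reading from a cropped gauge image by
    # finding the needle with a Hough line transform and mapping its angle to a
    # value. Calibration angles/values are placeholders for a specific gauge.
    import math

    import cv2
    import numpy as np

    MIN_ANGLE, MAX_ANGLE = 45.0, 315.0   # needle angle at the min/max marks (degrees)
    MIN_VALUE, MAX_VALUE = 0.0, 100.0    # gauge scale (e.g., PSI)

    image = cv2.imread("gauge_crop.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(image, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=image.shape[0] // 3, maxLineGap=5)
    if lines is None:
        raise RuntimeError("No needle candidate found")

    # Take the longest detected line as the needle. Note: a line's direction is
    # ambiguous by 180 degrees; a real solution would use the gauge centre to
    # resolve which end is the tip.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: math.hypot(l[2] - l[0], l[3] - l[1]))
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360

    fraction = (angle - MIN_ANGLE) / (MAX_ANGLE - MIN_ANGLE)
    reading = MIN_VALUE + fraction * (MAX_VALUE - MIN_VALUE)
    print(f"Estimated reading: {reading:.1f}")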

After-hours facility patrol, example use cases:

  • Robots, drones, cameras, or a combination of these looking for out-of-place people, animals, equipment, and more

Change detection

  • Detection of corrosion or rust
  • Detection of thermal anomalies
  • Detection of anomalous or faulty states in equipment, where deviations in the equipment’s appearance need to be identified so workers can manually check them (a simple image-differencing sketch follows this list)
  • Detection of missing or misplaced safety equipment
  • Detection of cracks, frays, or other defects on surfaces (walls, bricks, windows, and more)
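
As a rough sketch of the idea, change detection can start as simply as differencing a current photo against a baseline photo taken from the same camera position; anything above a threshold gets flagged for human review. The file names and the 5% changed-pixel threshold below are illustrative.

    # Sketch: flag visual change by differencing a current photo against a
    # baseline photo of the same equipment from the same camera position.
    import cv2
    import numpy as np

    baseline = cv2.imread("baseline.jpg", cv2.IMREAD_GRAYSCALE)
    current = cv2.imread("current.jpg", cv2.IMREAD_GRAYSCALE)
    current = cv2.resize(current, (baseline.shape[1], baseline.shape[0]))

    # Blur slightly to suppress noise, then take the absolute difference.
    diff = cv2.absdiff(cv2.GaussianBlur(baseline, (5, 5), 0),
                       cv2.GaussianBlur(current, (5, 5), 0))
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    changed_fraction = float(np.count_nonzero(mask)) / mask.size
    if changed_fraction > 0.05:
        print(f"Possible change detected: {changed_fraction:.1%} of pixels differ")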

Thermal anomaly detection

  • Identification of equipment that could be overheating or is likely to overheat in the future
  • Monitoring of thermal signatures over time, across multiple locations throughout a facility, to identify potentially problematic patterns
  • The aim is to capture these thermal anomalies and record them back to a central location, or raise an alert for a worker who can manually check the issue (a short hot-spot thresholding sketch follows this list)
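
A minimal sketch of the hot-spot idea, assuming the thermal camera (or its SDK) already provides a frame converted to degrees Celsius; the temperature limit and minimum hot-spot size are illustrative assumptions.

    # Sketch: flag thermal anomalies in a radiometric frame that has already been
    # converted to degrees Celsius (e.g., by the thermal camera's SDK).
    import numpy as np

    TEMP_LIMIT_C = 80.0      # illustrative alert temperature
    MIN_HOT_PIXELS = 25      # ignore tiny specks of noise

    temps_c = np.load("thermal_frame_celsius.npy")   # 2-D array of temperatures
    hot_mask = temps_c > TEMP_LIMIT_C

    if np.count_nonzero(hot_mask) >= MIN_HOT_PIXELS:
        y, x = np.unravel_index(np.argmax(temps_c), temps_c.shape)
        print(f"Thermal anomaly: {temps_c[y, x]:.1f} C at pixel ({x}, {y})")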

Motion detection

  • Identify objects, people, animals, etc., in the field of view of a robot or a drone
  • Determine if a large object is moving in a protected area and alert appropriate security personnel
  • Identify a moving forklift, golf cart, or other vehicle that could threaten robots on an automated mission and take appropriate action
  • Identify equipment moving in an abnormal pattern (a background-subtraction sketch follows this list)
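
For a fixed camera or a recorded patrol feed, a simple motion-detection baseline is background subtraction; the sketch below flags any sufficiently large moving object. The video source and area threshold are assumptions.

    # Sketch: basic motion detection on a fixed camera or recorded patrol feed
    # using OpenCV background subtraction.
    import cv2

    capture = cv2.VideoCapture("patrol_feed.mp4")     # or a camera index / RTSP URL
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32,
                                                    detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    MIN_AREA = 2000                                   # ignore small movements (pixels)

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > MIN_AREA for c in contours):
            print("Large moving object detected - alert security / pause the mission")
    capture.release()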

What are the implementation challenges?

Implementing a valuable inspection solution isn’t just about identifying the right use cases; several pieces need to come together. These are:

1. Cloud platform

This is typically where data is stored, inferences run, and analytics happen. The challenge is knowing how to integrate automated inspections with the cloud and other systems so they all work together in harmony.

2. Data capture

Whether it’s satellite photography, fixed-wing drones, fixed cameras, robots, or phones, it’s essential to have the right mix of hardware to capture the imagery or sensor data used for the inspections.

3. Enterprise integration

It’s important to consider how to deliver the data, in the right format, into the hands of the business. This can take the form of alerts, emails, BI or analytics dashboards, or integration with a ticket management system.
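
As one hypothetical example of enterprise integration, an inspection finding can be packaged as a small JSON payload and posted to a webhook that feeds a ticketing system. The endpoint URL and payload fields below are made up for illustration.

    # Sketch: deliver an inspection alert to downstream business systems, here as
    # a JSON payload posted to a (hypothetical) webhook for a ticketing system.
    import json
    from datetime import datetime, timezone
    from urllib import request

    alert = {
        "asset_id": "transformer-0042",                    # hypothetical asset name
        "finding": "thermal anomaly",
        "score": 0.93,
        "image_url": "https://example.com/inspections/12345.jpg",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

    req = request.Request(
        "https://example.com/hooks/inspection-alerts",     # hypothetical endpoint
        data=json.dumps(alert).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as response:
        print("Alert delivered, status", response.status)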

4. AI/CV model

AI and computer vision models automate the inspection of the captured footage and raise alerts. Tools like TensorFlow, or cloud services from Azure or AWS, can be used to build this automation.

5. AI lifecycle management

Managing the models over time, improving their performance, rolling out new versions - these are all vital issues that need to be considered when automating inspections.

6. Human collaboration

The automated system needs to interact with organizations’ employees and experts. Taking their feedback and looping it back into the system helps it learn and improve. But to do this, all of these pieces need to work together.
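
One lightweight way to sketch that feedback loop is to record every human review, especially the ones where the expert and the model disagree, in a queue that feeds the next training run. The file name and fields below are illustrative assumptions.

    # Sketch: capture human (SME) feedback so it can be folded into the next
    # training run. Disagreements between the model and the reviewer are the
    # most valuable records.
    import json
    from datetime import datetime, timezone

    def record_review(image_path, model_label, model_score, sme_label, reviewer):
        """Append one review to a JSON-lines queue used for the next training set."""
        record = {
            "image": image_path,
            "model_label": model_label,
            "model_score": model_score,
            "sme_label": sme_label,
            "disagreement": model_label != sme_label,  # prioritise these for retraining
            "reviewer": reviewer,
            "reviewed_at": datetime.now(timezone.utc).isoformat(),
        }
        with open("review_queue.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    record_review("mission_images/img_0042.jpg", "defect", 0.91, "ok", "jsmith")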

Recent customer projects at Levatas

Electrical component fault detection

One of the world’s leading energy companies was looking to automate the inspection of their infrastructure equipment, which consisted of thousands of towers. Each tower had dozens of components - lightning arresters, transformers, splices, and more - needed for successful, seamless power delivery.

They wanted to run a drone program to automate the capture of photos and videos of all this equipment and use AI models to inspect it after the fact. They could then identify and classify each type of component present on the grid and look for any potentially faulty equipment that might need review.

With potentially millions of components, manual inspection isn’t feasible. Levatas helped them create an end-to-end solution that runs automated inspection models on the drones themselves, modifying the drones’ behavior on the fly to capture a picture of the right quality for each component, which could then be identified and checked for damage.

The solution was integrated into their cloud environment, which was AWS, and a tool was created so they could report any potential damage to their SMEs, collect their feedback, and loop it back into the model. This allowed the model to eliminate false positives and get smarter over time.

Hazardous vehicle detection

When a Spot robot walks through extremely busy manufacturing facilities, forklifts, golf carts, or other moving vehicles become potential hazards. Spot can identify obstacles to prevent collision, but moving objects are trickier - these robots would need a different type of intelligence to step out of the way.

Levatas is working with multiple manufacturers to develop automated tools that will let Spot identify these moving objects and modify its behavior, like stopping at a crosswalk and letting moving objects cross first to avoid collisions.

Gauge detection

Companies have thousands of analog gauges and many employees walking around to check them manually, because efficient digital replacements aren’t always available.

Levatas is working with multiple industrial customers to use a combination of Spot robots, fixed cameras, and drones to capture gauge readings, store them in a central database, and identify overall patterns and trends for the gauges.

Historical readings are recorded and normal ranges defined, so that any abnormal reading notifies experts to take a look at that particular gauge and then take action. The solution performs all of its computation on the fly, at the edge.

Adaptive missions

Levatas is working with a customer that needed Spot to perform multiple types of inspections along a single autonomous route, and then modify that route based on real-time feedback from the models running on Spot.

In this case, Levatas was running an analog gauge model and a ‘man down’ detection model - in which Spot looks for instances where a person may have been injured on a job site - alongside Spot’s normal gauge-reading route.

Spot is able to branch its logic and make decisions on the fly at each checkpoint, because it’s running models on board in real time. If Spot sees a gauge that reads outside the normal range, it will change its behavior - and similarly with the ‘man down’ detection model, where Spot notifies the appropriate party to get help.

Tips and recommendations from Levatas

So, what should you keep in mind when starting your journey in automating visual inspection?

1. Choose your use case carefully and strategically

  • Consider data availability
  • Consider management/organizational buy-in
  • Choose a problem where a partial solution is valuable

2. Set realistic expectations up front

  • Avoid both “hype” and “fear” extremes
  • Don’t expect perfection
  • Don’t expect full automation

3. Define your success criteria carefully

  • Think carefully about how to measure success
  • Don’t default to tech performance metrics (precision, recall, F1 score, etc)
  • Set measurable success milestones

4. Don’t wait until your solution is perfect to release it

  • If you’ve defined incremental success well, ship at the first success milestone
  • This gets value earlier, builds trust, and gets critical feedback

5. Use human feedback to improve performance

  • All human reviews are another data point to improve future performance
  • This is true of all data, but especially for instances where the SME and the AI disagree
  • Build a feedback loop and automate as much as possible

Final thoughts

Automated visual inspection builds on traditional computer vision methods and uses deep learning models that continuously improve in accuracy, helping businesses run as efficiently and smoothly as possible. Using AI to automate visual inspection can be extremely valuable for improving both quality control and production, while remaining cost-effective.

Article based on 'Navigating the complex landscape of automating visual inspection'

A talk by Daniel Bruce, Chief Product Officer, Levatas, at the Computer Vision Festival 2021.

[watch the full talk here]

Fancy watching more content from AI experts? Why not sign up to our membership today?