Chang, R., Jie, W., Thakur, N., Zhao, Z., Pahwa, R. S., & Yang, X. (2024, May 28). A Unified and Adaptive Continual Learning Method for Feature Segmentation of Buried Packages in 3D XRM Images. 2024 IEEE 74th Electronic Components and Technology Conference (ECTC). https://doi.org/10.1109/ectc51529.2024.00314
Abstract:
Deep learning methods have recently been applied to defect detection and metrology tasks for buried structures. They process 3D X-ray scans and provide non-destructive inspection, increasing the efficiency and speed of the process. However, such deep learning models are highly data-driven and are trained on specifically collected and annotated data. In packaging or wafer fabrication processes, 3D X-ray scans can differ substantially depending on the acquisition parameters used, and they can also vary over time and across inspection purposes. To extend the accuracy of a model to newer data, the usual process is to retrain it comprehensively on the new data together with the old data. This is costly and time-consuming, since it requires another round of annotation and a complete retraining: the initial training process must effectively be repeated every time new data arrives. In this paper, we introduce a new AI model that is capable of learning from new data without requiring access to the old data. This allows a flexible and cost-effective approach to using AI in inspection processes, reducing data management and storage costs. Annotations on new data are required only for new classes, and training time and resource requirements are reduced, which greatly increases the efficiency (time-to-production) and applicability of our models. We focus on semantic segmentation, which helps identify the different components (e.g., voids) and is extendable to new data such as new defects or components (e.g., solder). Our method introduces a new architecture with a dedicated loss function for continual learning. Experimental results show that our model maintains segmentation accuracy across all old and new data and matches the performance of full training with all data, while using 50% less data and 60% less training time.
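For illustration, below is a minimal sketch of one common way such a continual-learning objective for segmentation can be set up: a cross-entropy term on the newly annotated classes combined with a knowledge-distillation term against a frozen copy of the previous model, so old data need not be replayed. The abstract does not detail the paper's dedicated loss, so the function name, the hyperparameters alpha and T, and the PyTorch framing here are all assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def continual_seg_loss(new_logits, old_logits, target,
                           old_num_classes, alpha=1.0, T=2.0):
        """Illustrative continual-learning loss for semantic segmentation.

        new_logits: (B, C_new, H, W) current model output over all classes
        old_logits: (B, C_old, H, W) output of a frozen copy of the old model
        target:     (B, H, W) pixel labels; only new classes need annotation
                    (unlabelled pixels marked with ignore index 255)
        """
        # Supervised term: standard pixel-wise cross-entropy on the
        # newly annotated classes.
        ce = F.cross_entropy(new_logits, target, ignore_index=255)

        # Distillation term: keep the new model's soft predictions on the
        # shared (old) class channels close to the frozen old model, so
        # old knowledge is retained without storing or re-annotating old scans.
        kd = F.kl_div(
            F.log_softmax(new_logits[:, :old_num_classes] / T, dim=1),
            F.softmax(old_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)

        return ce + alpha * kd

This is a generic distillation-based sketch of the technique the abstract names (learning new classes without old data); the paper's actual architecture and loss function may differ.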
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the A*STAR - MTC Programmatic Fund
Grant Reference no. : M23L7b0021
This research / project is supported by the A*STAR - CDF
Grant Reference no. : C210812046