Data-driven fault diagnosis plays a key role in reducing maintenance costs and downtime for industrial machines.
Deep learning has shown promising performance in identifying different fault types. Yet a large amount of data is required to achieve satisfactory performance. In the real world,
fault data are often scarce, which creates an incentive for different
corporations to collaborate in training a fault detection model.
However, sharing data between different factories may not
be feasible due to data privacy concerns. Moreover, the distribution of data collected from different entities can be non-i.i.d. As a result, a model trained on one machine can fail
to generalise to other machines because of the distribution
shift problem. In this work, we propose DiagNet, a federated transfer learning framework for machine fault diagnosis
tasks. Specifically, to address data privacy concerns, we
employ federated learning: multiple clients jointly train a
global model without sharing their raw
data. However, the global model does not perform best
for every client because of variations in data distribution. To
tackle this problem, we further employ transfer learning,
adapting the global model separately on each client
with its own private machine data. Experimental results under low-data regimes show that our DiagNet framework can
improve fault-diagnosis accuracy by up to 28%.
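The two-stage scheme described above can be sketched as follows. This is a minimal illustration, not DiagNet's actual implementation: the linear model, squared loss, synthetic non-i.i.d. client data, and all hyperparameters are assumptions made for clarity. Stage one performs federated averaging (clients exchange only model weights with the server); stage two fine-tunes the resulting global model on one client's private data.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, steps=30):
    """Plain gradient descent on squared loss, run locally on one client."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    """Each client updates a copy of the global model; the server averages
    the returned weights. Only weights, never raw data, leave a client."""
    local_weights = [local_train(w_global.copy(), X, y) for X, y in clients]
    return np.mean(local_weights, axis=0)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
d = 3
# Illustrative non-i.i.d. clients: each machine follows a slightly
# shifted underlying mapping, mimicking distribution shift.
clients = []
for shift in (0.0, 0.5, 1.0):
    X = rng.normal(size=(50, d))
    y = X @ (np.ones(d) + shift) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

# Stage 1: federated training of a shared global model.
w_global = np.zeros(d)
for _ in range(10):
    w_global = federated_round(w_global, clients)

# Stage 2: transfer step -- fine-tune the global model on one
# client's private data to compensate for its local distribution.
X0, y0 = clients[0]
w_adapted = local_train(w_global.copy(), X0, y0, lr=0.05, steps=20)
# The adapted model fits client 0's data better than the averaged global one.
```

Because the clients' distributions differ, the averaged global model is a compromise; the local fine-tuning step recovers per-client accuracy, which is the intuition behind the reported gains in the low-data regime.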
This research is supported by the A*STAR Advanced Manufacturing and Engineering (AME) Programmatic Programme, Grant Reference no. A19E3b0099.