Chadha, Gavneet Singh; Shah, Sayed Rafay Bin; Schwung, Andreas; Ding, Steven X.:
Shared Temporal Attention Transformer for Remaining Useful Lifetime Estimation
In: IEEE Access, Vol. 10 (2022), pp. 74244-74258
2022 · Journal article · OA Gold
Electrical Engineering · Faculty of Engineering » Electrical Engineering and Information Technology » Automation Technology and Complex Systems
Related: 1 publication(s)
Title in English:
Shared Temporal Attention Transformer for Remaining Useful Lifetime Estimation
Authors:
Chadha, Gavneet Singh (corresponding author);
Shah, Sayed Rafay Bin;
Schwung, Andreas;
Ding, Steven X. (UDE; author affiliated with the university)
GND: 134302427
LSF ID: 2347
ORCID: 0000-0002-5149-5918
Year of publication:
2022
Open Access?:
OA Gold
Note:
CA extern
Language of the text:
English
Keywords, topic:
Attention ; Decoding ; Deep learning ; Degradation ; Estimation ; Feature extraction ; Neural networks ; Prognostics and health management ; Remaining useful lifetime estimation ; Time series analysis ; Transformer architecture ; Transformers

Abstract in English:

This paper proposes a novel deep learning architecture for estimating the remaining useful lifetime (RUL) of industrial components that relies solely on the recently developed transformer architectures. RUL estimation rests on analysing degradation patterns within multivariate time series signals. Hence, we propose a novel shared temporal attention block that detects RUL patterns as time progresses. Furthermore, we develop a split-feature attention block that enables attending to features from different sensor channels. The proposed shared temporal attention layer in the encoder attends to temporal degradation patterns in the individual sensor signals before creating a shared correlation across the feature range. Based on these novel attention blocks, we develop two transformer architectures specifically designed to operate on multivariate time series data. We apply the architectures to the well-known C-MAPSS benchmark dataset and provide various hyperparameter studies to analyse their impact on performance. In addition, we provide a thorough comparison with recently presented state-of-the-art approaches and show that the proposed transformer architectures outperform the existing methods by a considerable margin.
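
The abstract describes the two attention blocks only at a high level. The sketch below is one hedged interpretation of the idea, not the authors' implementation: a single temporal self-attention module whose weights are shared across all sensor channels (applied along the time axis of each channel), followed by a feature attention step across channels at every time step, and a small regression head for RUL. All module names, dimensions, the scalar-per-channel embedding and the pooling head are illustrative assumptions.

# Hypothetical PyTorch sketch of shared temporal attention followed by
# split-feature attention for RUL regression. Not taken from the paper;
# shapes, modules and pooling are assumptions for illustration only.
import torch
import torch.nn as nn


class SharedTemporalAttention(nn.Module):
    """Apply one temporal self-attention (shared weights) to every sensor channel."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)          # lift each scalar sensor reading
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) multivariate sensor readings
        b, t, c = x.shape
        # treat every channel as its own sequence: (batch * channels, time, 1)
        per_channel = x.permute(0, 2, 1).reshape(b * c, t, 1)
        h = self.embed(per_channel)                 # (b*c, t, d_model)
        h, _ = self.attn(h, h, h)                   # temporal attention, weights shared across channels
        return h.reshape(b, c, t, -1).permute(0, 2, 1, 3)  # (b, t, c, d_model)


class SplitFeatureAttention(nn.Module):
    """Attend across the sensor channels at each time step."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, time, channels, d_model)
        b, t, c, d = h.shape
        per_step = h.reshape(b * t, c, d)           # channels form the attention sequence
        per_step, _ = self.attn(per_step, per_step, per_step)
        return per_step.reshape(b, t, c, d)


class RULRegressor(nn.Module):
    """Toy encoder: shared temporal attention -> split-feature attention -> scalar RUL."""

    def __init__(self, d_model: int = 32):
        super().__init__()
        self.temporal = SharedTemporalAttention(d_model)
        self.feature = SplitFeatureAttention(d_model)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.temporal(x)                        # (b, t, c, d)
        h = self.feature(h)                         # (b, t, c, d)
        h = h.mean(dim=(1, 2))                      # pool over time and channels
        return self.head(h).squeeze(-1)             # one RUL estimate per sample


if __name__ == "__main__":
    # e.g. a mini-batch of 8 windows, 30 time steps, 14 sensor channels (C-MAPSS-like)
    windows = torch.randn(8, 30, 14)
    print(RULRegressor()(windows).shape)            # torch.Size([8])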