- LSF ID: 60157
- ORCID: 0000-0001-6377-0940
- Other: author affiliated with the university
- GND: 123292786
- LSF ID: 47899
- ORCID: 0000-0001-7535-870X
- Other: author affiliated with the university
Abstract (English):
Trust has been recognized as a central variable in explaining both the resistance to using automated systems (under-trust) and the overreliance on automated systems (over-trust). To achieve appropriate reliance, users' trust should be calibrated to reflect a system's capabilities. Studies from various disciplines have examined different interventions to attain such trust calibration. From a body of more than 1,000 papers, we identified 96 relevant publications that aimed to calibrate users' trust in automated systems. To provide an in-depth overview of the state of the art, we reviewed and summarized the measurements of trust calibration, the interventions, and the results of these efforts. For the numerous promising calibration interventions, we extracted common design choices and structured them into four dimensions of trust calibration interventions to guide future studies. Our findings indicate that the way trust calibration is measured often limits the interpretation of the effects of different interventions, and we suggest future directions to address this problem.