The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency. (August 2020)
- Record Type:
- Journal Article
- Title:
- The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency. (August 2020)
- Main Title:
- The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency
- Authors:
- Kraus, Johannes
Scholz, David
Stiegemeier, Dina
Baumann, Martin
- Abstract:
- Objective: This paper presents a theoretical model and two simulator studies on the psychological processes during early trust calibration in automated vehicles. Background: The positive outcomes of automation can only reach their full potential if a calibrated level of trust is achieved. In this process, information on system capabilities and limitations plays a crucial role. Method: In two simulator experiments, trust was repeatedly measured during an automated drive. In Study 1, all participants in a two-group experiment experienced a system-initiated take-over, and the occurrence of a system malfunction was manipulated. In Study 2, a 2 × 2 between-subject design, system transparency was manipulated as an additional factor. Results: Trust was found to increase progressively during the first interactions. In Study 1, take-overs led to a temporary decrease in trust, as did malfunctions in both studies. Interestingly, trust was reestablished in the course of interaction for both take-overs and malfunctions. In Study 2, the high transparency condition did not show a temporary decline in trust after a malfunction. Conclusion: Trust is calibrated based on information provided prior to and during the initial drive with an automated vehicle. The experience of take-overs and malfunctions leads to a temporary decline in trust that is recovered in the course of error-free interaction. The temporary decrease can be prevented by providing transparent information prior to system interaction. Application: Transparency, also about potential limitations of the system, plays an important role in this process and should be considered in the design of tutorials and human-machine interaction (HMI) concepts of automated vehicles.
- Is Part Of:
- Human factors. Volume 62, Number 5 (2020)
- Journal:
- Human factors
- Issue:
- Volume 62, Number 5 (2020)
- Issue Display:
- Volume 62, Issue 5 (2020)
- Year:
- 2020
- Volume:
- 62
- Issue:
- 5
- Issue Sort Value:
- 2020-0062-0005-0000
- Page Start:
- 718
- Page End:
- 736
- Publication Date:
- 2020-08
- Subjects:
- trust in automation -- compliance and reliance -- human-automation interaction -- function allocation -- trust formation
Human engineering -- Periodicals
620.82
- Journal URLs:
- http://hfs.sagepub.com/
http://www.sagepublications.com/
- DOI:
- 10.1177/0018720819853686
- Languages:
- English
- ISSNs:
- 0018-7208
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 13485.xml