Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. (August 2022)
- Record Type:
- Journal Article
- Title:
- Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. (August 2022)
- Main Title:
- Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts
- Authors:
- Formosa, Paul
Rogers, Wendy
Griep, Yannick
Bankins, Sarah
Richards, Deborah
- Abstract:
- Forms of Artificial Intelligence (AI) are already being deployed into clinical settings, and research into their future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as healthcare, raises issues associated with patients' perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals' perceptions of being treated in a dignified and respectful way in various healthcare decision contexts. Participants were subject to a 2 (human or AI decision maker) x 2 (positive or negative decision outcome) x 2 (diagnostic or resource allocation healthcare scenario) factorial design. We found evidence of a "human bias" (i.e., a preference for human over AI decision makers) and an "outcome bias" (i.e., a preference for positive over negative outcomes). However, we found that for perceptions of respectful and dignified interpersonal treatment, who makes the decision matters more in diagnostic cases, while the outcome matters more in resource allocation cases. We also found that humans were consistently viewed as appropriate decision makers and AI was viewed as dehumanizing, and that participants perceived they were treated better when subject to diagnostic as opposed to resource allocation decisions. Thematic coding of open-ended text responses supported these results. We also outline the theoretical and practical implications of these findings.
- Highlights:
- Investigates perceptions of dignified treatment when subject to medical decisions. Compared AI vs. human decision makers and positive vs. negative outcomes. Evidence of both a "human bias" and an "outcome bias". Humans seen as appropriate decision makers; AI seen as dehumanizing. Differences between diagnostic and healthcare resource allocation decisions.
- Is Part Of:
- Computers in human behavior. Volume 133(2022)
- Journal:
- Computers in human behavior
- Issue:
- Volume 133(2022)
- Issue Display:
- Volume 133, Issue 2022 (2022)
- Year:
- 2022
- Volume:
- 133
- Issue:
- 2022
- Issue Sort Value:
- 2022-0133-2022-0000
- Page Start:
- Page End:
- Publication Date:
- 2022-08
- Subjects:
- Artificial intelligence (AI) -- Dignity -- Respect -- Interactional justice -- Medical AI -- Healthcare
Interactive computer systems -- Periodicals
Man-machine systems -- Periodicals
004.019
- Journal URLs:
- http://www.sciencedirect.com/science/journal/07475632
http://www.elsevier.com/journals
- DOI:
- 10.1016/j.chb.2022.107296
- Languages:
- English
- ISSNs:
- 0747-5632
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - 3394.921600
British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 21406.xml