Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project. Issue 4 (17th January 2022)
- Record Type:
- Journal Article
- Title:
- Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project. Issue 4 (17th January 2022)
- Main Title:
- Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project
- Authors:
- Ferguson, William
Batra, Dhruv
Mooney, Raymond
Parikh, Devi
Torralba, Antonio
Bau, David
Diller, David
Fasching, Josh
Fiotto‐Kaufman, Jaden
Goyal, Yash
Miller, Jeff
Moffitt, Kerry
Montes de Oca, Alex
Selvaraju, Ramprasaath R.
Shrivastava, Ayush
Wu, Jialin
Lee, Stefan
- Other Names:
- Gunning, Dave (guest editor)
Vorm, Eric (guest editor)
Wang, Jennifer Yunyan (guest editor)
Turek, Matt (guest editor)
- Abstract:
- This letter is a retrospective analysis of our team's research for the Defense Advanced Research Projects Agency Explainable Artificial Intelligence project. Our initial approach was to use salience maps, English sentences, and lists of feature names to explain the behavior of deep‐learning‐based discriminative systems, with particular focus on visual question answering systems. We found that presenting static explanations along with answers led to limited positive effects. By exploring various combinations of machine and human explanation production and consumption, we evolved a notion of explanation as an interactive process that takes place usually between humans and artificial intelligence systems but sometimes within the software system. We realized that by interacting via explanations people could task and adapt machine learning (ML) agents. We added affordances for editing explanations and modified the ML system to act in accordance with the edits to produce an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enable higher levels of autonomy.
Under the Explainable Artificial Intelligence Project, we demonstrated limited, positive effects on users from statically presenting explanations along with the system's answers—for example, when teaching people to identify bird species. We then illustrated how interacting via explanations could enable people to task and adapt machine learning (ML) systems. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enable higher levels of autonomy.
- Is Part Of:
- Applied AI Letters, Volume 2, Issue 4 (2021)
- Journal:
- Applied AI Letters
- Issue:
- Volume 2, Issue 4 (2021)
- Issue Display:
- Volume 2, Issue 4 (2021)
- Year:
- 2021
- Volume:
- 2
- Issue:
- 4
- Issue Sort Value:
- 2021-0002-0004-0000
- Page Start:
- n/a
- Page End:
- n/a
- Publication Date:
- 2022-01-17
- Subjects:
- explainable artificial intelligence (XAI) -- human/computer interaction (HCI) -- tasking and adapting agents -- visual question answering (VQA)
006.3 (Dewey Decimal Classification)
- Journal URLs:
- http://onlinelibrary.wiley.com/
- DOI:
- 10.1002/ail2.60
- Languages:
- English
- ISSNs:
- 2689-5595
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 20374.xml