N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization. (22nd November 2022)
- Record Type:
- Journal Article
- Title:
- N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization. (22nd November 2022)
- Main Title:
- N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization
- Authors:
- Umair, Muhammad
Alam, Iftikhar
Khan, Atif
Khan, Inayat
Ullah, Niamat
Momand, Mohammad Yusuf
- Other Names:
- Khan, Rahim (Academic Editor)
- Abstract:
- The extractive summarization approach builds a summary by selecting the source document's salient sentences. One of the most important aspects of extractive summarization is learning and modelling cross-sentence associations. Inspired by the Transformer-based Bidirectional Encoder Representations (BERT) pretrained language model and the graph attention network (GAT), whose sophisticated structure captures intersentence associations, this work proposes a novel neural model, N-GPETS, which combines a heterogeneous graph attention network with the BERT model and a statistical approach using TF-IDF values for the extractive summarization task. Apart from sentence nodes, N-GPETS also works with semantic word nodes of varying granularity levels that serve as links between sentences, improving intersentence interaction. Furthermore, N-GPETS is made more feature-rich by integrating the graph layer with a BERT encoder at the graph initialization step rather than employing other neural network encoders such as CNN or LSTM. To the best of our knowledge, this work is the first attempt to combine a BERT encoder and TF-IDF values of the entire document with a heterogeneous attention graph structure for the extractive summarization task. Empirical results on the benchmark CNN/DM news data set show that N-GPETS achieves favorable results in comparison with other heterogeneous graph structures employing the BERT model and with graph structures that omit the BERT model.
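The abstract's central idea, a heterogeneous graph in which word nodes act as bridges between sentence nodes and sentence-word edges carry TF-IDF weights, can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name and tokenization are hypothetical, and the BERT encoder and GAT layers described in the abstract are omitted.

```python
import math
from collections import Counter

def build_heterogeneous_graph(sentences):
    """Sketch: sentence-word bipartite graph with TF-IDF edge weights.

    Sentence nodes connect only indirectly, through shared word nodes,
    mirroring the word-as-bridge idea in the abstract (hypothetical helper).
    """
    tokenized = [s.lower().split() for s in sentences]
    n = len(tokenized)
    # Document frequency: in how many sentences each word appears
    df = Counter(w for toks in tokenized for w in set(toks))
    edges = {}  # (sentence_index, word) -> TF-IDF weight
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)
        for w, count in tf.items():
            idf = math.log(n / df[w]) + 1.0  # smoothed IDF variant
            edges[(i, w)] = (count / len(toks)) * idf
    word_nodes = set(df)
    return word_nodes, edges

sentences = [
    "graph attention networks capture sentence relations",
    "bert embeddings initialize the graph nodes",
]
words, edges = build_heterogeneous_graph(sentences)
# The shared word "graph" links both sentence nodes.
assert (0, "graph") in edges and (1, "graph") in edges
```

In the full model, these TF-IDF weights would inform the attention between sentence and word nodes, while node features would come from the BERT encoder rather than raw counts.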
- Is Part Of:
- Computational intelligence and neuroscience. Volume 2022(2022)
- Journal:
- Computational intelligence and neuroscience
- Issue:
- Volume 2022(2022)
- Issue Display:
- Volume 2022, Issue 2022 (2022)
- Year:
- 2022
- Volume:
- 2022
- Issue:
- 2022
- Issue Sort Value:
- 2022-2022-2022-0000
- Page Start:
- Page End:
- Publication Date:
- 2022-11-22
- Subjects:
- Neurosciences -- Data processing -- Periodicals
Computational intelligence -- Periodicals
Computational neuroscience -- Periodicals
612.80285
- Journal URLs:
- https://www.hindawi.com/journals/cin/
- DOI:
- 10.1155/2022/6241373
- Languages:
- English
- ISSNs:
- 1687-5265
- Deposit Type:
- Legaldeposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library HMNTS - ELD Digital store
- Ingest File:
- 24664.xml