Cloud based scalable object recognition from video streams using orientation fusion and convolutional neural networks. (January 2022)
- Record Type:
- Journal Article
- Title:
- Cloud based scalable object recognition from video streams using orientation fusion and convolutional neural networks. (January 2022)
- Main Title:
- Cloud based scalable object recognition from video streams using orientation fusion and convolutional neural networks
- Authors:
- Usman Yaseen, Muhammad
Anjum, Ashiq
Fortino, Giancarlo
Liotta, Antonio
Hussain, Amir
- Highlights:
- This paper pioneers the use of empirical mode decomposition with CNNs to improve visual object recognition accuracy on challenging video datasets. We study the orientation, phase and amplitude components and assess their performance in terms of visual recognition accuracy. We show that the orientation component is a good candidate for achieving high object recognition accuracy on illumination- and expression-variant video datasets. We propose a feature-fusion strategy over the orientation components to further improve accuracy rates, and show that this orientation-fusion approach significantly improves visual recognition accuracy under challenging conditions.
- Abstract:
- Object recognition from live video streams comes with numerous challenges, such as variation in illumination conditions and poses. Convolutional neural networks (CNNs) have been widely used to perform intelligent visual object recognition, yet CNNs still suffer from severe accuracy degradation, particularly on illumination-variant datasets. To address this problem, we propose a new CNN method based on orientation fusion for visual object recognition. The proposed cloud-based video analytics system pioneers the use of bi-dimensional empirical mode decomposition to split a video frame into intrinsic mode functions (IMFs). We then subject these IMFs to the Riesz transform to produce monogenic object components, which are in turn used for the training of CNNs. Past works have demonstrated how the object orientation component may be used to reach accuracy levels as high as 93%. Herein we demonstrate how a feature-fusion strategy over the orientation components improves visual recognition accuracy further, to 97%. We also assess the scalability of our method with respect to both the number and the size of the video streams under scrutiny. We carry out extensive experimentation on the publicly available Yale dataset as well as a self-generated video dataset, finding significant improvements in both accuracy and scale in comparison to AlexNet, LeNet and SE-ResNeXt, three of the most commonly used deep learning models for visual object recognition and classification.
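The abstract describes a pipeline in which each video frame is decomposed into IMFs and the Riesz transform then yields monogenic components (orientation, phase, amplitude). As a rough illustration of the Riesz-transform stage only, here is a minimal NumPy sketch; the function name is hypothetical, the bi-dimensional EMD step is omitted (a full implementation would apply this to each IMF rather than the raw frame), and this is not the authors' code:

```python
import numpy as np

def riesz_orientation(frame):
    """Sketch: monogenic orientation and amplitude of a 2-D frame
    via the frequency-domain Riesz transform.

    In the paper's pipeline the frame would first be split into
    intrinsic mode functions (IMFs) by bi-dimensional empirical
    mode decomposition; that step is omitted here for brevity.
    """
    rows, cols = frame.shape
    # Frequency grids for the two Riesz filter components
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    mag = np.sqrt(u**2 + v**2)
    mag[0, 0] = 1.0  # avoid division by zero at the DC term

    F = np.fft.fft2(frame)
    # Riesz transform pair: multiply by -i*u/|w| and -i*v/|w|
    r1 = np.real(np.fft.ifft2(F * (-1j * u / mag)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / mag)))

    orientation = np.arctan2(r2, r1)                 # per-pixel orientation
    amplitude = np.sqrt(frame**2 + r1**2 + r2**2)    # monogenic amplitude
    return orientation, amplitude
```

The orientation maps produced this way (one per IMF) are what the paper fuses and feeds to the CNN in place of raw pixels.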
- Is Part Of:
- Pattern recognition. Volume 121(2022)
- Journal:
- Pattern recognition
- Issue:
- Volume 121(2022)
- Issue Display:
- Volume 121, Issue 2022 (2022)
- Year:
- 2022
- Volume:
- 121
- Issue:
- 2022
- Issue Sort Value:
- 2022-0121-2022-0000
- Page Start:
- Page End:
- Publication Date:
- 2022-01
- Subjects:
- Scalable video analytics -- Feature fusion -- Object orientation -- Object recognition -- Convolutional neural networks -- Cloud-based video analytics
Pattern perception -- Periodicals
Perception des structures -- Périodiques
Patroonherkenning
006.4
- Journal URLs:
- http://www.sciencedirect.com/science/journal/00313203
http://www.sciencedirect.com/
- DOI:
- 10.1016/j.patcog.2021.108207
- Languages:
- English
- ISSNs:
- 0031-3203
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 23804.xml