ISSN: 2320-2459


Hybrid Neural Network for Integrated Feature Extraction in Infrasound Event Classification

Hongru Li, Xihai Li*

Department of Nuclear Engineering, Rocket Force University of Engineering, Xi'an, China

*Corresponding Author:
Xihai Li
Department of Nuclear Engineering, Rocket Force University of Engineering, Xi'an, China
Email: xihai_li@163.com

Received: 16-Jul-2024, Manuscript No. JPAP-24-141888; Editor assigned: 18-Jul-2024, PreQC No. 24-141888 (PQ); Reviewed: 01-Aug-2024, QC No. JPAP-24-141888; Revised: 07-Aug-2024, Manuscript No. JPAP-24-141888 (R); Published: 14-Aug-2024, DOI: 10.4172/2320-2459.12.03.002.

Citation: Li H, et al. Hybrid Neural Network for Integrated Feature Extraction in Infrasound Event Classification. Res Rev J Pure Appl Phys. 2024;12:002.

Copyright: © 2024 Li H, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.


Description

Infrasound signals are sound waves with frequencies below the lower threshold of human hearing, approximately 20 Hz. Many major natural disasters, such as tsunamis, hurricanes, earthquakes, landslides and debris flows, emit infrasound signals, which can propagate over long distances in the atmosphere. The classification and recognition of infrasound events are therefore of foremost importance for understanding geophysical activity, predicting natural disasters, monitoring the environment and safeguarding national security.

To date, a considerable amount of research on infrasound classification has been conducted globally. Existing methods mainly fall into two categories:

• The first extracts features from the signals to quantify the differences between events and then classifies and recognizes the signals based on those features.

• The second uses deep learning to extract features from infrasound signals automatically for classification and recognition.

Regarding the first method, the feature extraction process is complex, and pinpointing the appropriate features for classification requires not only pertinent prior knowledge but also robust feature selection algorithms, which adds to the difficulty of feature selection. As science and technology continue to advance, the second method shows remarkable application potential. However, almost all existing deep learning approaches use a single type of convolutional neural network for feature extraction, which results in incomplete features; in particular, such networks are insensitive to the temporal features of infrasound signals, so physical information carried by the signal itself is missed and the final classification results suffer. To address these problems, this paper introduced an infrasound event classification fusion model based on a multiscale Squeeze-and-Excitation Convolutional Neural Network combined with a Bidirectional Long Short-Term Memory network (SE–CNN–Bi-LSTM). The proposed model used multiscale Convolutional Neural Networks (CNNs) to automatically extract the spatial features of the signal and, through the Squeeze-and-Excitation (SE) mechanism, adaptively assigned weights that emphasize important features and suppress unimportant ones.
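As a concrete illustration of the SE mechanism, the following is a minimal sketch of a Squeeze-and-Excitation block for one-dimensional feature maps. PyTorch is assumed as the framework, and the channel count and reduction ratio are placeholders rather than values taken from the paper.

    # Minimal sketch of a Squeeze-and-Excitation (SE) block for 1-D feature maps.
    # PyTorch is assumed; the reduction ratio is an illustrative placeholder.
    import torch
    import torch.nn as nn

    class SEBlock1d(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool1d(1)      # "squeeze": one value per channel
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                        # per-channel weights in (0, 1)
            )

        def forward(self, x):
            # x: (batch, channels, time)
            w = self.pool(x).squeeze(-1)             # (batch, channels)
            w = self.fc(w).unsqueeze(-1)             # (batch, channels, 1)
            return x * w                             # emphasize or suppress feature maps

The learned weights rescale each channel of the CNN feature maps, which is how important features are emphasized and unimportant ones suppressed.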

The Bi-LSTM network was then used to automatically extract the time-dependent features of the signal. The multiscale CNN focuses on the spatial features of the signal and addresses the difficulty of choosing a single convolution kernel of appropriate size: it filters the information at the same network level with multiple convolution kernels of different scales and then combines the filtered outputs. The network can therefore adaptively select suitable receptive fields, dynamically choosing the field best suited to a given task. This adaptability improves the network's ability to handle different scales and enriches the extracted feature set, yielding highly comprehensive spatial information. By contrast, the Bi-LSTM focuses on the temporal features of the signal, combining past and future information. Either network alone would therefore extract incomplete and insufficient feature information from infrasound signals, resulting in low classification accuracy.
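The multiscale filtering and the Bi-LSTM branch described above can be sketched as follows, again assuming PyTorch; the kernel sizes (3, 5, 7), channel counts and hidden size are example values, not the configuration reported in the paper.

    # Illustrative multiscale convolution block and Bi-LSTM branch (PyTorch assumed).
    # Kernel sizes, channel counts and hidden size are example values only.
    import torch
    import torch.nn as nn

    class MultiScaleConv1d(nn.Module):
        """Filter the same input with several kernel sizes and concatenate the results."""
        def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
            )

        def forward(self, x):
            # x: (batch, in_ch, time) -> (batch, out_ch * number of scales, time)
            return torch.cat([branch(x) for branch in self.branches], dim=1)

    # Bi-LSTM branch for time-dependent features; setting bidirectional=True lets
    # the layer combine past and future context along the signal.
    bilstm = nn.LSTM(input_size=1, hidden_size=64, batch_first=True, bidirectional=True)

Because each branch pads its convolution to preserve the signal length, the outputs of the different kernel scales stay aligned and can be concatenated directly.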

The key point of this model is that it considers both types of features jointly and fuses them effectively through a fully connected layer, allowing the model to exploit the advantages of each single network in processing infrasound time-series data, solving both problems while further improving classification accuracy. Compared with traditional infrasound classification methods, the multiscale SE–CNN–Bi-LSTM fusion model extracted features automatically, saving substantial manpower and material resources and effectively improving classification efficiency. It provides a valuable idea and a novel method for the classification of infrasound events.
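The fusion step itself can be illustrated with a short sketch in which the CNN spatial features and the Bi-LSTM temporal features are concatenated and passed through a fully connected layer; the feature dimensions and the number of event classes below are hypothetical placeholders.

    # Sketch of the feature fusion: concatenate CNN spatial features and Bi-LSTM
    # temporal features, then classify with a fully connected layer.
    # PyTorch assumed; feature dimensions and class count are hypothetical.
    import torch
    import torch.nn as nn

    class FusionClassifier(nn.Module):
        def __init__(self, cnn_dim=192, lstm_dim=128, num_classes=4):
            super().__init__()
            self.fc = nn.Linear(cnn_dim + lstm_dim, num_classes)

        def forward(self, cnn_feat, lstm_feat):
            fused = torch.cat([cnn_feat, lstm_feat], dim=1)  # fuse spatial and temporal features
            return self.fc(fused)                            # class scores for each infrasound event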