UBC Theses and Dissertations
An ensemble automatic modulation classification model with weight pruning and data preprocessing
Yang, Xueting
Abstract
Automatic Modulation Classification (AMC) detects the modulation type and order of a received signal using limited prior knowledge within a short observation interval. In this thesis, we aim to provide a computation-efficient, high-performance AMC model for resource-constrained mobile devices. We use a public RadioML dataset and apply three data pre-processing methods, namely noise reduction, normalization, and label smoothing, to the raw signals before training. In addition to four common signal representations, we propose a new one: the three-dimensional constellation image. For each signal representation, we carefully design a Deep Learning (DL) model. Beyond the traditional Convolutional Neural Network (CNN), two new AMC model structures are proposed: one integrates an attention module into a conventional Long Short-Term Memory (LSTM) network, and the other connects a CNN, an LSTM, and densely connected neural networks with two additional connections. After training the AMC models, we analyze their overall and per-class performance, and we study the computational complexity of the trained models in terms of memory consumption and detection efficiency. Overall, the results indicate that the proposed data pre-processing methods and the new AMC model structures significantly improve classification performance. To reduce the complexity of the proposed AMC models, we introduce weight pruning to remove unnecessary connections in the DL models; after pruning, the models show negligible performance degradation. To further improve performance, we propose ensemble learning to train a second-level model on top of multiple first-level AMC models. With three-fold cross-validation, the second-level model can be trained on the whole dataset and achieves an F1-score improvement of at least 10%. We also prune the ensemble-learned model to remove its unnecessary parameters; after pruning, it achieves an F1-score of 0.965 when the signal-to-noise ratio is greater than 6 dB.
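Two of the pre-processing steps named above, normalization and label smoothing, are standard enough to sketch. Below is a minimal Python illustration assuming per-frame unit-power normalization, a smoothing factor of 0.1, and 11 modulation classes (as in the RadioML 2016.10a variant); none of these specifics are stated in the abstract.

```python
import numpy as np

def normalize_frame(iq):
    """Scale a complex I/Q frame to unit average power.

    Unit-power scaling is one common normalization; the thesis may
    use a different variant.
    """
    power = np.mean(np.abs(iq) ** 2)
    return iq / np.sqrt(power + 1e-12)

def smooth_labels(labels, num_classes, eps=0.1):
    """Turn integer class labels into smoothed one-hot targets.

    The true class keeps probability 1 - eps + eps/num_classes and the
    remainder is spread uniformly; eps = 0.1 is an assumed value.
    """
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + eps / num_classes

frame = normalize_frame(np.random.randn(128) + 1j * np.random.randn(128))
targets = smooth_labels(np.array([0, 3, 7]), num_classes=11)
print(targets.sum(axis=1))  # each smoothed target still sums to 1
```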
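The abstract does not define the three-dimensional constellation image. Purely as a hypothetical construction, one way to add a third dimension is to split each frame into time segments and stack a 2-D I/Q density histogram per segment:

```python
import numpy as np

def constellation_volume(iq, bins=32, segments=4):
    """Hypothetical 3-D constellation representation: split the frame
    into time segments and build a 2-D I/Q density histogram for each,
    giving a (segments, bins, bins) volume. This is a guess at the
    idea, not the construction used in the thesis."""
    chunks = np.array_split(iq, segments)
    grids = [np.histogram2d(c.real, c.imag, bins=bins,
                            range=[[-2, 2], [-2, 2]])[0]
             for c in chunks]
    return np.stack(grids)

volume = constellation_volume(np.random.randn(128) + 1j * np.random.randn(128))
print(volume.shape)  # (4, 32, 32)
```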
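The attention-augmented LSTM structure can be sketched in PyTorch as follows. The additive attention pooling, the layer sizes, and the 128-sample I/Q frame shape are illustrative assumptions, not the thesis architecture:

```python
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """LSTM classifier with additive attention pooling over time.

    A sketch of the attention-plus-LSTM idea named in the abstract;
    sizes and the exact attention form are assumptions.
    """
    def __init__(self, in_dim=2, hidden=128, num_classes=11):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # one score per time step
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                        # x: (batch, time, in_dim)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                 # weighted sum of LSTM states
        return self.head(ctx)

logits = AttentionLSTM()(torch.randn(4, 128, 2))  # 128 I/Q pairs per frame
```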
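Weight pruning as described, removing unnecessary connections with little accuracy loss, is commonly done by zeroing the smallest-magnitude weights. A sketch using PyTorch's pruning utilities on a stand-in model; the 50% sparsity level is an assumed example, not a figure from the thesis:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in dense model; in the thesis this would be a trained AMC model.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 11))

# Magnitude-based pruning: zero the smallest 50% of weights (by L1
# magnitude) in each linear layer. A real run would tune the sparsity
# level per model against the observed accuracy loss.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeroed weights in
```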
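The ensemble step can be sketched with scikit-learn's out-of-fold machinery: three-fold cross-validation lets each first-level model produce predictions for the whole dataset without label leakage, and those predictions become the training features of the second-level model. The classifiers and synthetic data below are stand-ins for the trained AMC models and the RadioML signals:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Stand-in data and first-level models; the thesis uses deep AMC models.
X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
first_level = [RandomForestClassifier(random_state=0),
               LogisticRegression(max_iter=1000)]

# 3-fold CV: each first-level model predicts out-of-fold probabilities,
# so the second-level model trains on the whole dataset without leakage.
meta = np.hstack([cross_val_predict(m, X, y, cv=3, method="predict_proba")
                  for m in first_level])
second_level = LogisticRegression(max_iter=1000).fit(meta, y)
```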
Item Metadata
Title | An ensemble automatic modulation classification model with weight pruning and data preprocessing
Creator | Yang, Xueting
Publisher | University of British Columbia
Date Issued | 2020
Genre |
Type |
Language | eng
Date Available | 2020-02-12
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0388609
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2020-05
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace