UBC Theses and Dissertations


Model-based machine learning techniques for fiber nonlinearity compensation

Luo, Shenghang


The demand for faster, high-volume data transmission over long-haul fiber-optic links has been ever-increasing. This has been driving the need to further enhance the data rates that can be carried by already deployed fibers. This is a non-trivial task, as the Kerr effect causes nonlinear distortions to grow with increasing transmission power. We are therefore faced with the unusual situation that the effective signal-to-noise ratio decreases as the transmit power increases. Conventional nonlinear compensation methods, such as digital backpropagation (DBP) and perturbation theory-based nonlinearity compensation (PB-NLC), attempt to compensate for the nonlinearity by approximating analytical solutions to signal propagation over fibers. However, their performance is limited by model mismatch and computational complexity. Recently, machine learning (ML) techniques have been used to optimize parameters of model-based approaches, which traditionally have been determined analytically from physical models. In the context of optical fiber transmission, it has been shown that ML-aided model-based approaches improve performance and/or reduce complexity. In this thesis, we consider two specific ML-aided model-based nonlinear compensation approaches: learned DBP (LDBP) and learned PB-NLC. In our first contribution, starting from the LDBP proposed in the existing literature, we propose a novel perturbation theory-aided learned digital backpropagation method. The key insight is that the number of LDBP steps can be significantly decreased by augmenting each step with a filter response, as suggested by perturbation theory. We demonstrate that our proposed approach outperforms existing LDBP in terms of both performance and complexity. Our second contribution concerns learned PB-NLC.
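To make the split-step structure that DBP (and hence LDBP) builds on concrete, the sketch below is a minimal, noise-free, lossless illustration in NumPy: forward propagation alternates a Kerr phase rotation with a dispersion step, and backpropagation applies the opposite-sign operators in reverse order. The fiber parameters and step count are illustrative assumptions, not values from the thesis, and the model omits loss, noise, and polarization effects.

```python
import numpy as np

# Illustrative single-span parameters (hypothetical, not from the thesis).
beta2 = -21.7e-27   # group-velocity dispersion [s^2/m]
gamma = 1.3e-3      # Kerr nonlinear coefficient [1/(W*m)]
span_len = 80e3     # span length [m]
fs = 32e9           # sampling rate [Hz]

def _dispersion_filter(n, dz, sign):
    # Frequency response of the dispersion operator over a step of length dz.
    w = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)  # angular frequencies
    return np.exp(sign * 1j * (beta2 / 2) * w**2 * dz)

def propagate(tx, n_steps=4):
    """Forward split-step fiber model: alternate Kerr phase rotation
    and chromatic dispersion (noise-free, lossless toy model)."""
    dz = span_len / n_steps
    h = _dispersion_filter(len(tx), dz, +1)
    x = tx.copy()
    for _ in range(n_steps):
        x = x * np.exp(1j * gamma * np.abs(x)**2 * dz)  # nonlinear step
        x = np.fft.ifft(np.fft.fft(x) * h)              # dispersion step
    return x

def dbp(rx, n_steps=4):
    """Digital backpropagation: undo each forward step in reverse order
    with opposite-sign operators."""
    dz = span_len / n_steps
    h = _dispersion_filter(len(rx), dz, -1)
    x = rx.copy()
    for _ in range(n_steps):
        x = np.fft.ifft(np.fft.fft(x) * h)               # undo dispersion
        x = x * np.exp(-1j * gamma * np.abs(x)**2 * dz)  # undo Kerr rotation
    return x
```

In this idealized setting, `dbp` with the same step count inverts `propagate` exactly; in practice, noise and the coarse step size limit DBP, and LDBP replaces the fixed per-step operators with learned filter responses.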
We conduct a comprehensive performance-complexity analysis of various learned and non-learned PB-NLC approaches presented in the literature, utilizing state-of-the-art complexity reduction methods to map out the performance-complexity trade-off among them. Our results show that least squares-based PB-NLC with clustering quantization achieves the best performance-complexity trade-off. We advance the state of the art of learned PB-NLC by developing a bi-directional recurrent neural network that generates features similar to those obtained from perturbation theory, which are then used as input for learned nonlinearity compensation. We demonstrate that our proposed feature-learning network achieves performance similar to that of least-squares PB-NLC, but with reduced complexity.
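The perturbation-theory features underlying PB-NLC are symbol triplets of the form x[n+m]·x[n+k]·conj(x[n+m+k]), weighted by coefficients that a least-squares fit can estimate from transmitted/received symbol pairs. The sketch below is a minimal illustration of that idea under assumed helper names (`triplet_features`, `fit_pbnlc`) and a tiny index window; it is not the thesis's implementation, which additionally applies complexity-reduction techniques such as coefficient quantization.

```python
import numpy as np

def triplet_features(x, M=1):
    """First-order perturbation features x[n+m]*x[n+k]*conj(x[n+m+k])
    for offsets |m|, |k| <= M (a deliberately small toy window)."""
    n = len(x)
    feats = []
    for m in range(-M, M + 1):
        for k in range(-M, M + 1):
            f = np.zeros(n, dtype=complex)
            for t in range(n):
                a, b, c = t + m, t + k, t + m + k
                if 0 <= a < n and 0 <= b < n and 0 <= c < n:
                    f[t] = x[a] * x[b] * np.conj(x[c])
            feats.append(f)
    return np.stack(feats, axis=1)  # shape (n, (2M+1)**2)

def fit_pbnlc(rx, tx, M=1):
    """Least-squares estimate of perturbation coefficients c such that
    rx - triplet_features(rx) @ c approximates the transmitted symbols tx."""
    F = triplet_features(rx, M)
    c, *_ = np.linalg.lstsq(F, rx - tx, rcond=None)
    return c
```

The feature-learning contribution in the thesis replaces the hand-derived `triplet_features` stage with a bi-directional recurrent neural network that learns comparable features directly from the received symbol sequence.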



Attribution-NonCommercial-NoDerivatives 4.0 International