Volume 9, Issue 1, January – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Autoencoders Based Digital Communication Systems

Anjana P K (Student)1; Ameenudeen P E (Assistant Professor)2
1,2Department of ECE, College of Engineering Trivandrum, Trivandrum, India

Abstract:- To demonstrate over-the-air transmission, it is essential to frame, train, and execute transmission systems represented by neural networks. Autoencoders are used to train the entire system, composed of transmitter and receiver, as one. This establishes a vital new style of thinking about communication system design: a point-to-point regeneration task that optimises the Tx and Rx systems in a single process by interpreting the transmission system as an autoencoder. Several autoencoders, namely a deep autoencoder, a convolutional autoencoder, and the simplest possible autoencoder, are simulated in Python. Lastly, BLER versus Eb/N0 for the (2,2) and (7,4) autoencoders is plotted.

Keywords:- Autoencoder, Deep Learning, End-to-End Communication.

I. INTRODUCTION

The basic issue of communication involves "replicating at one end, either exactly or approximately, a signal selected at some other end", or, reliably conveying a message from a source to a recipient over a medium using a Tx and a Rx [1]. To obtain a theoretically ideal solution to this problem in practice, Tx and Rx are often separated into many computational units, each dedicated to a certain sub-task, such as encoding, channel coding, modulation, and equalisation. Although such an architecture is considered to be suboptimal, it offers the benefit of allowing each element to be separately studied and tuned, resulting in today's highly effective and reliable systems. DL-based algorithms, on the other hand, return to the basic formulation of the communication problem and strive to optimise transmitter and receiver together without any arbitrarily added block structure. Even though today's schemes have been deeply optimised over past decades, and it appears challenging to compete with them in terms of efficiency, we are drawn to the conceptual simplification of a transmission network that is trained to broadcast over any kind of medium with no prior mathematical modelling and analysis.

A DNN that has been trained to recreate its input at the output is referred to as an autoencoder. Because the data must transit through each level, the network must discover a robust representation of the input signal at every level. An autoencoder is a form of ANN that uses machine learning algorithms to develop an optimal encoding of unlabelled input. By trying to recreate the data from the encoding, the code is checked and refined. By instructing the network to disregard inconsequential input ("noise/interference"), the autoencoder creates a representation of a set of data, generally for feature extraction. In this paper, section II describes the autoencoder concept, section III explains how the autoencoder is simulated, section IV presents the results, and section V concludes the paper.

II. AUTOENCODER CONCEPT

A channel autoencoder is depicted in Fig. 1. The input symbol is represented as a one-hot vector. The Tx consists of multiple dense layers of an FNN. The last dense layer is modified so that, for every encoded input symbol, it outputs two values that represent a complex number with a real part and an imaginary part. A normalization layer constrains the physical properties of the transmitted signal x. The channel is modelled as AWGN with a fixed variance, where Eb/N0 denotes the ratio of energy per bit (Eb) to noise power spectral density (N0). The Rx is likewise an FNN, with softmax activation in its last layer. Every training instance sees a different noise realisation: in the forward pass, a noise layer perturbs the transmitted signal, while the noise is disregarded in the backward pass.

Fig 1 Autoencoder Representation
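
For concreteness, a minimal Keras sketch of such a channel autoencoder is given below. The (n, k) = (7, 4) configuration, the layer widths, and the training Eb/N0 are illustrative assumptions, not the paper's exact settings; the normalization layer is realised here as an energy constraint, and the GaussianNoise layer injects a fresh noise realisation in each training forward pass, matching the description above.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical (n, k) = (7, 4) channel autoencoder along the lines of Fig. 1:
# M = 2^k one-hot messages are sent over n channel uses.
k, n = 4, 7
M = 2 ** k
R = k / n                                         # rate in bits per channel use
EbN0_dB = 7.0                                     # training SNR (assumption)
noise_std = np.sqrt(1.0 / (2 * R * 10 ** (EbN0_dB / 10)))

inputs = layers.Input(shape=(M,))                 # one-hot message s
h = layers.Dense(M, activation="relu")(inputs)    # Tx dense (FNN) layers
h = layers.Dense(n, activation="linear")(h)
# Normalization layer: fix the energy of the transmitted block x to n,
# i.e. unit average power per channel use.
x = layers.Lambda(lambda t: np.sqrt(n) * tf.math.l2_normalize(t, axis=1))(h)
# AWGN channel with fixed variance; active only in the training forward pass.
y = layers.GaussianNoise(noise_std)(x)
h = layers.Dense(M, activation="relu")(y)         # Rx FNN
outputs = layers.Dense(M, activation="softmax")(h)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy")

# Train on randomly drawn one-hot messages (the target equals the input).
msgs = np.eye(M)[np.random.randint(0, M, 20000)]
autoencoder.fit(msgs, msgs, epochs=20, batch_size=256, verbose=0)
```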

DL (also referred to as structured learning) is an ML approach that relies on ANNs and includes reinforcement learning. Learning can be supervised, semi-supervised, or unsupervised.

 Deep Learning
ML (particularly DL) provides strong self-operating methods for transmission systems to understand the information available in the spectrum and to adapt to movements within it. Wireless communication combines a variety of waveforms, channels, congestion, and interference effects, each with an individual complicated structure that changes rapidly. Because of the distributed character of over-the-air channels, data in wireless communications grows in volume and rate and is exposed to interference along with many other security hazards. Conventional designs and classical ML tactics frequently fail to capture the weak relations among widely distributed information spectra and transmission models, whereas DL offers a possible method of satisfying the data rate, speed, reliability, and security needs of wireless communication systems. One motivating example comes from signal classification, where a receiver must distinguish received signals based on features of the waveform, such as the modulation applied at the transmitter, which embeds the data onto the carrier signal by varying its parameters.

 End To End Communication
Enhanced DNNs are used for transmitting and receiving in an end-to-end communication setup. However, unknown CSI prevents end-to-end training by blocking the backward propagation of errors, which is used for tuning the weights of the DNNs.

III. END TO END LEARNING OF CODED SYSTEMS

We compare the effectiveness of autoencoder-based end-to-end communication to that of traditional data transmission using channel coding. A traditional transmission chain is composed of many blocks for channel encoding/decoding as well as modulation/demodulation. An autoencoder-based approach lacks such explicit blocks, instead attempting to optimise the system end to end while conforming to the same structural characteristics. We examine the performance of autoencoder models that are analogous to standard channel-coded communication systems over the AWGN channel using these system characteristics.

 System Model
To reduce processing complexity, the blocks are split into different stages. At the transmitter, an information block of K bits is fed to the channel encoder, which outputs a block of N bits after channel encoding, giving a coding rate R = K/N. The data block is then fed to a modulator of order Mmod, where the data bits are divided into codewords of size kmod = log2(Mmod) and each codeword is mapped to a point in the signal constellation with given amplitudes for the I and Q signals. At the receiver, the reverse process takes place: incoming symbols are mapped to codewords and the codewords are grouped serially to reproduce the block. This is then fed to the channel decoder, where the N-bit block is converted back to a K-bit block, which is the estimate of the transmitted information block.

Fig 2 A Standard Communications System Setup made up of Encoding and Decoding Blocks
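
To make the bit-level bookkeeping concrete, the following toy sketch walks one K-bit block through this chain. The rate-1/2 repetition "code" and the QPSK mapping are hypothetical stand-ins chosen for brevity, not the schemes evaluated later in the paper.

```python
import numpy as np

# K info bits -> N coded bits, so R = K/N = 1/2; QPSK means Mmod = 4.
K, N = 4, 8
Mmod = 4
k_mod = int(np.log2(Mmod))                       # bits per modulation symbol = 2

info_bits = np.random.randint(0, 2, K)
coded = np.repeat(info_bits, N // K)             # stand-in channel encoder

# Group the N coded bits into codewords of k_mod bits, then map each codeword
# to an I/Q constellation point (Gray-mapped QPSK, unit average energy).
qpsk = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
tx_symbols = np.array([qpsk[tuple(cw)]
                       for cw in coded.reshape(-1, k_mod)]) / np.sqrt(2)

# Receiver: nearest constellation point -> codeword -> coded block -> decode.
rx_codewords = [min(qpsk, key=lambda b: abs(qpsk[b] / np.sqrt(2) - z))
                for z in tx_symbols]
rx_coded = np.array(rx_codewords).reshape(-1)
decoded = rx_coded.reshape(K, N // K)[:, 0]      # undo repetition (noiseless)
assert np.array_equal(decoded, info_bits)
```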

IV. RESULTS

The simplest possible autoencoder is shown in Fig. 3. It is trained for 50 epochs and reaches a validation loss of around 0.09. In the reconstruction plots, the first row is the input and the second row is the output; with this basic approach, a small amount of the input detail is lost.

Fig 3 Simple Autoencoder
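
As a point of reference, a minimal Keras sketch of such a single-hidden-layer autoencoder is shown below. The MNIST data and the 32-unit code size are assumptions, since the paper does not list the exact dataset or layer sizes.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.datasets import mnist

# Minimal autoencoder sketch: one dense encoding layer, one dense decoding
# layer, trained to reconstruct flattened 28x28 images.
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = layers.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)     # compressed code
outputs = layers.Dense(784, activation="sigmoid")(code)  # reconstruction

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
                validation_data=(x_test, x_test))
```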

A deep autoencoder is shown in Fig. 4. It is trained for 100 epochs, giving a test loss near 0.10 and a training loss of nearly 0.11, i.e. a gap between training and test loss of around 0.01. Again, the first row is the input and the second row is the output; with the deep autoencoder the output is very similar to the input.

Fig 4 Deep Autoencoder
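
A deeper variant, continuing the previous sketch (same imports and x_train/x_test arrays), stacks dense layers around the bottleneck; the layer widths are assumptions.

```python
from tensorflow.keras import layers, Model

# Deep autoencoder sketch: stacked dense layers narrow the code to 32 units
# and then expand it back to the 784-dimensional input.
inputs = layers.Input(shape=(784,))
h = layers.Dense(128, activation="relu")(inputs)
h = layers.Dense(64, activation="relu")(h)
code = layers.Dense(32, activation="relu")(h)
h = layers.Dense(64, activation="relu")(code)
h = layers.Dense(128, activation="relu")(h)
outputs = layers.Dense(784, activation="sigmoid")(h)

deep_autoencoder = Model(inputs, outputs)
deep_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
deep_autoencoder.fit(x_train, x_train, epochs=100, batch_size=256,
                     validation_data=(x_test, x_test))
```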
A convolutional autoencoder is depicted in Fig. 5. It is a natural choice when the inputs are images, since a convolutional neural network exploits their spatial structure. The encoder contains Conv2D and MaxPooling2D layers, whereas the decoding section contains Conv2D and UpSampling2D layers. The encoded representations have shape 8x4x4, so we reshape them to 4x32 so that they can be displayed as grayscale images, as in Fig. 6.

Fig 5 Convolutional Autoencoder

Fig 6 Grey Scale Image
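
A sketch of this convolutional architecture is given below. The filter counts follow the well-known Keras tutorial layout and are assumptions, not necessarily the paper's exact model.

```python
from tensorflow.keras import layers, Model

# Convolutional autoencoder sketch: Conv2D + MaxPooling2D in the encoder,
# Conv2D + UpSampling2D in the decoder, for 28x28x1 grayscale images.
inputs = layers.Input(shape=(28, 28, 1))
h = layers.Conv2D(16, (3, 3), activation="relu", padding="same")(inputs)
h = layers.MaxPooling2D((2, 2), padding="same")(h)
h = layers.Conv2D(8, (3, 3), activation="relu", padding="same")(h)
h = layers.MaxPooling2D((2, 2), padding="same")(h)
h = layers.Conv2D(8, (3, 3), activation="relu", padding="same")(h)
encoded = layers.MaxPooling2D((2, 2), padding="same")(h)   # shape (4, 4, 8)

h = layers.Conv2D(8, (3, 3), activation="relu", padding="same")(encoded)
h = layers.UpSampling2D((2, 2))(h)
h = layers.Conv2D(8, (3, 3), activation="relu", padding="same")(h)
h = layers.UpSampling2D((2, 2))(h)
h = layers.Conv2D(16, (3, 3), activation="relu")(h)        # 16x16 -> 14x14
h = layers.UpSampling2D((2, 2))(h)
decoded = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(h)

conv_autoencoder = Model(inputs, decoded)
conv_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# The (4, 4, 8) codes can be reshaped to (4, 32) to display them as the
# grayscale images of Fig. 6.
```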

For image denoising, a convolutional autoencoder is used again. Compared to the previous convolutional autoencoder (Fig. 5), we use a slightly different model with a larger number of filters per layer in order to improve the quality of the regenerated output. The autoencoder is trained for 100 epochs. A noisy image is obtained as in Fig. 7 and the denoised output is displayed in Fig. 8. That is, the autoencoder regenerates the input at its output without any prior knowledge. Image denoising is one of the most common requirements in the image processing field.

Fig 7 Noisy Image

Fig 8 Denoised Output
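
The denoising setup, as a sketch continuing the sketches above (reusing the x_train/x_test arrays and a convolutional model like conv_autoencoder, here assumed to use more filters per layer), corrupts the inputs and trains on (noisy input, clean target) pairs; the noise factor of 0.5 is an assumption.

```python
import numpy as np

# Reshape the flattened data back to images and add Gaussian noise.
x_train_img = x_train.reshape(-1, 28, 28, 1)
x_test_img = x_test.reshape(-1, 28, 28, 1)
noise_factor = 0.5
x_train_noisy = np.clip(x_train_img + noise_factor *
                        np.random.normal(size=x_train_img.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test_img + noise_factor *
                       np.random.normal(size=x_test_img.shape), 0.0, 1.0)

# Train the convolutional model to map noisy images back to clean ones.
conv_autoencoder.fit(x_train_noisy, x_train_img, epochs=100, batch_size=128,
                     validation_data=(x_test_noisy, x_test_img))
```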
For an autoencoder in an end-to-end communication system, the goal is to learn representations x of the messages s that are resilient to the channel impairments mapping x to y (e.g., noise, fading, distortion, and so on), such that the transmitted message can be recovered with a low probability of error. In other words, while conventional autoencoders remove redundancy from input data in order to compress it, this autoencoder (the "channel autoencoder") often adds redundancy, learning an intermediate representation that is robust to channel perturbations [2].

Fig. 9 depicts the learned representations x of the messages for (n, k) = (2, 2) as complex constellation points, where the x and y axes correspond to the first and second transmitted symbols respectively. Fig. 10 and Fig. 11 show BLER versus Eb/N0 comparisons for the (2,2) and (7,4) communication systems. Interestingly, while the autoencoder attains roughly the same BLER as uncoded BPSK for (2,2), it outperforms it for (7,4) over the whole Eb/N0 range. This suggests that it has learned some form of joint coding and modulation scheme that yields a coding gain. For a genuinely fair comparison, this solution would have to be evaluated against a higher-order modulation scheme employing a channel code (or the optimum sphere packing in eight dimensions). A performance comparison across multiple channel types and parameters (n, k) with varied baselines is beyond the scope of this work and is left for future research.

 BER Performance

Figures 12-14 show the simulated BER performance of different autoencoder models with R = 1/2, 1/3, 1/4 and their baseline systems using convolutional coding with the respective code rates and BPSK modulation. The selected block length for the baseline system is K = 800 and the constraint length of the convolutional encoder/decoder is taken as 7. It can be observed that the BER performance of the autoencoder improves as the message size increases. For a given code rate, the M = 2 model has almost the same performance as uncoded BPSK, while the M = 256 model yields a much improved BER performance closer to the baseline. This improvement is achieved because the model has more degrees of freedom and more flexibility for a better end-to-end optimization when the message size is large.
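
Error-rate curves of this kind can be estimated by sweeping Eb/N0 and counting decoding errors, as in the hedged sketch below. Here `encoder` and `decoder` stand for the Tx and Rx halves split out of a trained channel autoencoder; these names, like the Monte Carlo settings, are assumptions for illustration.

```python
import numpy as np

def block_error_rate(encoder, decoder, k, n, ebno_db, n_blocks=100000):
    """Estimate BLER of a trained (n, k) channel autoencoder at one Eb/N0."""
    R = k / n
    sigma = np.sqrt(1.0 / (2 * R * 10 ** (ebno_db / 10.0)))
    msgs = np.random.randint(0, 2 ** k, n_blocks)
    onehot = np.eye(2 ** k)[msgs]
    x = encoder.predict(onehot, verbose=0)             # learned codewords
    y = x + sigma * np.random.normal(size=x.shape)     # AWGN channel
    est = decoder.predict(y, verbose=0).argmax(axis=1) # hard decision
    return np.mean(est != msgs)

# Sweep Eb/N0 for the (7,4) autoencoder to trace a curve like Fig. 11.
for ebno in range(-2, 10):
    print(ebno, block_error_rate(encoder, decoder, 4, 7, ebno))
```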

Fig 9 Constellation Produced by (2,2) Autoencoder

Fig 12 R=1/2 Systems

Fig 10 BLER vs Eb/No for (2,2) Autoencoder

Fig 11 BLER vs Eb/No for (7,4) Autoencoder

Fig 13 R=1/3 System

Fig 14 R=1/4 System

V. CONCLUSION

Autoencoders are widely used in communication as well as for image denoising. The concept of autoencoders helps to simplify the entire communication system. With autoencoders, such systems can be built easily without any prior knowledge of the channel. Various autoencoders are built in Python and their efficiency is compared with that of transmission using Hamming codes. Various applications of DL in the physical layer are seen in this project. A BER comparison of autoencoders with R = 1/2, 1/3, 1/4 and M = 2, 4, 16, 256 is observed. A key open question is how such a structure can be trained over an arbitrary medium for which the channel model is unknown. Furthermore, it is critical to enable on-the-fly fine-tuning in order to adjust the system to changing channel conditions for which it was not previously trained. Another critical challenge is determining how to increase the block size in order to improve the autoencoder's efficiency. We believe these limitations can be overcome by changes implemented in a future phase of the work.

REFERENCES

[1]. S. Dörner, S. Cammerer, J. Hoydis, and S. ten Brink, "Deep learning based communication over the air," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 132-143, Feb. 2018, doi: 10.1109/JSTSP.2017.2784180.
[2]. T. O'Shea and J. Hoydis, "An introduction to deep learning for the physical layer," IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 4, pp. 563-575, 2017.
[3]. F. A. Aoudia and J. Hoydis, "End-to-end learning of communications systems without a channel model," in 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pp. 298-303, IEEE, 2018.
[4]. H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, "Deep learning based semantic communications: An initial investigation," in GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6, IEEE, 2020.
[5]. S. Dörner, S. Cammerer, J. Hoydis, and S. ten Brink, "Deep learning based communication over the air," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 132-143, 2018.
[6]. D. Silver et al., "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484-489, 2016.
[7]. M. Zorzi, A. Zanella, A. Testolin, M. D. F. De Grazia, and M. Zorzi, "Cognition-based networks: A new perspective on network optimization using learning and distributed intelligence," IEEE Access, vol. 3, pp. 1512-1530, 2015.
[8]. M. Abadi et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv:1603.04467, 2016. [Online]. Available: http://tensorflow.org/
[9]. Y.-S. Jeon, S.-N. Hong, and N. Lee, "Blind detection for MIMO systems with low-resolution ADCs using supervised learning," in Proc. IEEE Int. Conf. Commun., May 2017, pp. 1-6, doi: 10.1109/ICC.2017.7997434.
[10]. M. Abadi and D. G. Andersen, "Learning to protect communications with adversarial neural cryptography," arXiv:1610.06918, 2016.
[11]. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016.
[12]. T. J. O'Shea, J. Corgan, and T. C. Clancy, "Unsupervised representation learning of structured radio communication signals," in Proc. IEEE Int. Workshop Sensing, Processing and Learning for Intelligent Machines (SPLINE), 2016, pp. 1-5.
[13]. R. Al-Rfou, G. Alain, A. Almahairi et al., "Theano: A Python framework for fast computation of mathematical expressions," arXiv:1605.02688, 2016.
[14]. D. George and E. Huerta, "Deep neural networks to enable real-time multimessenger astrophysics," arXiv:1701.00008, 2016.
[15]. D. Maclaurin, D. Duvenaud, and R. P. Adams, "Gradient-based hyperparameter optimization through reversible learning," in Proc. 32nd Int. Conf. Mach. Learn. (ICML), 2015.
[16]. J. Qadir, K.-L. A. Yau, M. A. Imran, Q. Ni, and A. V. Vasilakos, "IEEE Access Special Section Editorial: Artificial Intelligence Enabled Networking," IEEE Access, vol. 3, pp. 3079-3082, 2015.
[17]. J. Portilla et al., "Image denoising using scale mixtures of Gaussians in the wavelet domain," IEEE Transactions on Image Processing, vol. 12, no. 11, pp. 1338-1351, 2003.
[18]. P. Vincent et al., "Extracting and composing robust features with denoising autoencoders," in Proc. 25th International Conference on Machine Learning (ICML), ACM, 2008.
[19]. Python - Peak Signal-to-Noise Ratio (PSNR). Available: https://www.geeksforgeeks.org/python-peak-signal-to-noise-ratio-psnr/
