

Genetic Algorithm based Weights Optimization of Artificial Neural Network

Ms. Dharmistha D. Vishwakarma
Research Scholar, Dept. of Electrical Engineering, Faculty of Tech. & Engg., M.S.University of Baroda, Vadodara, Gujarat, India

Abstract

To develop an accurate process model using an Artificial Neural Network (ANN), the learning (training) and validation processes are among the most important steps. During training, a set of input-output patterns is presented repeatedly to the ANN, and the weights of all the interconnections between neurons are adjusted until the specified inputs yield the desired outputs. Through this process, the ANN learns the correct input-output response behaviour. For validation, the ANN is subjected to input patterns unseen during training, and further adjustments are introduced to make the model more reliable and robust; validation is also used to determine the stopping point before overfitting occurs. A fitting criterion, such as the mean square error (MSE) or sum square error (SSE) calculated between the target and the network output, may be introduced to assess the model validity. Research on using genetic algorithms for neural network learning is increasing. In this paper, a genetic algorithm is used for weight optimization of a pre-specified neural network that decides the value of the hello interval of the Ad hoc On-Demand Distance Vector (AODV) routing protocol of a Mobile Ad Hoc Network (MANET).

Keywords

Artificial Neural Network, Genetic Algorithm, MANET, AODV

INTRODUCTION

Military applications, law enforcement and rescue operations require a data network in places where there is no fixed networking infrastructure and no time to create one. A network that works without any infrastructure, using wireless nodes, in such emergency applications is known as a Mobile Ad Hoc Network (MANET) [1]. In such a network the topology changes dynamically with the mobility of the nodes, so the routing protocol plays an important role. The reactive routing protocol AODV is widely used in MANETs. In this paper the hello interval used in AODV [2] is determined by an ANN.
Artificial Neural Networks have been shown to perform well for classification problems in many different environments, including business, science and engineering. The majority of studies rely on a gradient algorithm, typically a variation of backpropagation, to obtain the weights of the model. Although the limitations of gradient search techniques applied to complex nonlinear optimization problems, such as artificial neural networks, are well known, many researchers still choose these methods for network optimization [3]. In this work the ANN is trained using a genetic algorithm by adjusting its weights and biases in each layer. The paper is organized as follows: Section II presents Artificial Neural Network fundamentals, Section III describes the genetic algorithm, Section IV surveys work that combines ANNs and GAs, Section V describes the simulation setup and results, and Section VI concludes the paper.

ARTIFICIAL NEURAL NETWORK

Artificial Neural Networks (ANNs) are non-linear mapping structures based on the function of the human brain. They are powerful modeling tools, especially when the underlying data relationship is unknown. The key element of this paradigm is the novel structure of the information processing system, composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. An ANN is configured for a specific application through a learning process. Learning in biological systems involves adjustment of the synaptic connections that exist between the neurons.
ANNs can identify and learn correlated patterns between input data sets and corresponding target values. After training, ANNs can be used to predict the output for new input data. ANNs imitate the learning process of the human brain and can handle problems involving non-linear and complex data even if the data are imprecise and noisy [4]. Thus they are ideally suited to the modeling of complex and often non-linear data. ANNs have great capacity for predictive modeling: all the characteristics describing an unknown situation can be presented to the trained ANN, which then predicts the system response.
An ANN is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem, as in Figure 1. An artificial neuron is a device with many inputs and one output. Each connection has a weight factor, and these weights are adjusted in a training process. There are many types of neural networks for various applications, e.g. back-propagation networks, feed-forward networks and multilayer perceptrons; a commonly used type is the multilayer perceptron (MLP). A neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron is trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, a firing rule is used to determine whether to fire or not. The firing rule determines how one calculates whether a neuron should fire for any input pattern; it relates to all input patterns, not only the ones on which the node was trained.
[Figure 1: interconnected processing elements (neurons) of an ANN]
Each interconnection in an ANN has a strength that is expressed by a number referred to as a weight. Learning is accomplished by adjusting the weights of the interconnections according to some learning algorithm. Learning methods in neural networks can be broadly classified into three basic types:
• Supervised learning
• Unsupervised learning
• Reinforced learning
In an MLP, supervised learning is used for adjusting the weights. The graphic representation of this learning is given in Figure 2. Once the ANN is trained properly, it can be used to make decisions in the application.
[Figure 2: graphic representation of supervised learning]
Neural networks are adjusted, or trained, so that a particular input leads to a specific target output. The ANN weights are adjusted based on a comparison of the output and the target, until the network output matches the target. Typically, many such input/target pairs are needed to train a network.
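As a concrete illustration of this supervised weight adjustment, the following sketch trains a small multilayer perceptron on input/target pairs with plain backpropagation in Python. The network size, learning rate and data are illustrative assumptions, not the configuration used in this work.

# Minimal sketch of supervised MLP training with numpy.
# Network size, learning rate and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hidden layer: 3 inputs -> 5 hidden -> 1 output
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)

def forward(x):
    h = sigmoid(W1 @ x + b1)          # hidden activations
    y = sigmoid(W2 @ h + b2)          # network output
    return h, y

# Toy input/target pairs (illustrative only)
X = rng.uniform(size=(20, 3))
T = rng.uniform(size=(20, 1))

lr = 0.5
for epoch in range(1000):
    for x, t in zip(X, T):
        h, y = forward(x)
        # Backpropagate the squared error between output y and target t
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
        W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid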

GENETIC ALGORITHM

Genetic algorithms are stochastic search techniques that guide a population of solutions towards an optimum using the principles of evolution and natural genetics. In recent years, genetic algorithms have become a popular optimization tool for many areas of research, including the field of system control, control design, science and engineering. Significant research exists concerning genetic algorithms for control design and off-line controller analysis.
Genetic algorithms are inspired by the evolution of populations. In a particular environment, individuals which better fit the environment are able to survive and hand down their chromosomes to their descendants, while less fit individuals become extinct. The aim of genetic algorithms is to use simple representations to encode complex structures and simple operations to improve these structures. Genetic algorithms are therefore characterized by their representation and operators. In the original genetic algorithm an individual chromosome is represented by a binary string. The bits of each string are called genes and their varying values alleles. A group of individual chromosomes is called a population. Basic genetic operators include reproduction, crossover and mutation [5]. Genetic algorithms are especially capable of handling problems in which the objective function is discontinuous, non-differentiable, non-convex, multimodal or noisy. Since the algorithms operate on a population instead of a single point in the search space, they climb many peaks in parallel and therefore reduce the probability of becoming trapped in a local minimum.
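The basic loop of representation and operators described above can be sketched as follows in Python. The chromosome length, tournament selection, one-point crossover, bit-flip mutation and the toy "one-max" fitness function are illustrative assumptions rather than a specific published configuration.

# Minimal sketch of a genetic algorithm on binary-string chromosomes.
# Fitness function, string length and rates are illustrative assumptions.
import random

GENES = 32            # bits per chromosome
POP_SIZE = 40
P_CROSS, P_MUT = 0.8, 0.01

def fitness(chrom):
    # Toy objective: maximise the number of 1-bits ("one-max")
    return sum(chrom)

def select(pop):
    # Tournament selection of size 2
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # One-point crossover with probability P_CROSS
    if random.random() < P_CROSS:
        cut = random.randrange(1, GENES)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]

def mutate(chrom):
    # Bit-flip mutation applied gene by gene
    return [1 - g if random.random() < P_MUT else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for generation in range(100):
    nxt = []
    while len(nxt) < POP_SIZE:
        c1, c2 = crossover(select(pop), select(pop))
        nxt.extend([mutate(c1), mutate(c2)])
    pop = nxt[:POP_SIZE]

best = max(pop, key=fitness)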
Genetic algorithms and artificial neural networks are both techniques for learning and optimization which have been adopted from biological systems. Neural networks use inductive learning and in general require examples, while GAs use deductive learning and require an objective evaluation function. A synergism between the two techniques has been recognized which can be applied to enhance the performance of each, in what may be referred to as evolutionary neural networks. An area that has attracted the most interest is the use of GAs as an alternative learning technique in place of gradient-descent methods such as error backpropagation. Supervised learning algorithms suffer from the possibility of getting trapped in suboptimal solutions. GAs enable the learning process to escape from entrapment in local minima in instances where the backpropagation algorithm converges prematurely. Furthermore, because GAs do not operate in the task domain, they may be used for weight learning in recurrent networks, where suitable training algorithms are still a problem. Studies have attempted to take advantage of both techniques. Algorithms which combine GAs and error backpropagation have been shown to exhibit better convergence properties than pure backpropagation: the GA is used to rapidly locate the region of optimal performance, and gradient-descent backpropagation is then applied in this region. GAs have also been studied for generalized structure/parameter learning in neural systems. This type of learning combines, as complementary tools, inductive learning through synaptic weight adjustment and deductive learning through the modification of network topology, to obtain automatic adaptation of system knowledge of the domain environment. Such hybrid systems are capable of finding both the weights and the architecture of a neural network, including the number of layers, the number of processing elements per layer and the connectivity between processing elements. In summary, GAs have been used in the area of neural networks for three main tasks: training the weights of the connections, designing the structure of the network, and finding an optimal learning rule.
A sequence of input signals is fed to both the plant and the neural network and the output signals from both are compared. The absolute difference is computed, and the sum of all errors over the whole sequence is used as a measure of fitness for the particular network under consideration, as shown in Figure 3. Genetic operators can then be applied to create a new population.
[Figure 3: fitness evaluation of a candidate network against the plant]
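A minimal sketch of this fitness evaluation, assuming a hypothetical plant model and a small fixed-topology network, is given below. The chromosome is simply the flattened vector of all weights and biases, and the GA would minimise the summed absolute error (or equivalently maximise its negative as fitness).

# Sketch of the fitness evaluation described above: feed the same input
# sequence to the "plant" and to the network, and use the sum of absolute
# output differences as the quantity the GA minimises.
# The plant model and the network topology are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def plant(x):
    # Hypothetical reference system the network should imitate
    return np.sin(x).sum()

def network(x, weights):
    # Fixed topology: 3 inputs -> 4 hidden (tanh) -> 1 output,
    # with all 21 weights/biases taken from one chromosome
    W1 = weights[:12].reshape(4, 3)
    b1 = weights[12:16]
    W2 = weights[16:20].reshape(1, 4)
    b2 = weights[20]
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

def error(weights, inputs):
    # Sum of absolute differences over the whole input sequence
    return sum(abs(plant(x) - network(x, weights)) for x in inputs)

inputs = [rng.uniform(-1, 1, size=3) for _ in range(50)]
candidate = rng.normal(size=21)      # one chromosome = all 21 parameters
print(error(candidate, inputs))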

STATE OF THE ART

Marwan et al. [6] describe a genetic algorithm to train the weights of a neural network whose structure has been decided in advance. The structure includes the number of layers, the type and number of neurons, the pattern of connections, the permissible ranges of trainable connection weights, and the values of constant connection weights, if any. The initial set of solutions is produced by a random number generator. Each solution in the population is a string comprising n elements, where n is the number of trainable connections. They use binary encoding, where each element is 16 bits long and holds the value of a trainable connection; they found that 16 bits gave adequate resolution for both feedforward and feedback connections. From the point of view of the GA, all connection weights are handled in the same way, i.e. training of feedback connections is carried out identically to training of feedforward connections (unlike the backpropagation algorithm). The authors of [7] also use genetic algorithms for the training of recurrent neural networks, pointing out that applying the backpropagation gradient-descent algorithm to recurrent neural networks is more complicated than for feedforward networks due to the many attractors in the state space. Related work has investigated the use of genetic algorithms for automated selection of parameters in an ad hoc networking system, providing experimental results demonstrating that the genetic algorithm can optimize for different classes of operating conditions; it also compares genetic algorithm optimization against hand-tuning in a complex, realistic scenario and shows how the genetic algorithm provides better performance.
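The 16-bit encoding described for [6] can be sketched as follows; the permissible weight range [-1, 1] is an illustrative assumption, since the actual ranges are chosen per connection in that work.

# Sketch of a 16-bit gene holding one trainable connection weight,
# mapped linearly onto a permissible range (range here is an assumption).
W_MIN, W_MAX = -1.0, 1.0
BITS = 16
LEVELS = 2**BITS - 1

def encode(weight):
    # Map a real weight in [W_MIN, W_MAX] to a 16-bit integer gene
    frac = (weight - W_MIN) / (W_MAX - W_MIN)
    return int(round(frac * LEVELS))

def decode(gene):
    # Map the 16-bit gene back to a real weight
    return W_MIN + (gene / LEVELS) * (W_MAX - W_MIN)

# A chromosome for n trainable connections is the concatenation of n genes
weights = [0.25, -0.7, 0.01]
chromosome = [encode(w) for w in weights]
recovered = [decode(g) for g in chromosome]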

SIMULATION SETUP & RESULTS

The simulations were carried out using the Qualnet 5 simulator and MATLAB to evaluate the performance of the AODV routing protocol on a MANET. A MANET with 50 nodes was created in the Qualnet 5 simulator, and the results were obtained using hybrid simulation in MATLAB and QUALNET [2] with different performance metrics used for wireless ad hoc networks to evaluate the routing protocol. The genetic algorithm is used to train the ANN, which decides the hello interval for the AODV routing protocol. Figure 4 shows the comparison between the ANN trained without a GA and the GA-based ANN. From the figure it can be observed that the difference between the traditionally trained ANN and the GA-based ANN is minor, less than approximately 0.2, as shown in Figure 5.
[Figure 4: comparison of the ANN trained without GA and the GA-based ANN]
[Figure 5: difference between the two trained ANNs]
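A hedged sketch of how such a comparison could be made is shown below: both trained models are evaluated on the same validation inputs and the per-sample difference in the predicted hello interval is examined. The functions predict_bp and predict_ga are hypothetical stand-ins for the two trained networks, and the data are illustrative, not the simulation outputs reported above.

# Hypothetical comparison of a conventionally trained ANN and a GA-trained
# ANN on the same validation inputs; the two predictors are placeholders.
import numpy as np

def predict_bp(x):
    return 1.0 + 0.5 * np.tanh(x.sum())      # placeholder for the BP-trained ANN

def predict_ga(x):
    return 1.0 + 0.48 * np.tanh(x.sum())     # placeholder for the GA-trained ANN

rng = np.random.default_rng(2)
validation = [rng.uniform(size=4) for _ in range(100)]

diffs = [abs(predict_bp(x) - predict_ga(x)) for x in validation]
print(max(diffs))    # in the paper this difference stays below roughly 0.2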

CONCLUSION & FUTURE WORK

In this paper we address the problem of deciding the ANN-based hello interval using a genetic algorithm. The weights in the different layers of the network are optimized using a genetic algorithm. The weights and biases are trained satisfactorily compared with the traditionally trained ANN, and the relative difference between the traditional ANN and the GA-ANN is also satisfactory. The work can be extended to optimize the training and performance of the ANN further by also using the genetic algorithm to decide the number of internal layers, the biases, the weights and the number of nodes in each layer.

ACKNOWLEDGMENT

The project work was carried out at the Dept. of Electrical Engineering, Faculty of Tech. & Engg., M.S.University of Baroda, Vadodara, Gujarat, India. The authors are thankful to the Department of Electrical Engineering for technical help in doing this work.

References

  1. Dharmistha D. Vishwakarma and Satish K. Shah, "Performance Optimization of Reactive Routing Protocol Using Fuzzy Logic for MANET in Qualnet," in Proceedings of the 2010 World Congress in Computer Science, Computer Engineering, and Applied Computing (WORLDCOMP'10), Wireless Networks Division, ICWN'10, USA, pp. 76-80, July 12-15, 2010.
  2. Satish K. Shah and Dharmistha D. Vishwakarma, "Development and Simulation of Artificial Neural Network based Decision on Parametric Values for Performance Optimization of Reactive Routing Protocol for MANET using Qualnet," in Proceedings of the International Conference on Computational Intelligence and Communication Networks (CICN 2010), pp. 167-171, 26-28 Nov. 2010.
  3. K. Okyay, Artificial Neural Networks and Neural Information, Springer, ISBN 3540404082, 2003.
  4. Basheer, I. A. and Hajmeer, M., "Artificial Neural Networks: Fundamentals, Computing, Design, and Application," Journal of Microbiological Methods, vol. 43, pp. 3-31, 2000.
  5. D. Whitley, "Applying Genetic Algorithms to Neural Network Problems," International Neural Network Society, p. 230, 1988.
  6. Marwan A. Ali, Mat Sakim H. A. and Rosmiwati Mohd-Mokhtar, "Structure Optimization of Neural Controller Using Genetic Algorithm Technique," European Journal of Scientific Research, ISSN 1450-216X, vol. 38, no. 2, pp. 248-271, 2009.
  7. D. Satyanarayana, K. Kamarajan and M. Rajappan, "Genetic Algorithm Optimized Neural Networks Ensemble for Estimation of Mefenamic Acid and Paracetamol in Tablets," Acta Chim. Slov., vol. 52, pp. 440-449, 2005.