A hyper-heuristic based reinforcement-learning algorithm to train feedforward neural networks


Date

2022

Journal Title

Journal ISSN

Volume Title

Publisher

Elsevier - Division Reed Elsevier India Pvt Ltd

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

Artificial Neural Networks (ANNs) offer unique opportunities in numerous research fields. Due to their remarkable generalization capabilities, they have attracted attention for solving challenging problems such as classification, function approximation, pattern recognition and image processing, which can be quite complex to model mathematically in practice. One of the most vital issues regarding ANNs is the training process. The aim at this stage is to find the optimum values of ANN parameters such as weights and biases, which embed the whole information of the network. Traditional gradient-descent-based training methods include various algorithms, of which backpropagation is one of the best known. Such methods have been shown to exhibit outstanding results; however, they are known to have two major theoretical and computational limitations: slow convergence speed and possible local minima issues. For this reason, numerous stochastic search algorithms and heuristic methods have been individually used to train ANNs. However, methods bringing diverse features of different optimizers together are still lacking in the related literature. In this regard, this paper aims to develop a training algorithm operating within a hyper-heuristic (HH) framework, which resembles a reinforcement-learning-based machine learning algorithm. The proposed method is used to train Feed-forward Neural Networks, which are specific forms of ANNs. The proposed HH employs individual metaheuristic algorithms such as Particle Swarm Optimization (PSO), the Differential Evolution (DE) Algorithm and the Flower Pollination Algorithm (FPA) as low-level heuristics. Based on a feedback mechanism, the proposed HH learns throughout epochs and encourages or discourages the related metaheuristic.
Thus, due to its stochastic nature, the HH attempts to avoid local minima while exploiting promising regions of the search space more conveniently by increasing the probability of invoking relatively more promising heuristics during training. The proposed method is tested on both function approximation and classification problems, which have been adopted from the UCI machine learning repository and the existing literature. According to the comprehensive experimental study and statistically verified results, which point out significant improvements, the developed HH-based training algorithm achieves significantly superior results to some of the compared optimizers. (c) 2022 Karabuk University. Publishing services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
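The feedback mechanism described in the abstract, encouraging heuristics that improve the network's loss and discouraging those that do not, can be sketched as a score-based selection loop. This is a minimal illustrative sketch, not the paper's exact algorithm: the function names, the roulette-wheel selection rule, the reward/penalty values, and the toy perturbation heuristics standing in for PSO/DE/FPA are all assumptions made for illustration.

```python
import random

def hh_train(loss_fn, weights, heuristics, epochs=100, reward=1.0, penalty=0.5, seed=0):
    """Score-based hyper-heuristic loop (illustrative sketch, hypothetical API).

    heuristics: list of callables (w, rng) -> w', stand-ins for PSO/DE/FPA steps.
    A heuristic's score rises when it improves the loss and falls otherwise,
    so better-performing heuristics are invoked with higher probability.
    """
    rng = random.Random(seed)
    scores = [1.0] * len(heuristics)
    best_w, best_loss = list(weights), loss_fn(weights)
    for _ in range(epochs):
        # Roulette-wheel selection: probability proportional to current score.
        idx = rng.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[idx](best_w, rng)
        cand_loss = loss_fn(candidate)
        if cand_loss < best_loss:
            best_w, best_loss = candidate, cand_loss
            scores[idx] += reward                          # encourage success
        else:
            scores[idx] = max(0.1, scores[idx] - penalty)  # discourage, keep positive
    return best_w, best_loss

def perturb(scale):
    """Toy low-level heuristic: uniform random perturbation of every weight."""
    return lambda w, rng: [x + rng.uniform(-scale, scale) for x in w]

# Toy usage: minimise the sum of squared weights of a 3-parameter "network".
loss = lambda w: sum(x * x for x in w)
w0 = [1.0, -2.0, 0.5]
w_best, l_best = hh_train(loss, w0, [perturb(0.5), perturb(0.1), perturb(0.01)])
```

Because candidate solutions are accepted only when they strictly improve the loss, the loop never degrades the best-so-far network, while the score updates gradually bias selection toward whichever low-level heuristic has been performing well in the current region of the search space.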

Description

Keywords

Artificial neural networks, Machine learning, Hyper-heuristics, Particle swarm optimization, Flower pollination algorithm, Differential evolution algorithm, Genetic Algorithm, Improved PSO, Evolutionary, Metaheuristics, Intelligence

Citation