Comparative Analysis of Backpropagation and Genetic Algorithms in Neural Network Training


Ayoub Hazrati
Shannon Kariuki
Ricardo Silva

Abstract

Research on artificial neural networks (ANNs) has advanced considerably, yet the optimal approach for training these networks remains a topic of debate. This study investigates the efficacy and computational efficiency of two prominent optimization techniques, backpropagation and genetic algorithms (GAs), in training traditional neural networks. Three experiments were conducted: the first trained a single-layer neural network on a simple mathematical function; the second used the diabetes dataset for regression analysis; and the third applied the iris dataset to multi-class classification. The networks were trained in Google Colab, leveraging generative AI tools to expedite experimentation. Results indicate that backpropagation consistently achieved lower mean square error (MSE) in shorter training times than GAs, particularly on high-dimensional data. GAs demonstrated robustness in escaping local minima, making them suitable for complex, noisy datasets where backpropagation might converge prematurely. The study concludes that while backpropagation is preferable for tasks requiring precision and speed, genetic algorithms offer valuable advantages in explorative scenarios, underscoring the importance of task-specific algorithm selection in neural network training.
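The first experiment described above — training a small network on a simple mathematical function with both optimizers — can be illustrated with a minimal sketch. This is not the authors' code; it assumes a NumPy environment, a toy target of sin(πx), a single hidden layer of 8 tanh units, plain full-batch gradient descent standing in for backpropagation, and a mutation-only genetic algorithm with elitism (the paper's exact hyperparameters and GA operators are not specified):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative stand-in for the paper's "simple function")
X = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(np.pi * X)

# The network's 25 parameters are packed into one flat vector so the GA
# can treat an individual as a single genome.
def unpack(w):
    return w[:8].reshape(1, 8), w[8:16], w[16:24].reshape(8, 1), w[24]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

def backprop_step(w, lr=0.1):
    """One full-batch gradient-descent step with hand-derived gradients."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)            # hidden activations
    e = (h @ W2 + b2) - y               # output error
    dout = 2 * e / len(X)               # d(MSE)/d(output)
    dW2 = h.T @ dout
    db2 = dout.sum()
    dz = (dout @ W2.T) * (1 - h ** 2)   # backprop through tanh
    dW1 = X.T @ dz
    db1 = dz.sum(axis=0)
    grad = np.concatenate([dW1.ravel(), db1, dW2.ravel(), [db2]])
    return w - lr * grad

def ga_generation(pop, elite=4, sigma=0.1):
    """Rank by fitness (low MSE), keep elites, refill with mutated copies."""
    order = np.argsort([mse(ind) for ind in pop])
    parents = pop[order[:elite]]
    children = [parents[rng.integers(elite)] + sigma * rng.standard_normal(25)
                for _ in range(len(pop) - elite)]
    return np.vstack([parents] + children)

# Backpropagation: one parameter vector, many gradient steps
w = 0.5 * rng.standard_normal(25)
for _ in range(2000):
    w = backprop_step(w)
bp_mse = mse(w)

# Genetic algorithm: a population of genomes, many generations
pop = 0.5 * rng.standard_normal((40, 25))
for _ in range(200):
    pop = ga_generation(pop)
ga_mse = min(mse(ind) for ind in pop)

print(f"backprop MSE: {bp_mse:.4f}  GA MSE: {ga_mse:.4f}")
```

The structural contrast matches the study's framing: backpropagation follows one trajectory driven by exact gradients, while the GA maintains a population and relies on selection pressure, which costs more evaluations per improvement but does not require the loss to be differentiable.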


Article Details

How to Cite
Hazrati, A., Kariuki, S., & Silva, R. (2024). Comparative Analysis of Backpropagation and Genetic Algorithms in Neural Network Training. International Journal of Health Technology and Innovation, 3(03), 18–25. https://doi.org/10.60142/ijhti.v3i03.04
Section
Research Article