The foundational neural network models—Perceptron, ADALINE, Hopfield Networks, and Boltzmann Machines—played a critical role in shaping the development of artificial intelligence. This project revisits these classic architectures, implementing them with modern tools and evaluating their performance on larger and more complex datasets. In doing so, we gain a deeper understanding of their practical functionality, their inherent limitations, and the ways in which they influenced contemporary neural networks.
The foundational papers on the Perceptron [1], Hopfield Networks [3], and Boltzmann Machines [4] introduced theoretical models but, owing to the computational constraints of their time, lacked reproducible code and practical implementations. While modern efforts have revisited these models, comprehensive implementations evaluated on large-scale datasets remain limited. This project bridges that gap by reimplementing these classic models and comparing them with modern counterparts, using contemporary tools such as NumPy for numerical computation, Pandas for dataset management, Matplotlib for visualization, and Scikit-learn for data preprocessing and performance evaluation.
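As a concrete illustration of this approach, the sketch below shows a minimal Rosenblatt-style perceptron written with NumPy and evaluated with Scikit-learn utilities. It is a simplified example, not the project's actual implementation; the Perceptron class, the learning rate, the epoch count, and the synthetic dataset from make_classification are all illustrative choices made here for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


class Perceptron:
    """Rosenblatt-style perceptron: step activation with the
    classic error-driven weight update rule."""

    def __init__(self, n_features, lr=0.1, epochs=20):
        self.w = np.zeros(n_features)  # weight vector
        self.b = 0.0                   # bias term
        self.lr = lr                   # learning rate (illustrative value)
        self.epochs = epochs           # training passes (illustrative value)

    def predict(self, X):
        # Step activation: output 1 if the weighted sum exceeds 0, else 0.
        return (X @ self.w + self.b > 0).astype(int)

    def fit(self, X, y):
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):
                # Error is nonzero only on misclassified samples,
                # so weights change only when the prediction is wrong.
                error = yi - int(xi @ self.w + self.b > 0)
                self.w += self.lr * error * xi
                self.b += self.lr * error
        return self


# Evaluate on a synthetic, roughly linearly separable dataset.
X, y = make_classification(n_samples=500, n_features=4, n_informative=4,
                           n_redundant=0, class_sep=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
model = Perceptron(n_features=X.shape[1]).fit(scaler.transform(X_train), y_train)
accuracy = (model.predict(scaler.transform(X_test)) == y_test).mean()
print(f"Test accuracy: {accuracy:.3f}")
```

Because the weights move only on misclassification, a perceptron of this form converges on linearly separable data but fails on problems such as XOR, one of the classic limitations this project examines on larger datasets.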
1] The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain by Frank Rosenblatt (1958)
2] Probabilistic Neural Networks and the Polynomial Adaline as Complementary Techniques for Classification by Donald F. Specht (1990)
3] Neural Networks and Physical Systems with Emergent Collective Computational Abilities by J. J. Hopfield (1982)
4] A Learning Algorithm for Boltzmann Machines by David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski (1985)