2023, issue 3, p. 69-80
Received 08.09.2023; Revised 19.09.2023; Accepted 26.09.2023
Published 29.09.2023; First Online 19.10.2023
https://doi.org/10.34229/2707-451X.23.3.6
Review and Analysis of the Development of Artificial Neural Networks
V.M. Glushkov Institute of Cybernetics of the NAS of Ukraine, Kyiv
Introduction. Intelligent cyber-physical systems cannot be created without an understanding of how scientific thought on artificial neural networks has developed. The main task of this article is to study and analyze the concept of intelligent technologies based on artificial neural networks. Knowledge of how ideas about artificial neural networks arose, took shape, and developed is of particular importance for scientists, developers, and design engineers. The article is structured as follows: first, different approaches to the problem of reproducing the functions of the brain artificially are outlined; the corresponding views are, in turn, divided into monotypic and genotypic models. The next part analyzes the development of artificial intelligence systems, presents some facts about that development, and clarifies the features of scientific opinion on artificial neural networks. Various concepts and views are considered that make it possible to reproduce the computational process for a more detailed analysis and synthesis of algorithms of intelligent systems. In the part on the state of the theory, attention is drawn to the fact that researchers who could not obtain exact analytical answers added to the scientific toolkit methods of experimental modeling, either on digital machines or on mechanical models. It is also noted that a model is not the result of research but only a starting point for analyzing its behavior.
In the part on artificial neural networks, the author covers the following concepts: the McCulloch-Pitts logical calculus of computation in neural networks; the problem of assigning confidence coefficients; the principle of self-organization, first illustrated with the help of computer simulations; the principle of competitive learning; Kohonen self-organizing maps; multilayer feedforward networks with radial basis functions, which became an alternative to the multilayer perceptron; and the support vector machine. As a result, the author obtains a complete picture of the genesis of intelligent systems and of the technology of artificial neural networks. Tracing the development of scientific thought gives a clear understanding of the features of intelligent technologies built with artificial neural networks, of their functioning and computation.
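Two of the concepts surveyed above can be illustrated with minimal sketches (these are illustrative examples, not code from the article): a McCulloch-Pitts threshold neuron, and a single winner-take-all update step in the spirit of competitive learning as used in Kohonen self-organizing maps. The function names and parameters are assumptions chosen for the sketch.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) iff the weighted input sum reaches the threshold,
    as in the McCulloch-Pitts logical calculus."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0


# Logical AND realized as a threshold unit: both inputs must be active.
assert mcculloch_pitts([1, 1], [1, 1], threshold=2) == 1
assert mcculloch_pitts([1, 0], [1, 1], threshold=2) == 0


def competitive_step(prototypes, x, lr=0.5):
    """One winner-take-all step: find the prototype closest to sample x
    and move it toward x by learning rate lr; return the winner index."""
    winner = min(range(len(prototypes)),
                 key=lambda i: sum((p - xi) ** 2
                                   for p, xi in zip(prototypes[i], x)))
    prototypes[winner] = [p + lr * (xi - p)
                          for p, xi in zip(prototypes[winner], x)]
    return winner
```

A full self-organizing map would also update the winner's neighbors on a lattice; the sketch keeps only the competitive core.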
Keywords: artificial neural networks, perceptron, McCulloch-Pitts theory, intelligent computer systems, cyber-physical agent, mobile robot, robotics.
Cite as: Bilokon O. Review and Analysis of the Development of Artificial Neural Networks. Cybernetics and Computer Technologies. 2023. 3. P. 68–80. (in Ukrainian) https://doi.org/10.34229/2707-451X.23.3.6
References
1. Kohler W. Relational Determination in Perception. Cerebral mechanisms in behavior / ed. L. A. Jeffress. New York : Wiley, 1951. P. 200–243.
2. Bullock T.H. Neuron Doctrine and electrophysiology. Science. 1959. Vol. 129. N 3355. P. 997–1002. https://doi.org/10.1126/science.129.3355.997
3. Pitts W., McCulloch W.S. How we know universals: The perception of auditory and visual forms. Bull. of Math. Biophys. 1947. Vol. 9. P. 127–147. https://doi.org/10.1007/BF02478291
4. Turing A.M. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. London Math. Soc. 1936. Ser. 2. Vol. 42. P. 230–265; 1937. Vol. 43. P. 544–546. https://doi.org/10.1017/S002248120003958X
5. Rashevsky N. Mathematical Biophysics; Physicomathematical Foundations of Biology. Chicago : Univ. Chicago Press, 1938. 340 p.
6. McCulloch W.S., Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull. of Math. Biophys. 1943. Vol. 5. P. 115–133. https://doi.org/10.1007/BF02478259
7. Lashley K.S. Brain mechanisms and intelligence: a quantitative study of injuries to the brain. Chicago : Univ. Chicago Press, 1929. XIV, 186 p. : XI pl., ill., diagr. https://doi.org/10.1037/10017-000
8. Voas R.B. A description of the astronaut’s task in project mercury. Human Factors. 1961. Vol. 3. N 3. P. 149–165. https://doi.org/10.1177/001872086100300301
9. Licklider J.C.R. Man-computer symbiosis. IRE Trans. on Human Factors in Electronics. 1960. Vol. HFE-1. P. 4–11. https://doi.org/10.1109/THFE2.1960.4503259
10. Prosser R.B. Morrison, Charles. Dictionary of National Biography, 1885–1900. Vol. 39. URL: https://en.wikisource.org/wiki/Dictionary_of_National_Biography,_1885-1900/Morrison,_Charles
11. Rall W. Some historical notes. Computational Neuroscience / ed. E. L. Schwartz. Cambridge : MIT Press, 1990. P. 3–8.
12. Minsky M.L. Theory of neural-analog reinforcement systems and its application to the brain-model problem. Thesis. Diss. Princeton : Princeton Univ., 1954. 24 p.
13. Caianiello E. Outline of a theory of thought-processes and thinking machines. Journal of Theoretical Biology. 1961. Vol. 1. No. 2. P. 204–235. https://doi.org/10.1016/0022-5193(61)90046-7
14. Kohonen T. The self-organizing map. Proc. of the IEEE. 1990. Vol. 78. N. 9. P. 1464–1480. https://doi.org/10.1109/5.58325
15. Neumann J. von. Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies / ed. C.E. Shannon, J. McCarthy. Princeton : Princeton Univ. Press, 1956. P. 43–98. https://doi.org/10.1515/9781400882618-003
16. Rosenblatt F. The Perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958. Vol. 65. N. 6. P. 386–408. https://doi.org/10.1037/h0042519
17. Rosenblatt F. On the convergence of reinforcement procedures in simple perceptrons. Cornell Aeronautical Laboratory Report VG-1196-G-4. Buffalo, NY. February 1960.
18. Glushkov V.M. The theory of instruction for a class of discrete perceptrons. USSR Computational Mathematics and Mathematical Physics. 1963. Vol. 2. No. 2. P. 338–355. https://doi.org/10.1016/0041-5553(63)90410-5
19. Glushkov V.M. On the question of self-instruction in the perceptron. USSR Computational Mathematics and Mathematical Physics. 1963. Vol. 2. No. 6. P. 1325–1335. https://doi.org/10.1016/0041-5553(63)90347-1
20. Ivakhnenko A.G. Heuristic self-organization system in technical cybernetics. Kyiv: Tehnika, 1971. 372 p. (in Russian)
21. Willshaw D.J., Von Der Malsburg C. How patterned neural connections can be set up by self-organization. Proc. of the Roy. Soc. of London. Ser. B: Biol. sciences. 1976. Vol. 194. N. 1117. P. 431–445. https://doi.org/10.1098/rspb.1976.0087
22. Minsky M.L. Steps toward artificial intelligence. Proc. of the Inst. of Radio Eng. 1961. Vol. 49. N. 1. P. 8–30. (Reprinted in: Computers and Thought / ed. E. A. Feigenbaum, J. Feldman. New York : McGraw-Hill, 1963. P. 406–450.)
23. Grossberg S. Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biol. Cybernetics. 1976. Vol. 23. N. 3. P. 121–134. https://doi.org/10.1007/BF00344744
24. Grossberg S. How does a brain build a cognitive code? Psychol. Rev. 1980. Vol. 87. N. 1. P. 1–51. https://doi.org/10.1037/0033-295X.87.1.1
25. Hopfield J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. of the Nat. Acad. of Sciences. 1982. Vol. 79. N. 8. P. 2554–2558. https://doi.org/10.1073/pnas.79.8.2554
26. Cragg B.G., Temperley H.N.V. Memory: The analogy with ferromagnetic hysteresis. Brain. 1955. Vol. 78. Part II. P. 304–316. https://doi.org/10.1093/brain/78.2.304
27. Grossberg S. A prediction theory for some nonlinear functional-difference equations. J. of Math. Analysis and Applications. 1968. Vol. 21. Iss. 3. P. 643–694. Vol. 22. Iss. 3. P. 490–522. https://doi.org/10.1016/0022-247X(68)90269-2
28. Amari S. Characteristics of random nets of analog neuron-like elements. IEEE Transactions on Systems, Man, and Cybernetics. 1972. Vol. SMC-2. N. 5. P. 643–657. https://doi.org/10.1109/TSMC.1972.4309193
29. Kohonen T. Self-organized formation of topologically correct feature maps. Biol. Cybernetics. 1982. Vol. 43. N. 1. P. 59–69. https://doi.org/10.1007/BF00337288
30. Kirkpatrick S., Gelatt C.D. Jr., Vecchi M.P. Optimization by simulated annealing. Science. New Series. 1983. Vol. 220. N. 4598. P. 671–680. https://doi.org/10.1126/science.220.4598.671
31. Aizerman M.A., Braverman E.M., Rozonoer L.I. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control. 1964. Vol. 25. P. 821–837.
32. Duda R.O., Hart P.E. Pattern Classification and Scene Analysis. New York : Wiley, 1973.
ISSN 2707-451X (Online)
ISSN 2707-4501 (Print)