NEURAL CELLS IN ARTIFICIAL INTELLIGENCE

Authors

  • Jamolova Gulbanbegim Muzaffar qizi, Tashkent University of Information Technologies named after Muhammad al-Khwarizmi
  • Sayfullayev G'ayrat Serofiddin o'g'li, Tashkent University of Information Technologies named after Muhammad al-Khwarizmi

Keywords:

artificial neuron, artificial intelligence, machine learning, artificial superintelligence, intellectual activity.

Abstract

This article provides an overview of the general classification and structure of artificial neural networks and the tasks they solve. It also discusses the areas in which artificial neural networks are applied.
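To make the structure of an artificial neuron concrete, the following sketch implements a single neuron (weighted sum plus sigmoid activation) trained by gradient descent, the optimization method surveyed in the references. The AND-gate dataset, learning rate, and iteration count are illustrative assumptions, not taken from the article.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset (assumption): inputs and targets for the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # input weights of the neuron
b = 0.0                  # bias term
lr = 0.5                 # learning rate (illustrative value)

for _ in range(5000):
    out = grad = sigmoid(X @ w + b)      # forward pass
    grad = out - y                       # gradient of cross-entropy w.r.t. pre-activation
    w -= lr * (X.T @ grad) / len(X)      # gradient-descent weight update
    b -= lr * grad.mean()                # gradient-descent bias update

pred = (sigmoid(X @ w + b) > 0.5).astype(int)
print(pred)  # the neuron should recover the AND truth table
```

Because AND is linearly separable, a single neuron suffices; non-separable problems (such as XOR) require the multi-layer networks the article classifies.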


References

Jordan, J. Intro to optimization in deep learning: Gradient Descent / J. Jordan // Paperspace. Series: Optimization. – 2018. – URL: https://blog.paperspace.com/intro-to-optimization-in-deep-learning-gradient-descent/

Scikit-learn – machine learning in Python. – URL: http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html

Keras documentation: optimizers. – URL: https://keras.io/optimizers

Ruder, S. An overview of gradient descent optimization algorithms / S. Ruder // Cornell University Library. – 2016. – URL: https://arxiv.org/abs/1609.04747

Robbins, H. A stochastic approximation method / H. Robbins, S. Monro // The Annals of Mathematical Statistics. – 1951. – Vol. 22. – P. 400–407.

Kukar, M. Cost-Sensitive Learning with Neural Networks / M. Kukar, I. Kononenko // Machine Learning and Data Mining : proceedings of the 13th European Conference on Artificial Intelligence. – 1998. – P. 445–449.

Duchi, J. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization / J. Duchi, E. Hazan, Y. Singer // The Journal of Machine Learning Research. – 2011. – Vol. 12. – P. 2121–2159.

Zeiler, M. D. ADADELTA: An Adaptive Learning Rate Method / M. D. Zeiler // Cornell University Library. – 2012. – URL: https://arxiv.org/abs/1212.5701

Kingma, D. P. Adam: A Method for Stochastic Optimization / D. P. Kingma, J. Ba // Cornell University Library. – 2014. – URL: https://arxiv.org/abs/1412.6980

Goodfellow, I. Deep Learning / I. Goodfellow, Y. Bengio, A. Courville. – Moscow : DMK Press, 2018. – 652 p. (Russian translation.)

Fletcher, R. Practical methods of optimization / R. Fletcher. – Wiley, 2000. – 450 p.

Schraudolph, N. N. A Stochastic Quasi-Newton Method for Online Convex Optimization / N. N. Schraudolph, J. Yu, S. Günter // Statistical Machine Learning. – 2007. – URL: http://proceedings.mlr.press/v2/schraudolph07a/schraudolph07a.pdf

Ruder, S. Optimization for Deep Learning Highlights in 2017 / S. Ruder // ruder.io. – 2017. – URL: http://ruder.io/deep-learning-optimization-2017


Published

Updated on 2024-07-02
