On the evolution of recurrent neural systems

Bibliographic Details
Main Authors: Gennadii Abramov, Ivan Gushchin, Tetiana Sirenka
Format: Article
Language: Ukrainian
Published: Igor Sikorsky Kyiv Polytechnic Institute, 2024-12-01
Series: Sistemnì Doslìdženâ ta Informacìjnì Tehnologìï
Online Access: http://journal.iasa.kpi.ua/article/view/322523
Description
Summary: The article traces the evolution of neural network architectures, first of the recurrent type and then of those built on the attention mechanism. It shows how the approaches changed and how developers' experience was enriched along the way. Significantly, the neural networks themselves learn to capture the developers' intentions and, in effect, correct errors and shortcomings in earlier technologies and architectures. Using new active elements in place of artificial neurons has expanded the scope of connectionist networks and led to new structures, Kolmogorov–Arnold Networks (KANs), which may become serious competitors to networks built from artificial neurons.
ISSN: 1681-6048, 2308-8893