Two architectures of neural networks in distance approximation

Bibliographic Details
Main Authors: Wiktor Wojtyna, Jakub Sławiński, Radosław Tonga
Format: Article
Language: English
Published: Gdańsk University of Technology 2025-07-01
Series: TASK Quarterly
Online Access: https://journal.mostwiedzy.pl/TASKQuarterly/article/view/3416
Description
Summary: In this research paper, we examine recurrent and linear neural networks to determine the relationship between the amount of data needed to achieve generalization and data dimensionality, as well as the relationship between data dimensionality and the necessary computational complexity. To achieve this, we also explore the optimal topologies for each network, discuss potential problems in their training, and propose solutions. In our experiments, the relationship between the amount of data needed to achieve generalization and data dimensionality was linear for feed-forward neural networks and exponential for recurrent ones. However, the required computational complexity appears to grow exponentially with increasing dimensionality. We also compared the networks’ accuracy in both distance approximation and classification to the most popular alternative, Siamese networks, which outperformed both linear and recurrent networks in classification despite having lower accuracy in exact distance approximation.
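
The abstract compares feed-forward (linear) and recurrent networks against Siamese networks for distance approximation. As a reading aid only, and not code from the paper, the sketch below illustrates the general Siamese pattern the abstract refers to: a shared encoder embeds both inputs, and the distance is predicted as the Euclidean distance between the embeddings. The layer sizes, the PyTorch framework, and the training target are assumptions made purely for illustration.

# Minimal sketch (not the authors' implementation): a Siamese network that
# maps two inputs through a shared feed-forward encoder and approximates
# the distance between them. Layer sizes and framework are assumptions.
import torch
import torch.nn as nn

class SiameseDistance(nn.Module):
    def __init__(self, input_dim: int, embed_dim: int = 32):
        super().__init__()
        # One encoder applied to both inputs; this weight sharing is what
        # makes the network "Siamese".
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Approximate the distance between x1 and x2 as the Euclidean
        # distance between their learned embeddings.
        e1, e2 = self.encoder(x1), self.encoder(x2)
        return torch.norm(e1 - e2, dim=-1)

# Usage sketch: regress the predicted distance onto the true distance.
model = SiameseDistance(input_dim=8)
x1, x2 = torch.randn(16, 8), torch.randn(16, 8)
target = torch.norm(x1 - x2, dim=-1)  # true Euclidean distance
loss = nn.functional.mse_loss(model(x1, x2), target)
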
ISSN: 1428-6394