Showing 21 - 40 results of 70 for search 'tensor network generalization'
  1. 21

    A generalized higher-order correlation analysis framework for multi-omics network inference. by Weixuan Liu, Katherine A Pratte, Peter J Castaldi, Craig Hersh, Russell P Bowler, Farnoush Banaei-Kashani, Katerina J Kechris

    Published 2025-04-01
    “…In this work, we have developed a novel multi-omics network analysis pipeline called Sparse Generalized Tensor Canonical Correlation Analysis Network Inference (SGTCCA-Net) that can effectively overcome these limitations. …”
    Get full text
    Article
  2. 22
  3. 23

    The weak equivalence principle and the Dirac constant: A result from the holographic principle by Eiji Konishi

    Published 2025-06-01
    “…This result follows from an equation between the Euclidean and Lorentzian world-line actions of a massive particle divided by the Dirac constant, via the Wick rotation, by using the Euclidean and Lorentzian actions of a holographic tensor network, whose quantum state is classicalized by introducing the superselection rule.…”
  4. 24

    Hankel Tensor Subspace Representation for Remotely Sensed Image Fusion by Fei Ma, Qiang Qu, Feixia Yang, Guangxian Xu

    Published 2025-01-01
    “…Furthermore, exerting the $\ell_1$ norm on the core tensor is conducted to promote the sparsity for generalization enhancement. …”
  5. 25

    Representing Born effective charges with equivariant graph convolutional neural networks by Alex Kutana, Koji Shimizu, Satoshi Watanabe, Ryoji Asahi

    Published 2025-05-01
    “…Applications to tensors of atomic Born effective charges in diverse materials including perovskite oxides, Li3PO4, and ZrO2, are demonstrated, and good performance and generalization ability is obtained.…”
  6. 26

    Intractable prefrontal and limbic white matter network disruption in adolescents with drug-naïve nonsuicidal self-injury by Yuwei Chen, Xiongxiong Yang, Kaike Liao, Rui Yu, Xinyue Chen, Wenjing Zhang, Nian Liu

    Published 2025-07-01
    “…Network-based statistic (NBS) correction methods were used to assess structural connectivity within this network, and a generalized linear model was used to compare network metrics between NSSI and HCs, whereas paired t-tests were used to compare the same patients pre- and post-treatment. …”
  7. 27

    A general two-group constants estimator for 17x17 PWR assembly configurations using artificial neural networks by Gökhan Pediz, M. Alim Kırışık

    Published 2025-06-01
    “…In this study, a preliminary general two-group constants predictor using artificial neural networks (ANNs) for pressurized water reactor (PWR) based assembly designs is established. …”
  8. 28

    Emerging generalization advantage of quantum-inspired machine learning in the diagnosis of hepatocellular carcinoma by Domenico Pomarico, Alfonso Monaco, Nicola Amoroso, Loredana Bellantuono, Antonio Lacalamita, Marianna La Rocca, Tommaso Maggipinto, Ester Pantaleo, Sabina Tangaro, Sebastiano Stramaglia, Roberto Bellotti

    Published 2025-03-01
    “…We consider two categories of such algorithms: parameterized quantum circuits (PQC) and tensor networks. The variational optimization of PQCs achieves better accuracy than classical counterparts on the independent test set, reaching an advantage equal to 11% in accuracy, while tensor networks offer equivalent performance with fewer parameters.…”
  9. 29
  10. 30

    A Multiscale CNN-Based Intrinsic Permeability Prediction in Deformable Porous Media by Yousef Heider, Fadi Aldakheel, Wolfgang Ehlers

    Published 2025-02-01
    “…The methodology involves four steps: (1) constructing a dataset of CT images from Bentheim sandstone at varying volumetric strain levels; (2) conducting pore-scale flow simulations using the lattice Boltzmann method (LBM) to obtain permeability data; (3) training the CNN model with processed CT images as inputs and permeability tensors as outputs; and (4) employing techniques like data augmentation to enhance model generalization. …”
  11. 31

    Deep Learning Spinal Cord Segmentation Based on B0 Reference for Diffusion Tensor Imaging Analysis in Cervical Spondylotic Myelopathy by Shuoheng Yang, Ningbo Fei, Junpeng Li, Guangsheng Li, Yong Hu

    Published 2025-06-01
    “…Diffusion Tensor Imaging (DTI) is a crucial imaging technique for accurately assessing pathological changes in Cervical Spondylotic Myelopathy (CSM). …”
  12. 32
  13. 33

    Quantum networks theory by Pablo Arrighi, Amélia Durbec, Matt Wilson

    Published 2024-10-01
    “…Second, tensors and traceouts are generalized, so that systems can be partitioned according to almost arbitrary logical predicates in a robust manner. …”
  14. 34

    COMPARISON OF CRUSTAL DEFORMATION RATES ESTIMATED FROM SEISMIC AND GPS DATA ON THE BISHKEK GEODYNAMIC POLYGON by N. A. Sycheva, A. N. Mansurov

    Published 2017-12-01
    “…From the velocity gradient tensors we calculate the strain rate tensors (Fig. 5) and then the rate of changes of the area (meterage) (Fig. 7). …”
  15. 35

    Research on Wellbore Trajectory Prediction Based on a Pi-GRU Model by Hanlin Liu, Yule Hu, Zhenkun Wu

    Published 2025-07-01
    “…To solve these problems, this study proposes a parallel input gated recurrent unit (Pi-GRU) model based on the TensorFlow framework. The GRU network captures the temporal dependencies of sequence data (such as dip angle and azimuth angle), while the BP neural network extracts deep correlations from non-sequence features (such as stratum lithology), thereby achieving multi-source data fusion modeling. …”
  16. 36

    Triangular Mesh Surface Subdivision Based on Graph Neural Network by Guojun Chen, Rongji Wang

    Published 2024-12-01
    “…In recent years, a nonlinear subdivision method that uses neural network methods, called neural subdivision (NS), has been proposed. …”
  17. 37

    Routing Algorithm for Power Communication Networks Based on Service Differentiated Transmission Requirements by Songping XUE, Dequan GAO, Ziyan ZHAO, Yuqian LIN, Zejing GUANG, Dawei ZHANG

    Published 2024-11-01
    “…The electric power communication network, pivotal in ensuring the stable operation of the power grid, is tasked with transmitting control instructions and collecting status data. …”
  18. 38

    Optimizing Artificial Neural Network Learning Using Improved Reinforcement Learning in Artificial Bee Colony Algorithm by Taninnuch Lamjiak, Booncharoen Sirinaovakul, Siriwan Kornthongnimit, Jumpol Polvichai, Aysha Sohail

    Published 2024-01-01
    “…The results indicate that when the improved R-ABC is applied to ANNs, it outperforms heuristic search optimization, especially as the network size expands. Although SGD and Adam achieved faster execution times with TensorFlow, the study suggests that using PSO and improved R-ABC can improve model accuracy and efficiency. …”
  19. 39

    Developmental maturation of dynamic causal control signals in higher-order cognition: a neurocognitive network model. by Kaustubh Supekar, Vinod Menon

    Published 2012-02-01
    “…Here we use a novel multimodal neurocognitive network-based approach combining task-related fMRI, resting-state fMRI and diffusion tensor imaging (DTI) to investigate the maturation of control processes underlying problem solving skills in 7-9 year-old children. …”
  20. 40

    Comparative Evaluation of Modified Wasserstein GAN-GP and State-of-the-Art GAN Models for Synthesizing Agricultural Weed Images in RGB and Infrared Domain by Shubham Rana, Matteo Gatti

    Published 2025-06-01
    “…This study highlights the customized model's key features:
    • Produces a 128 × 7 × 7 tensor, optimizes feature map size for subsequent layers, with two layers using 4 × 4 kernels and 128 and 64 filters for upsampling.
    • Uses 3 × 3 kernels in all convolutional layers to capture fine-grained spatial features, incorporates batch normalization for training stability, and applies dropout to reduce overfitting and improve generalization.…”