Showing 1 - 20 results of 110 for search '(reduction OR education) error encoding', query time: 0.10s
  1.
  2.

    Fully-Gated Denoising Auto-Encoder for Artifact Reduction in ECG Signals by Ahmed Shaheen, Liang Ye, Chrishni Karunaratne, Tapio Seppänen

    Published 2025-01-01
    “…The FGDAE showed the best performance on all seven error metrics used in our work in different noise intensities and artifact combinations compared with state-of-the-art algorithms. …”
    Get full text
    Article
  3.

    Variational AutoEncoder for synthetic insurance data by Charlotte Jamotton, Donatien Hainaut

    Published 2024-12-01
    “…This study introduces novel insights into utilising VAEs for unsupervised learning tasks in actuarial science, including dimension reduction and synthetic data generation. We propose a VAE model with a quantile transformation for continuous (latent) variables, a reconstruction loss that combines categorical cross-entropy and mean squared error, and a KL divergence-based regularisation term. …”
    Get full text
    Article
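The composite loss this abstract describes — mean squared error for continuous features, categorical cross-entropy for categorical ones, and a KL-divergence regularisation term — can be sketched in plain Python. The function name and interface below are illustrative assumptions, not taken from the paper:

```python
import math

def vae_loss(x_cont, x_cont_hat, x_cat, x_cat_probs, mu, log_var):
    """Illustrative composite VAE loss: MSE (continuous features) +
    categorical cross-entropy (one-hot categorical features) + KL
    divergence between a diagonal-Gaussian posterior N(mu, exp(log_var))
    and a standard-normal prior."""
    mse = sum((a - b) ** 2 for a, b in zip(x_cont, x_cont_hat)) / len(x_cont)
    eps = 1e-12  # guard against log(0)
    cce = -sum(t * math.log(p + eps) for t, p in zip(x_cat, x_cat_probs))
    kl = -0.5 * sum(1.0 + lv - m ** 2 - math.exp(lv)
                    for m, lv in zip(mu, log_var))
    return mse + cce + kl

# A perfect reconstruction with a posterior equal to the prior gives ~0 loss:
loss = vae_loss([1.0, 2.0], [1.0, 2.0], [0, 1, 0], [0.0, 1.0, 0.0], [0.0], [0.0])
```

In practice each term is usually weighted (e.g. a beta factor on the KL term) to balance reconstruction fidelity against latent regularisation.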
  4.
  5.
  6.

    Error Analysis of Trigonometry Students in a Technological University by Catalino L. Centillas Jr., Christian Caben M. Larisma

    Published 2016-06-01
    “…The purpose of the study is to analyze the different errors that students commit in Trigonometry. The sample consists of 24 teacher education students and 25 information technology students. …”
    Get full text
    Article
  7.

    A Structured AHP-Based Approach for Effective Error Diagnosis in Mathematics: Selecting Classification Models in Engineering Education by Milton Garcia Tobar, Natalia Gonzalez Alvarez, Margarita Martinez Bustamante

    Published 2025-06-01
    “…Newman’s focus on reading, comprehension, transformation, and encoding addresses the most common errors encountered in the early stages of mathematical learning. …”
    Get full text
    Article
  8.

    ANALYSIS OF STUDENTS’ ERRORS WITH NEWMAN'S ERROR ANALYSIS ON VIBRATION, WAVES AND SOUNDS CONCEPT by Prita Dwi Nurlaeli, Arif Widiyatmoko

    Published 2023-01-01
    “…For question number four: reading error 0%, comprehension error 33.33%, transformation error 19.04%, and process skills and encoding errors 0%. …”
    Get full text
    Article
  9.

    TYPE OF ERROR IN COMPLETING MATHEMATICAL PROBLEM BASED ON NEWMAN’S ERROR ANALYSIS (NEA) AND POLYA THEORY by Widodo Winarso, Sirojudin Wahid, Rizkiah Rizkiah

    Published 2022-01-01
    “…The study results based on Newman's Error Analysis are reading errors at 1%, comprehension errors at 0%, transformation errors at 3%, process-skill errors at 5%, and encoding errors at 7%. …”
    Get full text
    Article
  10.

    Effects of aging on word position encoding in Chinese reading by Zhiwei Liu, Yan Li, Jingxin Wang

    Published 2025-05-01
    “…This study examined age-related differences in the flexibility of word position encoding by investigating the transposed-word effect in young and older adults. …”
    Get full text
    Article
  11.

    Achieving Computational Gains with Quantum Error-Correction Primitives: Generation of Long-Range Entanglement Enhanced by Error Detection by Haoran Liao, Gavin S. Hartnett, Ashish Kakkar, Adrian Tan, Michael Hush, Pranav S. Mundada, Michael J. Biercuk, Yuval Baum

    Published 2025-05-01
    “…In this paper, we demonstrate that the strategic application of QEC primitives without logical encoding can yield significant advantages on superconducting processors—relative to any alternative error-reduction strategy—while only requiring a modest overhead. …”
    Get full text
    Article
  12.
  13.
  14.

    Any-to-any voice conversion using representation separation auto-encoder by Zhihua JIAN, Zixu ZHANG

    Published 2024-02-01
    “…Given the difficulty of separating speaker characteristics from semantic content in any-to-any voice conversion under a non-parallel corpus, which leads to unsatisfactory performance, a voice conversion method called RSAE-VC (representation separation auto-encoder voice conversion) was proposed. The speaker's characteristics in the speech are treated as time-invariant and the content information as time-variant, and instance normalization and an activation guidance layer are used in the encoder to separate them from each other. The decoder then synthesizes the converted speech from the content information of the source speech and the speaker characteristics of the target. Experimental results demonstrate that, compared with the AGAIN-VC (activation guidance and adaptive instance normalization voice conversion) method, RSAE-VC reduces Mel cepstral distance by an average of 3.11% and the root mean square error of pitch frequency by 2.41%, and improves MOS by 5.22% and ABX by 8.45%. In RSAE-VC, a self-content loss is applied so that the converted speech retains more content information, and a self-speaker loss is used to better separate the speaker characteristics from the speech, ensuring that as little speaker information as possible is left in the content representation and improving conversion performance.…”
    Get full text
    Article
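The encoder trick this abstract relies on — instance normalization stripping time-invariant (speaker) statistics while keeping time-varying content — can be sketched as follows. The helper name and the list-of-frames layout are assumptions for illustration:

```python
import math

def instance_norm(frames, eps=1e-5):
    """Normalize each feature dimension over the time axis, removing the
    per-utterance mean and variance -- the time-invariant statistics that
    carry speaker identity -- while preserving the time-varying shape of
    the content. `frames` is a list of per-frame feature vectors."""
    dim = len(frames[0])
    t = len(frames)
    means = [sum(f[d] for f in frames) / t for d in range(dim)]
    stds = [math.sqrt(sum((f[d] - means[d]) ** 2 for f in frames) / t + eps)
            for d in range(dim)]
    return [[(f[d] - means[d]) / stds[d] for d in range(dim)] for f in frames]

# After normalization each dimension has (near) zero mean and unit variance,
# regardless of which speaker's statistics the input carried.
normed = instance_norm([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
```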
  15.

    Design of Radix-8 Unsigned Bit Pair Recoding Algorithm-Based Floating-Point Multiplier for Neural Network Computations by J. Jean Jenifer Nesam, S. Sankar Ganesh

    Published 2025-01-01
    “…The partial product rows are reduced from $n$ to $\frac{n}{4}$ for an $n \times n$ binary multiplier using the BPR algorithm with parallel-processed partial product reduction. The new algorithm performs partial product row reduction without the 2’s complement, Negative Encoding (NE), and Sign Extension (SE) steps that Booth-recoding-based multiplication requires; these computations are not needed for unsigned floating-point multiplication. …”
    Get full text
    Article
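The row-reduction idea behind recoded multipliers can be illustrated with standard radix-4 (modified Booth) recoding, which roughly halves the number of partial products from n to ⌈(n+1)/2⌉. This is a generic sketch of the recoding principle only; the paper's radix-8 BPR variant (reduction to n/4) is its own algorithm and is not reproduced here:

```python
def booth_radix4_digits(y, n_bits):
    """Standard radix-4 (modified Booth) recoding of an unsigned
    multiplier: overlapping 3-bit windows yield digits in {-2,...,2},
    so an n-bit multiplier produces about n/2 partial products
    instead of n."""
    bits = [(y >> i) & 1 for i in range(n_bits)] + [0, 0]  # zero-extend (unsigned)
    digits, prev = [], 0
    for i in range(0, n_bits + 1, 2):
        digits.append(-2 * bits[i + 1] + bits[i] + prev)
        prev = bits[i + 1]
    return digits

def booth_multiply(x, y, n_bits=8):
    """Sum the recoded partial products: product = sum(d_i * x * 4**i)."""
    return sum(d * x * 4 ** i
               for i, d in enumerate(booth_radix4_digits(y, n_bits)))
```

For an 8-bit multiplier this produces 5 signed digits in place of 8 partial-product rows; higher radices trade fewer rows for harder-to-generate multiples of the multiplicand.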
  16.
  17.

    Research on multi-scenario adaptive acoustic encoders based on neural architecture search by Yiliang Wu, Xuliang Luo, Fengchan Guo, Tinghui Lu, Cuimei Liu

    Published 2024-12-01
    “…The SAAE method achieves an average error rate reduction of more than 5% compared with existing acoustic encoders, highlighting its capability to deeply analyze speech features in specific scenarios and design high-performance acoustic encoders in a targeted manner.…”
    Get full text
    Article
  18.

    Design of intelligent English translation teaching system combined with bidirectional encoder representation by Shanshan Xu

    Published 2025-12-01
    “…With the continuous progress of artificial intelligence technology, an intelligent education system has become an essential means to improve teaching efficiency and quality. …”
    Get full text
    Article
  19.
  20.

    Visual impairment prevention by early detection of diabetic retinopathy based on stacked auto-encoder by Shagufta Almas, Fazli Wahid, Sikandar Ali, Ahmed Alkhyyat, Kamran Ullah, Jawad Khan, Youngmoon Lee

    Published 2025-01-01
    “…Unlike traditional CNN approaches, our method offers improved reliability by reducing time complexity, minimizing errors, and enhancing noise reduction. Leveraging a comprehensive dataset from KAGGLE containing 35,126 retinal fundus images representing one healthy (normal) stage and four DR stages, our proposed model demonstrates superior accuracy compared to existing deep learning algorithms. …”
    Get full text
    Article