FANT: Flexible Attention-Shifting Network Telemetry
Main Authors:
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/15/2/892
Summary: As data center networks grow in scale and complexity, the active in-band network telemetry (AINT) system collects a broader range of network status metrics to provide comprehensive visibility for AINT-related network applications, but this also leads to higher measurement costs. To address this issue, we introduce Flexible Attention-Shifting Network Telemetry (FANT), which dynamically focuses on critical links in each measurement cycle. Specifically, FANT employs a metric categorization strategy that divides all measurement metrics into two categories: basic measurements, which are lightweight but cover fewer metrics, and detailed measurements, which are comprehensive but incur higher overhead. Based on the analysis of the previous cycle's measurements, FANT identifies which links are suspicious and then activates certain probe traces through an attention-shifting mechanism to collect detailed measurements of these links in the current cycle. To further save bandwidth, we model the attention-shifting process and apply heuristic algorithms to solve it. Our experiments show that FANT effectively supports the operation of AINT network applications. In a fat-tree topology with 30 pods, FANT reduces bandwidth usage to 42.6% of that of the state-of-the-art solution. For scenarios requiring rapid computation, FANT can accelerate algorithm execution by 10^5× by setting acceleration factors, with only a 6.4% performance loss.
ISSN: 2076-3417
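
The abstract outlines a per-cycle loop: lightweight basic measurements run on every link, the previous cycle's results flag suspicious links, and an attention-shifting step decides which probe traces to upgrade to detailed measurements while limiting bandwidth. Below is a minimal Python sketch of that loop; all names, thresholds, and the greedy set-cover rule are illustrative assumptions rather than the paper's actual algorithm or data.

```python
# Hypothetical sketch of one FANT measurement cycle. Names, thresholds, and the
# greedy rule below are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class LinkStats:
    utilization: float  # basic measurement: lightweight, collected on every link
    queue_depth: int    # basic measurement


# Basic measurements from the previous cycle, keyed by link (switch pair).
previous_cycle = {
    ("s1", "s2"): LinkStats(utilization=0.35, queue_depth=4),
    ("s2", "s3"): LinkStats(utilization=0.92, queue_depth=48),
    ("s3", "s4"): LinkStats(utilization=0.15, queue_depth=1),
}

# Candidate probe traces: each traverses a set of links and costs bandwidth.
probe_traces = {
    "trace_a": {"links": {("s1", "s2"), ("s2", "s3")}, "cost": 2.0},
    "trace_b": {"links": {("s2", "s3"), ("s3", "s4")}, "cost": 1.5},
    "trace_c": {"links": {("s3", "s4")}, "cost": 1.0},
}


def find_suspicious(stats, util_threshold=0.8, queue_threshold=32):
    """Flag links whose basic measurements look anomalous (assumed thresholds)."""
    return {
        link
        for link, s in stats.items()
        if s.utilization > util_threshold or s.queue_depth > queue_threshold
    }


def shift_attention(suspicious, traces):
    """Greedy set-cover: pick traces covering suspicious links at the lowest
    bandwidth cost (a stand-in for the paper's heuristic formulation)."""
    uncovered, selected = set(suspicious), []
    while uncovered:
        name, trace = max(
            traces.items(),
            key=lambda kv: len(kv[1]["links"] & uncovered) / kv[1]["cost"],
        )
        if not trace["links"] & uncovered:
            break  # remaining suspicious links are unreachable by any trace
        selected.append(name)
        uncovered -= trace["links"]
    return selected


suspicious_links = find_suspicious(previous_cycle)
active_traces = shift_attention(suspicious_links, probe_traces)
print("suspicious links:", suspicious_links)
print("detailed probe traces this cycle:", active_traces)
```

The greedy cover here is only a stand-in: per the abstract, the paper models the attention-shifting process explicitly and solves it with heuristic algorithms, with acceleration factors available to trade a small amount of measurement quality for much faster computation.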