Distributed Estimation for ℓ₀-Constrained Quantile Regression Using Iterative Hard Thresholding

Bibliographic Details
Main Authors: Zhihe Zhao, Heng Lian
Format: Article
Language: English
Published: MDPI AG 2025-02-01
Series: Mathematics
Online Access:https://www.mdpi.com/2227-7390/13/4/669
Description
Summary: Distributed frameworks for statistical estimation and inference have become a critical toolkit for analyzing massive data efficiently. In this paper, we present distributed estimation for high-dimensional quantile regression with an ℓ₀ constraint using iterative hard thresholding (IHT). We propose a communication-efficient distributed estimator that converges linearly to the true parameter up to the statistical precision of the model, even though the check-loss minimization problem with an ℓ₀ constraint is neither strongly smooth nor convex. Under suitable assumptions, the distributed estimator achieves the same convergence rate as the estimator based on the whole data set. In our simulations, we illustrate the convergence of the estimators under different settings and demonstrate the accuracy of nonzero-parameter identification.
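The abstract names iterative hard thresholding applied to the check loss as the core routine. The sketch below is only a rough illustration of that general idea, not the authors' algorithm: it runs a naive distributed IHT loop in which each machine computes a subgradient of its local check loss, the center averages these (one communication round per iteration), takes a gradient step, and hard-thresholds to the s largest coordinates. The function names, the constant step size, and the toy data settings are illustrative assumptions; the paper's communication scheme, step-size choice, and convergence guarantees are given in the article itself.

```python
import numpy as np


def check_loss_subgrad(X, y, beta, tau):
    """Subgradient of the averaged check loss (1/n) * sum rho_tau(y_i - x_i' beta)."""
    r = y - X @ beta
    # rho_tau'(u) = tau - 1{u < 0}; used as a subgradient at u = 0 as well
    return -X.T @ (tau - (r < 0).astype(float)) / len(y)


def hard_threshold(beta, s):
    """Keep the s largest entries of beta in absolute value; zero out the rest."""
    out = np.zeros_like(beta)
    keep = np.argsort(np.abs(beta))[-s:]
    out[keep] = beta[keep]
    return out


def distributed_iht_qr(shards, tau, s, step=0.5, n_iter=300):
    """Naive distributed IHT for l0-constrained quantile regression (illustrative).

    Each machine holds one (X_k, y_k) shard. Per iteration, every machine sends
    its local check-loss subgradient to the center, which averages them, takes
    a gradient step, and hard-thresholds the iterate to s coordinates.
    A constant step size is used here purely for simplicity.
    """
    p = shards[0][0].shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        grads = [check_loss_subgrad(Xk, yk, beta, tau) for Xk, yk in shards]
        beta = hard_threshold(beta - step * np.mean(grads, axis=0), s)
    return beta


# Toy usage: m machines, sparse truth, tau = 0.5 (median regression).
rng = np.random.default_rng(0)
n_per, m, p, s = 500, 10, 100, 5
beta_true = np.zeros(p)
beta_true[:s] = 1.0
shards = []
for _ in range(m):
    Xk = rng.standard_normal((n_per, p))
    yk = Xk @ beta_true + rng.standard_normal(n_per)  # symmetric noise, so the median fit targets beta_true
    shards.append((Xk, yk))
beta_hat = distributed_iht_qr(shards, tau=0.5, s=s)
print("recovered support:", np.flatnonzero(beta_hat))
```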
ISSN: 2227-7390