Publications

(2025). Learning Constant-Depth Circuits in Malicious Noise Models. COLT 2025.

(2025). The Power of Iterative Filtering for Supervised Learning with (Heavy) Contamination. Under review.

(2025). Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees. ICLR 2025.

(2025). Testing Noise Assumptions of Learning Algorithms. Under review.

(2024). Learning Noisy Halfspaces with a Margin: Massart is No Harder than Random. NeurIPS 2024 Spotlight.

(2024). Tolerant Algorithms for Learning with Arbitrary Covariate Shift. NeurIPS 2024 Spotlight.

(2024). Efficient Discrepancy Testing for Learning with Distribution Shift. NeurIPS 2024.

(2024). Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension. COLT 2024 Best Paper.

(2024). Learning Intersections of Halfspaces with Distribution Shift: Improved Algorithms and SQ Lower Bounds. COLT 2024.

(2024). Testable Learning with Distribution Shift. COLT 2024.

(2024). An Efficient Tester-Learner for Halfspaces. ICLR 2024.

(2023). Tester-Learners for Halfspaces: Universal Algorithms. NeurIPS 2023 Oral.

(2023). Agnostically Learning Single-Index Models using Omnipredictors. NeurIPS 2023.

(2022). Learning and Covering Sums of Independent Random Variables with Unbounded Support. NeurIPS 2022 Oral.

(2021). Aggregating Incomplete and Noisy Rankings. AISTATS 2021.
