Apple's 2026 Workshop on Privacy-Preserving Machine Learning and AI: Three Core Focus Areas
2026/05/08 08:00
In early 2026, Apple hosted a two-day Workshop on Privacy-Preserving Machine Learning and AI, focused on three areas: private learning and statistics, foundation models and privacy, and attacks and security. Discussions covered frontier topics including federated learning, statistical learning, trust models, privacy accounting, and the challenges posed by foundation models, and Apple published a set of accepted papers along with recordings of the invited talks.
Event Overview
In early 2026, Apple hosted a two-day Workshop on Privacy-Preserving Machine Learning and AI, bringing together privacy researchers from Apple and academia to discuss the latest advances in privacy-preserving ML/AI. The workshop centered on three core areas: private learning and statistics, foundation models and privacy, and attacks and security.
Core Discussion Topics
- Private learning and statistics: bridging theory and practice in federated learning, statistical learning, and differential-privacy accounting.
- Foundation models and privacy: the distinctive privacy challenges raised by foundation models, such as memorization and privacy-preserving prompt learning.
- Attacks and security: analysis of attack models, trust assumptions, and defense mechanisms, with an emphasis on rigorous security evaluation.
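To make the differential-privacy accounting mentioned above concrete, here is a minimal sketch of the classic Laplace mechanism together with basic sequential composition (total privacy loss summed across queries). This is an illustrative textbook example, not code from the workshop; the function names and parameters are our own.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    For a query whose output changes by at most `sensitivity` when one
    individual's record changes, this satisfies epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale) from a uniform in (-0.5, 0.5).
    u = random.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def total_epsilon(epsilons) -> float:
    """Basic sequential composition: privacy losses add up across queries.

    (Tighter accountants exist; this is the simplest bound.)
    """
    return sum(epsilons)
```

For example, releasing a count (sensitivity 1) three times at epsilon 0.5 each consumes a total privacy budget of 1.5 under this simple accounting; much of the workshop's accounting line of work concerns sharper bounds than this naive sum.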
Invited Talks
- Crypto for DP and DP for Crypto — Kunal Talwar (Apple)
- Online Matrix Factorization and Online Query Release — Aleksandar Nikolov (University of Toronto)
- Learning from the People: Communicating about S&P Technology for Responsible Data Collection — Elissa Redmiles (Georgetown University)
- Understanding and Mitigating Memorization in Foundation Models — Franziska Boenisch (CISPA)
Papers Presented at the Workshop (selected)
The following papers were presented during the workshop, spanning differential privacy, federated learning, model memorization, homomorphic encryption, and related directions:
- [Adaptive Methods Are Preferable in High Privacy Settings: An SDE Perspective](https://openreview.net/forum?id=hSpA4DAoMk)
- [Captured by Captions: On Memorization and its Mitigation in Clip Models](https://openreview.net/pdf?id=5V0f8igznO)
- Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem
- Concurrent Composition for Differentially Private Continual Mechanisms
- Contextual Agent Security: A Policy for Every Purpose
- Cram Less to Fit More: Training Data Pruning Improves Fact Memorization
- Demystifying Foreground-Background Memorization in Diffusion Models
- Efficient and Privacy-Preserving Soft Prompt Transfer for LLMs
- Efficient privacy loss accounting for subsampling and random allocation
- Eyes Off My Data: Exploring Differentially Private Federated Statistics To Support Algorithmic Bias Assessments Across Demographic Groups
- Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models
- Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models
- [Memorization in Self-Supervised Learning Improves Downstream Generalization](https://openreview.net/pdf?id=KSjPaXtxP8)
- Memory-Efficient Backpropagation for Fine-Tuning LLMs on Resource-Constrained Mobile Devices
- Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives
- Piquantε: Private Quantile Estimation in the Two-Server Model
- Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning
- Robin Hood and Matthew Effects: Differential Privacy Has Disparate Impact on Synthetic Data
- Terrarium: Revisiting the Blackboard for Multi-Agent Safety, Privacy, and Security Studies
- The Importance of Being Discrete: Measuring the Impact of Discretization in End-to-End Differentially Private Synthetic Data
- The Inadequacy of Similarity-based Privacy Metrics: Privacy Attacks against “Truly Anonymous” Synthetic Datasets
- Trade-offs in Data Memorization via Strong Data Processing Inequalities
Acknowledgments
The workshop organizers were Vitaly Feldman, Christina Ilvento, Tatsuki Koga, Audra McMillan, Congzheng Song, Kunal Talwar, Andreas Thoma, and Jiayuan Ye.
