PROBABLY PRIVATE

A newsletter on data privacy by Katharine Jarmul

About

Probably Private is a newsletter for privacy and data science enthusiasts. Whether you are here to learn about AI, machine learning, and data science through the lens of privacy, or the other way around, this is an open conversation on the technical and social aspects of privacy and their intersections with surveillance, law, technology, mathematics, and probability.

Past Issues

  • Differential Privacy in Deep Learning and AI
    In this issue, we'll dive into differential privacy in deep learning as a potential solution to the memorization problem. I also offer advice on jump-starting your security strategy for AI use at your organization.
  • Attacks on Machine Unlearning and new Red Teaming Course
    This issue covers attacks on machine unlearning that expose information by comparing learned and unlearned models. It also introduces the new Probably Private YouTube Minicourse on Red Teaming AI/ML systems and upcoming masterclasses.
  • How does machine unlearning work?
    In this issue, you'll explore today's machine unlearning approaches and the challenges practitioners face when implementing unlearning in practice.
  • Machine Unlearning: What is it?
    In this issue, you'll investigate different definitions of machine unlearning and what you can learn from studying information theory and machine forgetting research. I share a new interview with Tariq Yusuf on "What is privacy engineering?" and a critique of ever-expanding AI risk taxonomies.
  • Guardrails: What are they? Can they help with privacy issues?
    In this issue, you'll learn about the different approaches companies take to guardrails and see which privacy problems they address and which they don't.