PROBABLY PRIVATE

A newsletter on data privacy by Katharine Jarmul

About

Probably Private is a newsletter for privacy and data science enthusiasts. Whether you are here to learn about AI, machine learning and data science through the lens of privacy, or the other way around, this newsletter is an open conversation on the technical and social aspects of privacy and their intersections with surveillance, law, technology, mathematics and probability.

Past Issues

  • Attacks on Machine Unlearning and new Red Teaming Course
    This issue covers attacks on machine unlearning that expose information by comparing learned and unlearned models (a toy sketch of this comparison appears below the issue list). It also outlines the new Probably Private YouTube Minicourse on Red Teaming AI/ML systems and upcoming masterclasses.
  • How does machine unlearning work?
    In this issue, you'll explore today's machine unlearning approaches and the challenges practitioners face when actually implementing unlearning.
  • Machine Unlearning: What is it?
    In this issue, you'll investigate different definitions of machine unlearning and what you can learn from studying information theory and machine forgetting research. I share a new interview with Tariq Yusuf on "What is privacy engineering?" and a critique of ever-expanding lists of AI risk taxonomies.
  • Guardrails: What are they? Can they help with privacy issues?
    In this issue, you'll learn about the different approaches companies take to guardrails and see which privacy problems they address and which they don't.
  • Privacy attacks on AI/ML systems
    In this issue, you'll learn about the two most common privacy attacks on ML/AI systems: Membership Inference Attacks and Data Exfiltration or Reconstruction.
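The comparison attack mentioned in the first issue above can be illustrated in a few lines. What follows is a minimal sketch, not code from the newsletter: it assumes scikit-learn, uses retraining from scratch as a stand-in for exact unlearning, and all data and variable names are illustrative. The idea is that if a model's confidence on a record shifts noticeably once that record is "unlearned", an attacker who can query both models learns the record was once in the training set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy dataset; the first record is the one a user asks to have unlearned.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
record, label = X[:1], y[0]

# "Learned" model trained on everything; "unlearned" model retrained
# without the target record (retraining stands in for exact unlearning).
learned = LogisticRegression().fit(X, y)
unlearned = LogisticRegression().fit(X[1:], y[1:])

# The attacker compares both models' confidence on the target record.
p_learned = learned.predict_proba(record)[0, label]
p_unlearned = unlearned.predict_proba(record)[0, label]
print(f"confidence as member: {p_learned:.3f}, after unlearning: {p_unlearned:.3f}")

# A noticeable drop between the two scores leaks that the record was
# in the original training set: membership inference via unlearning.
```

Published attacks refine this idea, aggregating such differences over many records and model outputs rather than a single score, but the leakage channel is the same: the delta between the learned and unlearned models.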