PROBABLY PRIVATE

A newsletter on data privacy by Katharine Jarmul

About

Probably Private is a newsletter for privacy and data science enthusiasts. Whether you are here to learn about AI, machine learning, and data science through the lens of privacy, or the other way around, this is an open conversation on the technical and social aspects of privacy and their intersections with surveillance, law, technology, mathematics, and probability.

Past Issues

  • How does machine unlearning work?
    In this issue, you'll explore today's machine unlearning approaches and the challenges practitioners face when actually implementing unlearning.
  • Machine Unlearning: What is it?
    In this issue, you'll investigate different definitions of machine unlearning and what you can learn from studying information theory and machine forgetting research. I share a new interview with Tariq Yusuf on "What is privacy engineering?" and a critique of ever-expanding lists of AI risk taxonomies.
  • Guardrails: What are they? Can they help with privacy issues?
    In this issue, you'll learn about the different approaches companies take to guardrails and see which privacy problems they address and which they don't.
  • Privacy attacks on AI/ML systems
    In this issue, you'll learn about the two most common privacy attacks on ML/AI systems: Membership Inference Attacks and Data Exfiltration or Reconstruction.
  • Common AI Product Privacy Mistakes, Masterclasses and Trainings
    In this issue, you'll look at some common privacy mistakes organizations make when building AI products or agent workflows. I also announce upcoming masterclasses and trainings available to organizations building AI products, and discuss why talking about privacy is hard right now.