PROBABLY PRIVATE

A newsletter on data privacy by Katharine Jarmul

About

Probably Private is a newsletter for privacy and data science enthusiasts. Whether you are here to learn about AI, machine learning and data science through the lens of privacy, or the other way around, this place is an open conversation on the technical and social aspects of privacy and their intersections with surveillance, law, technology, mathematics and probability.

Past Issues

  • Measuring Privacy in Deep Learning
    In this issue, you'll explore how to measure privacy as part of your deep learning training. I also share materials on getting started with your own deep learning at home (Local AI) and some thoughts on what sovereign AI could mean if we focus on privacy, human rights and thinking differently.
  • The Harder Parts of Differential Privacy in Today's AI
    In this issue, you'll dive into the harder questions that arise when applying differential privacy to today's AI systems. I also share courses for learning new things in the new year and questions around sovereign AI.
  • Differential Privacy in Deep Learning and AI
    In this newsletter, we'll dive into differential privacy in deep learning as a potential solution to the memorization problem. I also offer some advice on quickstarting your security strategy for AI use at your organization.
  • Attacks on Machine Unlearning and new Red Teaming Course
    This issue uncovers attacks on machine unlearning that expose information by comparing learned and unlearned models. It also outlines the new Probably Private YouTube minicourse on red teaming AI/ML systems and upcoming masterclasses.
  • How does machine unlearning work?
    In this issue, you'll explore today's machine unlearning approaches and the challenges practitioners face when actually implementing unlearning.