PROBABLY PRIVATE

A newsletter on data privacy by Katharine Jarmul

About

Probably Private is a newsletter for privacy and data science enthusiasts. Whether you're here to learn about AI, machine learning, and data science through the lens of privacy, or the other way around, this is an open conversation on the technical and social aspects of privacy and its intersections with surveillance, law, technology, mathematics, and probability.

Past Issues

  • Claude Code Conspiracies, Privacy Routing and Last Call for Practical AI Privacy
    I share some of my initial research on the privacy and security challenges I've encountered using Claude Code. You'll also learn about privacy routing, and there's a final call to join Cohort 1 of Practical AI Privacy.
  • Practical AI Privacy Course, Agent Investigations and Local LLMs
    This issue introduces my new online masterclass, Practical AI Privacy, surfaces questions around privacy and security in agent-based workflows, and shares materials for getting started with local LLMs.
  • Measuring Privacy in Deep Learning
    In this issue, you'll explore how to measure privacy as part of your deep learning training. I also share materials on getting started with your own deep learning at home (Local AI) and some thoughts on what sovereign AI could mean if we focus on privacy, human rights and thinking differently.
  • The Harder Parts of Differential Privacy in Today's AI
    In this issue, you'll dive into the harder questions of applying differential privacy to today's AI systems. I also share courses for learning new things in the new year, along with open questions around sovereign AI.
  • Differential Privacy in Deep Learning and AI
    In this newsletter, we'll dive into differential privacy in deep learning as a potential solution to the memorization problem. I also offer some advice on quickstarting your security strategy for AI use at your organization.