PROBABLY PRIVATE

A newsletter on data privacy by Katharine Jarmul

About

Probably Private is a newsletter for privacy and data science enthusiasts. Whether you are here to learn about data science through the lens of privacy, or the other way around, this newsletter aims to be an open conversation on the technical and social aspects of privacy and its intersections with surveillance, law, technology, mathematics and probability.

Past Issues

  • Memorization in Machine Learning and Multidisciplinary Practices
    Does memorization in machine learning happen, and if so, how? In this issue, I share initial thoughts on an article series exploring how, why, when and exactly what happens when deep learning models memorize their training data. I also share some post-sabbatical thoughts on next steps, along with observations on multidisciplinary settings for privacy work.
  • AI Act, Biden's Executive Order on AI, Data Privacy in der Praxis
    This issue covers recent legislative changes, including the EU's AI Act, Data Governance Act and Data Act, as well as Biden's Executive Order on AI. In addition, Practical Data Privacy is being released in German, with updated sections on recommended risk analysis for LLMs and on how LLMs and other large models memorize their training data.
  • Meta's Mega-GDPR-Fine and Data Sovereignty, Ethical E-Commerce and Empowerment
    This issue covers the €1.2B/$1.3B fine handed down by European authorities to Meta, and why this fine could change the way data flows in our world. I also share my recent experience attending the ethical e-commerce summit, which offered inspiration on how to empower small- and medium-sized businesses to approach sustainability, ethical data use and built-in privacy.
  • ChatGPT & GDPR Showdown, AINow Report and Privacy, Privilege & Fraud
    In this issue, you'll learn about the latest developments between GDPR authorities and ChatGPT, including my own experience sending a deletion request. I'll share some highlights from AINow's newest report on power and Big Tech. Finally, following the UK pension authority's tweet featuring non-consensual videos of people being taken into custody for alleged "fraud", I'll examine the links between privacy, privilege and fraud detection.
  • All your text are belong to ChatGPT
    In this issue, I dive into known privacy and security problems with ChatGPT, including a recent privacy and security incident, an analysis of potential security problems with ChatGPT plugins, and musings on how closed training datasets hide unknown risks and ethical implications.