Differential Privacy (DP) is one of the most widely adopted formal models of privacy protection, but its semantics, especially in the presence of correlated data and in the adversarial interactive setting, are still not broadly understood among data science practitioners. In this paper, we first look at how DP originated from research on database-reconstruction attacks and offer practical considerations on its use for protecting databases that contain sensitive information. We then investigate how the semantics of DP can be defined and understood in two ways: first using a Bayesian framework that formalises how a pair of secrets remains indistinguishable to an adversary before and after an observation, and then using privacy games familiar from cryptography. These formalisations allow us to pin down exactly which notions of privacy DP mechanisms can and cannot protect, and to draw out the underlying assumptions that need to be verified in real-world applications. Finally, we look at how different forms of linkage attacks, which underlie almost all published privacy attacks in the literature, can be blunted using DP.
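As a concrete illustration of the kind of DP mechanism discussed above, the sketch below shows the classic Laplace mechanism applied to a counting query. This is a minimal, standard-library-only sketch, not code from the paper; the function names and the inverse-transform noise sampler are illustrative choices.

```python
import math
import random


def dp_count(values, predicate, epsilon: float) -> float:
    """Release a count satisfying epsilon-DP via the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one record
    changes the true count by at most 1, so Laplace noise with scale
    1/epsilon is sufficient for epsilon-DP.
    """
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise by inverse-transform sampling.
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    # Clamp the log argument to avoid log(0) in the rare u == -0.5 case.
    noise = -scale * math.copysign(1.0, u) * math.log(max(1.0 - 2.0 * abs(u), 1e-12))
    return true_count + noise
```

For example, `dp_count(ages, lambda a: a >= 65, epsilon=0.1)` would return the number of records with age at least 65, perturbed by noise of scale 10; smaller values of epsilon give stronger indistinguishability between neighbouring databases at the cost of noisier answers.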
The full paper is here.