Google’s recent blog post, ‘How Privacy Sandbox raises the bar for ad privacy’ once again attempts to justify its anti-competitive Privacy Sandbox platform based on alleged improvements to consumer “privacy.”
However, on close reading, Google’s statements only highlight the hypocrisy of the company’s own activities.
Identity and identifiers
Google contends that most digital businesses currently rely on identifiers for seamless exchanges of data across apps and websites.
The issue with Google’s latest claims lies in its clear distinction between the ideas of ‘identifiers’ and ‘identity’, as in this quote:
“The Privacy Sandbox’s privacy-preserving APIs shield the identity of the user and restrict the amount of available data, while enabling key advertising use cases… Most digital ads today rely on exchanging user identifiers between parties, which allows for easy re-identification of a user across different apps and websites.”
When Google talks about ‘identifiers’ in this context, it is referring to a cookie file that stores an interoperable match key – a string of alphanumeric characters that does not, by itself, identify an individual person. Google then asserts that a person’s “identity” could theoretically be revealed from these identifiers alone.
But here’s the catch. Google is advocating for the removal of rivals’ identifiers (cookies) on the pretext that they could theoretically be used to identify individuals. This stance, however, diverges from recent precedents that accurately recognise that such interoperable match keys are not inherently harmful. Those decisions recognise that the real threat lies in the misuse of technology, not in the technology itself – it is therefore excessive to block a technology simply because it could potentially be misused.
Defining Risk and Responsibility
Recent case law, such as SRB v EDPS, and authorities like the UK’s ICO have explicitly stated that risk must be evaluated in relation to which organization possesses the data. The same data that is personal and identifiable in one organization’s hands can be anonymous in another’s. Thus Google’s blanket call for the elimination of all interoperable match keys – whether stored in cookies, MAIDs or elsewhere – is really just a thinly veiled attempt to substitute its own ad solutions for the competitive market of other business-facing solution providers.
Whilst it is true that an identifier such as a cookie could be used by a bad actor to reidentify an individual, such an actor must already possess the individual’s identity, as the random identifiers stored in cookie files are not, on their own, linked to any specific person. Moreover, data protection regulations explicitly state, and courts have similarly held, that organizational measures – such as internal data handling policies, privacy policies and contracts – are important factors in determining whether reidentification is possible through “legal means.”
A clear parallel is a pair of scissors. Scissors can be used to cut a piece of paper or to cause someone harm. But causing harm with a pair of scissors is in itself illegal, so we do not make the possession or use of scissors for legitimate purposes an offence.
As a result, the whole basis of Google’s Privacy Sandbox project is undermined. It was introduced as a means of enhancing user privacy by removing control over interoperable match keys from most businesses other than Google. However, if – as the above suggests – these match keys can be used by organizations with appropriate safeguards that mitigate any privacy issues, then the logic of the project fails.
In fact, it reveals the Sandbox for what it truly is – an attempt by a monopolist to remove safe and interoperable technologies (e.g., the cookie) and replace them with a system that is owned and controlled by Google itself.
Identifying Google’s Risky Conduct
What adds a layer of irony is Google’s own use of information that is very much personally identifiable.
Despite touting its proprietary Sandbox as a privacy-enhancing solution, Google gathers reams of personal data on each and every one of its users, sourced through an opt-in process that is hard for most web users to avoid. Moreover, Google admitted to the Australian Competition and Consumer Commission (ACCC) that it routinely uses its identifiers to reidentify individual users. Google’s ‘Customer Match’ technology blatantly uses individual users’ identities to transfer information across marketers’ and Google’s properties.
The question arises: if Google wants to protect individuals’ privacy, when will it stop using identity-linked data, which is universally recognized as posing far more risk to individuals than the random identifiers it is aiming to block?
Why is it acceptable for Google to have and use interoperable matching technology – which is required for business-facing competition across digital markets – while withdrawing that right from its rivals?
Google has – along with the other platforms – been subject to a number of lawsuits over its misuse of both consumer information and rivals’ proprietary data, so why should we trust it more than anyone else?
A Critical Reassessment of Google’s Privacy Sandbox
Google’s latest blog post shows that the Sandbox initiative rests on a flawed premise: that rivals’ anonymised identifiers are a greater privacy risk than Google’s own use of identity-linked data for advertising. Furthermore, it highlights Google’s hypocrisy in blocking rivals’ use of interoperable technologies and open standards that it regularly uses itself. Offering its own proprietary APIs as a replacement is not a viable substitute. With mounting awareness, the tide is turning on the Sandbox. Hopefully Google’s self-serving contradictions will serve as a catalyst to drive that momentum.
Intrigued by Google’s Privacy Sandbox discourse? Explore how the platform’s claims and actions don’t always align. Contact MOW to learn more.