Opinion

Bootleggers, Baptists, and Big Data: Did GDPR gold-plating go too far?

What did the GDPR cost? What were its benefits?

A paper published in May by the National Bureau of Economic Research (NBER), titled ‘GDPR and the Lost Generation of Innovative Apps’, estimates that the General Data Protection Regulation (GDPR) introduced by the EU in May 2018 reduced consumer surplus and aggregate app usage:

· Examining 4.1 million apps on the Google Play Store, the authors of the study found that nearly a third of available apps were removed between July 2016 and October 2019.

· The entry of new apps fell by 47.2 percent. You can find the full paper here.

The broad stated aim of the GDPR was to adopt a common definition of user data rights to harmonise a European “area of freedom, security, and justice”. The regulation sought to minimise the collection of personal data, and it granted data subjects certain rights. These may be important consumer protections, such as the right to review, reset, or correct data about oneself, handing considerable agency to the data subject. Of course, the anxiety to shore up online privacy is understandable. As the digital world encompasses an ever-greater proportion of our lives, the importance of data security only grows.

However, offering the data subject important rights within a system should not give licence to throw spanners into useful machines like data-driven business models, provided an individual’s right to privacy is respected. Nor should we lose sight of the fact that data exchanges can be of enormous benefit to society and the consumer – thoughtlessly minimising innocuous data collection for its own sake achieves little.

Contrary to popular belief, online data is rarely identity-linked. De-identification safeguards are commonplace: an individual becomes ‘User 1’ or ‘Browser Session 1’. When such safeguards are in place to de-identify often generic, humdrum everyday information, the term ‘about oneself’ seems a misnomer. Despite this, an overly broad reading of GDPR has seemingly prevailed. Instead of operating on the basis that regulation is only necessary where there is evidence of consumer detriment (the bar for intervention in UK consumer protection law), a catch-all, hazard-based approach has stifled innovation in the technology sector. Without a robust specification of harm, GDPR has acted as the bogeyman of European business: the mass engagement of specialist consultants indicates a deep fear of falling foul of the weighty fines non-compliance carries, a fear that even a clean-hands policy could not allay. According to one PwC report, firms expected to spend $1.4 million on average to ready themselves for GDPR and avoid the maximum financial penalty of the larger of 20 million euros or 4 percent of annual revenue. [i]

Meanwhile, the upsides of GDPR have been contrastingly murky. Janßen et al. found that the frequency of apps requesting (arguably) sensitive permissions fell by 6.5 percent following the imposition of the GDPR. However, this statistic is complicated by the fact that requests for privacy-sensitive permissions had already been declining before the enactment of GDPR. The decreased data collection is estimated to be worth between $1.13 and $11.25 per consumer annually, falling short of the authors’ estimates of the welfare loss to consumers and producers. At least from a utilitarian perspective, the costs of GDPR seem to outweigh its benefits.

“So what?” might be the initial response of many. Indeed, talk of a “lost generation of apps” fails to engender the same fear and urgency as “commercial surveillance” invariably does, a term which has caused the Adtech industry a great deal of reputational damage of late. For those who do not like the idea of people using data full stop, appeals against GDPR based on reduced commercial innovation tend to fall flat. The opposing line, firmly toed by many in Brussels, is that the interests of business should not take precedence over those of the individual, which is fair. However, this argument is simply a red herring in the case of GDPR: to equate low-risk, secure data exchanges with the sharing of sensitive, personal data is of zero benefit to the user and hurts consumer choice.

The only winners of this approach are the largest technology companies, which benefit enormously from harsh restrictions on others’ handling of data because those restrictions reduce competition. Take Google, for instance. Before the introduction of GDPR in 2018, Google shared anonymised user IDs with any advertiser through its Google Data Transfer system, whilst imposing sufficient safeguards and technical mechanisms to protect the end user. [ii] The same system exists today. The only difference is that granular user-level data on Data Transfer is now available only to a limited number of partners. Thus, Google replaced a more egalitarian system of data exchange with a patently discriminatory one, consolidating its control of the supply chain, all, supposedly, in the name of protecting personal data. Erecting barriers to the flow of data is not just costly to the government and the consumer; it exacerbates this evidently unbalanced playing field, especially where it prevents rivals from using data to compete.

The unwitting alignment of Big Tech and the technophobes on GDPR, then, amounts to a modern-day “Bootleggers and Baptists” coalition, no less striking in its tension.

Fundamentally, data exchange is a tool that, of course, has the potential to be misused, but we should not be blind to its benefits. It must be conceded that exchanges can be wholly positive as well as innocuous. Indeed, in 2021, the Ada Lovelace Institute published a paper entitled ‘Exploring legal mechanisms for data stewardship’, in which the authors outline a number of proposed initiatives for data exchange between third parties for the public good. [iii] For instance, following the outbreak of Covid-19, the Emergent Alliance was founded to share ‘knowledge, expertise, [and] data’ in order to aid the global pandemic recovery.

GDPR would seem, then, to be an unfortunate case of throwing the baby out with the bathwater. Its hazard-based approach has stymied innovation and competition for the sake of poorly defined, much less benchmarked, benefits. However, with the UK now outside the EU, it will be interesting to see whether reassessments like the Data Protection Bill alter the direction of travel.

There is cause to be hopeful. Chapters one to four of the ICO’s new draft guidance clarify three distinct categories of data: Identity-linked, De-identified, and Anonymous. The guidance takes a pragmatic approach to risk analysis, focusing on cases where there is a reasonably likely risk of harm or abuse and thereby allowing targeted intervention. It sensibly concludes that the appropriate test is whether harm is “reasonably likely” as opposed to “ever possible”. This clarifies that handlers would be free from undue regulation when working with Anonymised or De-identified data sets.

The expected changes to the UK approach would pave the way towards greater consumer choice and producer innovation. They would help to push back against bureaucratic and Big Tech restrictions, and to strike the reasonable balance between the freedom of business and the individual that has been lacking under GDPR.

It takes imagination to see an “unseen” loss like the lost generation of apps, but the indications are that the UK ICO and CMA grasp the point: important data rights can be protected while avoiding costly overreach into cases where risks are low.

For further information on the associated problems of overly broad data protection laws, please follow the link here to Stephen Dnes’ piece ‘Big Data Protection: Big Problem’.


[i] GDPR-Infographic-design-v4 (insight.com)

[ii] What is Ads Data Hub? | Croud Digital Marketing Blog

[iii] Exploring legal mechanisms for data stewardship | Ada Lovelace Institute

Header image courtesy of Creative Commons (licensed under a Creative Commons free licence)