Privacy vs. Surveillance: Where Do We Put the Slider?

Privacy and surveillance determine the level of liberty our society can afford today. They define the extent to which companies can monitor and manipulate individuals, pushing them towards a passive state where technology can be used to deny their rights and liberties. For companies keen to thrive in the Information Age, people end up being perceived not as consumers or citizens, but as pools and fields of information, ready to be mined and harvested.

Data: a means of social control and business power

If privacy has become such an important issue, it’s because the tight grip which big tech companies have over the data of their users is the basis for their dominance, and possibly for the way they abuse it. In other words, information capitalism has turned data into a means of social control, and privacy harms can trigger a wealth of related harms, chilling freedom of speech, conscience, association and more.

But concerns for privacy are often described as being anti-business; Richard Posner, the American jurist and economist, famously argued that individual privacy hindered capitalism by interrupting the free flow of information that markets need to be efficient. Andreas Mundt, head of Germany’s antitrust regulator, took Posner at his word when he explained that “data can provide market power” – and this market power means a transformative power over society. In February 2019, he did not hesitate to pronounce a surprising condemnation of Facebook, arguing in a 300-page ruling that the company could only gather so much data because it was in a dominant position, and that this data would, in turn, reinforce that dominance.

And it doesn’t stop there. In 2013, Edward Snowden took tremendous risks to reveal the active role big tech companies play in state surveillance, denouncing “the blurring of public and private boundaries in surveillance activities” as well as “collaborations and constructive interdependencies between state security authorities and high-tech firms”. It’s a vicious circle. But it’s a lucrative one. It’s no surprise that Peter Thiel once famously said “competition is for losers”. 

As of today, text messages and mail apps are controlled by Apple and Google, Facebook bought WhatsApp and Instagram, Snapchat bought Zenly, and Microsoft bought LinkedIn. Only a select few companies are now able to operate worldwide, and they try as hard as they can to prevent new ones from emerging.

In 2011, Cory Doctorow declared at the Chaos Computer Club: “The world we live in today is made of computers. We don’t have cars anymore; we have computers we ride in. We don’t have aeroplanes anymore; we have flying Solaris boxes attached to a bucketful of industrial control systems. A 3D printer is not a device, it’s a peripheral, and it only works connected to a computer. Radio is no longer a crystal: it’s a general-purpose computer, running software.” 

The issues of competition, democracy, privacy and surveillance are thoroughly interconnected. Together, they frame the digital question. Answers used to come in the form of meta-regulation such as net neutrality – a concept propounded by the American scholar Tim Wu in his paper Network Neutrality, Broadband Discrimination – but it’s now clear that this is not enough, as it only guarantees the neutrality of the pipes.

The new cognitive habits normalised by Facebook, Snapchat and Google have deep, near-spiritual consequences. New UX/UI experiences transform our perception and our reflexes. Augmented reality and virtual territories are superimposed on our normal, physical environment. The former concepts of presence, distance and inclusion take on different meanings.

The results are already here, and they raise complex questions we cannot yet fully answer: how will competing algorithms interact? Will they provoke market crashes? Should we allow them to decide whether a student is admitted to one school or another?

From privacy to personal data

People were already asking these questions back in 1978, when the first French and German laws on data regulation were enacted – beginning with the claim that computers should be at the service of citizens, and that no decision concerning their rights should be entirely automated. These regulations stood the test of time and are still surprisingly relevant today. Indeed, they were translated into a European directive in 1995, and they then formed the basis of the EU’s General Data Protection Regulation (GDPR), adopted in 2016. But they were adapted to a new context, including several competition provisions, in an attempt to achieve through data protection what appears difficult to achieve through antitrust regulation – hence the GDPR’s provisions on compliance, data portability and transparency, and the increased importance it gives to regulatory authorities.

It’s true that the EU is not a big player when it comes to big tech companies, with only eight companies in the worldwide top 100. But with its rich, open, liberal and pro-market social democracies, the EU is an economic engine of the digital industry. Big tech companies enjoy their biggest market shares in the EU, which accounts for more than a third of their revenues. The EU market is also extremely open, much more so than the US market or its Asian counterparts. It’s only logical that other players are beginning to take an interest in it, mainly China and Russia, with a focus on messaging.

Despite its lack of big tech companies, the EU is becoming the regulatory arena for the rest of the world. It’s the place where laws are being written, fines are being levied, lobbying is happening and ideas are being tested – such as Germany forbidding Facebook from combining the data it gathers from its various services, including WhatsApp and Instagram. In Europe, privacy is sacred – and thus worthy of special protection – as personal data, for instance, is perceived as an extension of the human being. These European ideas have far-reaching consequences. China and India now have their own GDPR-inspired legislation, and in June 2018, even the US Supreme Court struck a serious blow to the third-party doctrine – the view that one can have no expectation of privacy when sharing information with a third party.

Even if local particularities and interpretations can complicate the implementation of a global framework, a worldwide consensus is now emerging on privacy in the digital age – or, more generally, on the digital question. It’s not so much about drawing a line between privacy and surveillance as about understanding that even when people share their data, privacy protection should travel with it, applying throughout the digital environment to protect the virtual persona of the individual. The concept is clearer when framed in terms of the French notion of “personal data” rather than “privacy”.

In the end, it’s all about contextual regulation, depending not only on how the data is used, but also on what it is used for, where, and by whom – with a growing view that the regulations applying to this data must protect not only privacy, but also competition and democracy.

It’s only the beginning. 

Jean-Baptiste Soufron – Partner at FWPA Lawyers in Paris, former General Secretary of the French National Digital Council


Who Are We?

The foundation gathers thought leaders, researchers and decision-makers from Asia and Europe to lead working groups and research projects on the positive impacts of artificial intelligence on our society.

© 2018 Live With AI | All Rights Reserved