Technology-related anxiety usually centers on people's fear of being replaced by machines, a fear conceived in the imagination of classic science fiction stories. However, a perhaps more palpable question, given the current moment of technological evolution, is privacy, an issue that has become increasingly trivialized as the culture of emancipatory technology grows.
Have you ever heard the infamous phrase "I have nothing to hide"? Many use it to argue that there is no need for more robust privacy protection, in the sense that only those who commit immoral, unethical, illegal, or socially unacceptable acts should be concerned with defending their right to privacy.
This line of thought, combined with the indiscriminate sharing of personal information on social media, has inspired scholars and Big Tech executives to defend the thesis that network users do not care as much about their privacy as they claim, a phenomenon called the "privacy paradox," leading some to declare privacy dead.
The big problem with trading privacy for the use of certain technologies is that this total transparency of life, encouraged by Big Tech, is not just a matter of others knowing which restaurants you eat at, where your last trip took you, and who your new boyfriend is. Your personal data is being used by companies and governments to define your future.
Let me explain: Big Tech's business model is behavior modulation through surveillance and the massive collection and processing of data (Big Data). Personal data is provided "voluntarily" by users, and digital "trails" are collected through cookies installed by platforms. This data feeds classification algorithms that profile each user by analyzing their online behavior patterns, in order to target content and marketing personalized to the identified profiles.
That is how Netflix "knows" which movies and series you might be interested in, how Amazon "knows" exactly which books to recommend, and how YouTube "knows" which song or video to play next. The profiling technique is used both by marketing companies, which target personalized offers according to the consumer's profile, and by social networks, which surface posts with specific content according to the user's profile.
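To make the mechanism concrete, here is a minimal, purely illustrative sketch of content-based profiling in Python. The catalog, tags, and titles are invented for the example, and real recommenders are vastly more sophisticated; the point is only the general shape: aggregate a user's traces into a profile, then score unseen items against it.

```python
from collections import Counter
from math import sqrt

# Hypothetical catalog: each title tagged with genres (invented data).
CATALOG = {
    "Stranger Things": {"sci-fi", "horror", "drama"},
    "The Crown": {"drama", "history"},
    "Black Mirror": {"sci-fi", "drama", "tech"},
    "Dark": {"sci-fi", "mystery", "drama"},
}

def build_profile(watched):
    """A user's 'profile' is just a count of the tags they interacted with."""
    profile = Counter()
    for title in watched:
        profile.update(CATALOG[title])
    return profile

def score(profile, tags):
    """Cosine-like similarity between the user profile and an item's tags."""
    overlap = sum(profile[t] for t in tags)
    norm = sqrt(sum(v * v for v in profile.values())) * sqrt(len(tags))
    return overlap / norm if norm else 0.0

def recommend(watched):
    """Pick the unseen title whose tags best match the accumulated profile."""
    profile = build_profile(watched)
    candidates = [t for t in CATALOG if t not in watched]
    return max(candidates, key=lambda t: score(profile, CATALOG[t]))

print(recommend(["Stranger Things", "Black Mirror"]))  # → Dark
```

A user who watched two sci-fi dramas gets a third one recommended over a history drama. The same pattern, fed with thousands of behavioral signals instead of a handful of genre tags, is what lets platforms rank which post, ad, or video you see next.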
Through these algorithms, it is possible to modulate the emotions and behavior of the users they analyze. In 2010, during the US legislative elections, Facebook ran an experiment with 61 million users, introducing a window in the News Feed reminding them to vote; for some of them, it also showed photos of contacts who had already clicked the "I voted" button. According to the social network itself, this small difference led 340,000 additional people to vote.
In another experiment, designed to evaluate "emotional contagion," Facebook used the algorithm that distributes posts in the News Feed to test whether users' moods could be modulated by the content they were shown. Users exposed to positive content updated their profiles with positive statements, while those exposed to negative content expressed themselves negatively.
Contemporary technological capacity therefore allows companies and states not only to learn each citizen's profile easily and quickly, but also to modulate their behavior and emotions, violating rights and guarantees such as freedom of choice, access to information, and freedom of expression. In addition, many other algorithms perform identification, biometric recognition, prediction, and similar tasks to assist companies' and governments' decision-making. However, those technologies may carry racial, ethnic, gender, and other biases, which result in discriminatory decisions, sometimes unfairly defining whether citizens will or will not have access to certain opportunities.
In the words of the French philosopher Gilles Deleuze, "there is no need for science fiction to conceive a control mechanism that gives, at every moment, the position of any element within an open environment." After all, we already live in a society of control, in which companies and governments use (sometimes unregulated) technologies to decide important issues in citizens' lives.
During the pandemic, the use of these technologies intensified even further. As SARS-CoV-2 infections increased, surveillance technology systems expanded in Brazil; one example was the government's access to data from cellphone operators to identify crowding. According to the government, the data would be anonymized; however, no anonymization technique is known to be 100% effective, and the claim is often mere rhetoric on the part of companies (and even governments) to create a false sense of security in the people whose data is being collected.
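Why is anonymization so fragile? A classic weakness is the linkage attack: stripping names is not enough when remaining "quasi-identifiers" (ZIP code, birth date, sex) can be cross-referenced with a public dataset. The sketch below uses entirely fictional records and names to illustrate the idea, not any real dataset or system.

```python
# Hypothetical "anonymized" records: names removed, but quasi-identifiers
# (ZIP code, birth date, sex) kept. All data here is invented.
anonymized = [
    {"zip": "04501", "birth": "1987-03-12", "sex": "F", "diagnosis": "diabetes"},
    {"zip": "04501", "birth": "1990-07-01", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (think of a voter roll) carrying the same
# quasi-identifiers plus names -- harmless on its own.
public = [
    {"name": "Ana Souza",  "zip": "04501", "birth": "1987-03-12", "sex": "F"},
    {"name": "Bruno Lima", "zip": "04501", "birth": "1990-07-01", "sex": "M"},
]

def reidentify(anon_rows, public_rows):
    """Link records by matching on the quasi-identifier triple."""
    keys = ("zip", "birth", "sex")
    index = {tuple(p[k] for k in keys): p["name"] for p in public_rows}
    return {
        index[tuple(r[k] for k in keys)]: r["diagnosis"]
        for r in anon_rows
        if tuple(r[k] for k in keys) in index
    }

# Both "anonymous" diagnoses end up tied to named individuals.
print(reidentify(anonymized, public))
```

When the quasi-identifier combination is unique, a simple join defeats the anonymization entirely; this is why researchers insist that removing direct identifiers alone does not make a dataset safe.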
In that sense, it is clear that the business models of Big Tech and many other companies, as well as the prevailing idea of innovation and development, usually depend on the massive collection of personal data, which is being carried out ever more intensely. One example is the Cadastro Base do Cidadão (Citizen's Base Registry), a mega-database of Brazilian citizens created by a decree signed by President Jair Bolsonaro in 2019.
This and other databases controlled by technology companies and data brokers may be used to feed algorithms that can infer emotions, ethnicity, gender, age, political affiliation, happiness, intelligence, use of illegal substances, and even whether a person's parents are divorced; they can identify protesters, predict areas where crimes are likely to occur, and extract a myriad of other elements that can be used to promote efficiency and development in different areas. However, they also pose serious risks of violating fundamental rights, which are the pillars of democracy.
The growing use of personal data in monitoring technologies, justified by discourse that appropriates the public imagination and often does not match reality, normalizes the violation of privacy and of various other fundamental rights; self-determination, freedom of choice, and further rights are lost along with privacy.
In this sense, without intending to exhaust the debate, I emphasize that privacy is also a matter of freedom of choice, self-determination, expression, conscience and belief, association, and access to information, among other constitutional rights and guarantees. It is about ensuring that our data will not be used to modulate our opinions and behavior, or to define our opportunities in a discriminatory, unfair way.
The fight for privacy has never been more critical. No, privacy is not dead yet, and we need it to exercise our citizenship with all the rights and freedoms that a democracy guarantees us. So we need to rekindle the debate. Do not fall for the effectiveness-versus-privacy fairytale. We do not need to give up our privacy for society to work; quite the opposite.