A short history of online platform transparency

“Platform transparency” refers to the extent to which the inner workings of a digital platform are open and accessible to the public. As such, it can include making recommendation and decision-making algorithms open and accessible to users, providing users with clear and detailed information about how their data is collected, used, and shared, and giving users the ability to control their data and privacy settings. Hopes for a transparent internet were present in the technological optimism of early discourses on digital technologies. Platform transparency discourse is concerned with social responsibility and the rhetoric of openness, democracy, and accountability. Initially seen as an inherent property of digital technologies and the internet, transparency later became an ideal to strive for. A perception of the democratic responsibility of the internet giants (such as Alphabet/Google and Meta/Facebook), for example regarding the political rights of their users throughout the world, has brought the notion of transparency to the centre of online platforms’ rhetoric and activism.

Platform transparency discourses evolve as companies attempt to respond to the perceived need for more transparency. One such example is Google, which published the first transparency reports in 2010. Each iteration of transparency reports (or other initiatives) shows innovations in discursive, normative, technological, institutional, and agonistic elements. Rhetorical, technological, and ideological elements thus persist over extended periods, resulting in a layered discursive and institutional order, alongside discontinuities in platform companies’ approaches to transparency in discourse and practice. There is also fragmentation and contradiction in discourse and practice, stemming from changes in context and strategy.

Between 2010 and the present, the emphasis of internet platforms’ transparency discourse and practice has shifted from a deontological orientation, most noticeable during the initial phase, to the more institutionalized, coherent, and centralized but increasingly complex discourses and practices of recent transparency reporting.

The 1st phase: after the Arab Spring

The first phase roughly corresponds to the period 2010-2012, during the events of the Arab Spring and after the famous “Remarks on internet freedom” speech by U.S. Secretary of State Hillary Clinton. Generalized optimism about the economic and democratic potential of social networking services at this time produced glowing assessments of the role the companies and their platforms had to play on the geopolitical level. Clinton’s speech hailed the arrival of a “new nervous system for our planet” in the form of the internet and social media, and explicitly called for US media companies to challenge foreign governments employing censorship and surveillance.

Platform companies’ transparency discourse focused on internet disruptions, trends in the suppression of dissent, and state overreach, and highlighted a notion of transparency based on quantification and disclosure. The Global Network Initiative, established during this period, represents one such set of partnerships, aimed at countering governmental “challenges to internet freedom”. The purpose of transparency reports (TR) and similar initiatives was to shed light on external interference with the companies’ activities, particularly governmental interference. Google’s transparency reports, for instance, quantified judicial and governmental demands for content removal or user information from 2010 onwards.

The 2nd phase: Snowden and surveillance

A second phase maps onto the Snowden revelations of 2013. In this phase, the discourse and practice of transparency continued to respond to takedown requests and demands for user information from sovereign states’ governments and judicial systems. However, companies focussed on demonstrating pushback against state surveillance and on disclosing the extent of their cooperation with intelligence services, ostensibly to protect user privacy, the community, and their information from the overreach disclosed by Snowden, most visibly through the introduction of privacy tools for users. For example, a Microsoft weblog post stated: “Just as we called for governments to become more transparent (…), we believe it is appropriate for us to be more transparent”.

This was also the reason behind the release of transparency reports (quantified biannual reports on the number and nature of governmental requests) by other companies. In 2013, Facebook started publishing its “Global Government Requests Report”. Facebook also committed to curbing extremist (typically, terrorism-related) content, both as a sociopolitical goal (i.e., legal compliance and political alignment) and as a “community”-oriented goal. Expunging the platforms of terrorist content was presented as a user-protection measure, but it was also critical for avoiding the perception of online platforms as tools for radicalization, even if it remained in tension with privacy-centring discourses.

The 3rd phase: disinformation and political advertising

A new phase, with clear discontinuities both in context and in practice, started in 2016 with the “Brexit” referendum in the UK, the US presidential election, the Facebook-Cambridge Analytica data scandal, and reports of manipulation of political advertising, something we witnessed anew in 2020. In this phase, the focus of transparency discourse shifted towards misinformation, content moderation, and paid political advertising.

In the wake of a shift in public attitudes towards platform companies known as the “techlash,” governmental technology assessment reports voiced stronger concerns about the absence of regulation, along with misgivings about the future of privacy and accountability. Statements published by Facebook and Google acknowledged social and political concern over the spread of misinformation on the platforms during election campaigns. In contrast to the commitment to supporting political movements in the first phase, in 2018 Zuckerberg vowed to ‘defend against election interference’ on Facebook. In May 2018, Google also announced new policies requiring identification for political advertisers in the USA and the inclusion of election transparency information in its transparency reporting project, in tandem with protection tools for users and services “who are at particularly high risk of online attacks”.

The 4th phase: norms and institution-building

A fourth phase, building upon the structures and norms of earlier ones, corresponds to the emergence of a more coherent apparatus of transparency, deployed by the internet giants as a network of norms, policy decisions, lobbying, governance structures, rhetoric, technological tools, and, crucially, partnerships. It is not so much a follow-up to, or a quantitative development of, previous discourses and initiatives as a qualitatively distinct way of addressing ongoing challenges. In November 2018, Facebook’s CEO referred to the company’s approach as a “full system addressing both governance and enforcement”.

At this time, Facebook / Meta, Google, Microsoft, and Twitter, among others, expressed a willingness to cooperate with governments and to act proactively on regulation, demanding to (1) be part of the legislative process to achieve the “right balance” of regulation, (2) be allowed to continue to expand operations, including the deployment of new technologies, (3) limit external oversight through strict information controls and institutional innovation, and (4) implement new industry partnerships on privacy and other standards. This period sees a strong centralization effort taking shape, both in the development of stricter norms through Terms of Service and Community Standards and in the institutionalization of transparency practices. Centralized hubs, such as Microsoft’s Reports Hub, became the norm for transparency-related information and official oversight initiatives. However, information sharing through Application Programming Interfaces (APIs) was limited, in a move researcher Axel Bruns (2019) called “the APIcalypse”. This limited researchers’ ability to independently verify claims of success in improving the platforms’ transparency and governance.

In sum, transparency initiatives are typically seen as largely voluntarist and deontology-based. However, platform transparency follows a conceptualization of government, corporate, and technology transparency imbued with quantitative, disclosure-centric reasoning that reaffirms corporate normative and technological control. The interplay of disclosure and opaqueness, power, normativity, and agonistic elements defining an apparatus, as expressed in discourse, practices, and institutions, opens the opportunity for a more grounded analysis of the governance of online platforms, especially concerning the deployment of new forms of algorithmic and commercial power.

Reference

Bruns, A. (2019). After the ‘APIcalypse’: Social media platforms and their fight against critical scholarly research. Information, Communication & Society, 22(11), 1544-1566.

Conspiracy speakers’ criticality: too little or too much? A Rhetorical Reflexion on Conspiracy Theories

If the well-studied phenomenon of conspiracy theories still catches our attention, it is, among other reasons, because it crystallizes many aspects of our society: for example, our relationship with the media, the notion of transparency, the phenomenon of fake news, but also our ability to live together as a society. In this post, we will focus on the relationship that conspiracy speakers build within their discourses with the notion of “truth” as a value. Our hypothesis is that conspiracy speakers are too confident about this notion; instead of being critical and doubtful about events, as they appear to be at first, they are in fact too sure of being right. From this perspective, we will argue that within conspiracy discourses, truth as a value is paradoxically based mostly on the character of the speaker rather than on the reasoning they put forward.


Understanding the media logics of mooks

For a good ten years now, mooks have bet on a strongly subjective form of slow journalism, on considerable work on the material design of the publication, and on distribution through bookshops. This collective volume, the outcome of a conference held in 2014, aims to make a first contribution to understanding what is no longer a mere passing fashion but rather the sign of a profound transformation of journalistic forms.
