
WhatsApp is finally going after outside firms that are abusing its platform: The history and context



The scale of online child sexual abuse is huge. Year after year, child protection agencies report increases in the amount of abuse found online, and say the problem has worsened as more children have been at home during the pandemic. Last year, the National Center for Missing & Exploited Children (NCMEC) received 21.4 million reports of online child sexual abuse material. Across all of the companies that reported content, Facebook accounted for 20.3 million, or almost 95 per cent, of that total. Reports of child sexual abuse material have swelled in recent years as the technology used to detect it has improved. And Facebook has been more aggressive in detecting child sexual abuse material than many other tech firms, experts say. But the impact of turning on end-to-end encryption across Instagram and Messenger is still likely to be significant.








At the heart of the debate is an alarming claim: that turning on end-to-end encryption on all messaging platforms and social networks by default would stop law enforcement from being able to catch people like Wilson. But would it? And what, if anything, can be done about it without breaking encryption? The issue is set to define the future of online communication but, after decades of debate, there remains no easy answer, no magic bullet. So what happens next?


However, over recent decades, a successful movement to narrow the application of antitrust laws to a limited consumer welfare standard has allowed monopolies to flourish across industries. The anemic antitrust enforcement that has resulted has enabled increased concentration of power in many sectors, including technology and online services markets. Existing authorities are limited in their abilities to increase competitive pressure on already dominant firms. Furthermore, limitations exist in addressing market dominance arising from inherent network effects; conventional antitrust does not necessarily forbid monopoly in the absence of exclusionary, improper, or predatory acts. Where applicable, antitrust tools can be slow: with important exceptions, such as merger reviews, many are limited to after-the-fact intervention. These qualities have hampered antitrust effectiveness in the online services space, where remedies are sometimes pursued too late.


An opt-in approach offers a degree of future-proofing that may be difficult to provide through statutory definition alone. Broadly allowing companies to opt in ensures that only those that consider themselves infrastructural and understand the requirements choose this model; these are expected to comprise a minority of online services overall. This approach may enable new companies to start with the explicit goal of competing as online infrastructure, and it would offer all infrastructural companies a strong defense against the technical, legal, and public relations costs resulting from good and bad faith demands for increased content moderation lower in the stack. While challenges exist around the incentives and consequences for infrastructure providers in and outside of the tier, those opting in would be regulated by an entity that prioritizes the goals of online infrastructure. Infrastructural firms outside the tier will have to operate under rules designed for broader online services or gatekeepers and contend with the business realities of any potential intermediary liability changes. Business customers will be able to exercise choice in determining which online service provider can best meet their infrastructural needs.


This report does not envision that payment processors or decentralized payment platforms would be eligible to opt in to the online infrastructure class in its initial establishment. Financial transactions are too enmeshed in the existing financial regulatory system and too fraught with broader policy implications that require deeper consideration beyond the scope of this report.


Regulatory effectiveness faces a host of challenges, including regulatory capture, enforcement failures, difficulty for users, and a range of capacity and cultural constraints. These factors present a strong argument for tools that are self-administering where possible, including structural separation and clear statutory lines for highly problematic practices. But as discussed above, there are limits to the ability of statutes to fully address the range, variety, and dynamism of some online services markets. Principles-based rule-making powers can offer a powerful complement to clear statutes in addressing complex, emerging issues and balancing conflicting priorities. New and existing statutes and rule-making powers will all need to be brought to bear in combination, despite the particular shortcomings of each. Shedding new light on longstanding administrability challenges is outside the scope of this paper. But going forward, these challenges should not be underestimated, nor should they serve as a barrier to action.


Interoperability allows users to leave Facebook for rival platforms, including those that both honor the GDPR and go beyond its requirements. These smaller firms will have less political and economic influence than the monopolists whose dominance they erode, and when they do go wrong, their errors will be less consequential because they affect fewer users.


Cloud computing services also moderate, not by removing particular bits of content, but by rejecting whole sites or entire platforms. Typically, services like Amazon Web Services or Microsoft Azure claim a position of neutrality, preferring not to be in the business of picking and choosing, drawing on the protections of Section 230 of the Communications Decency Act (CDA 230) and the sensibility of net neutrality enjoyed by ISPs. At the same time, they reserve the right, in their terms of service or contractual agreements, to drop any client for a wide range of reasons. The decisions they make do not look like platform moderation, in that they are not procedural, consistent, or accountable: most happen in the context of a specific business relationship, where a problematic client will be quietly released from their contract and urged to find another provider. This was apparent when Microsoft was accused of threatening to ban the right-wing social media platform Gab after a complaint came in to customer service; Microsoft apologised for the confusion (Lecher, 2018), and Gab made noise about its rights being threatened. But later, Microsoft urged its client to leave, a move that had much the same effect (Ingram, 2018). This is content moderation, by other means.


The focus on established social media companies means that both the debate and the regulations it produces are likely to be ill-suited to the growing pains of flash-in-the-pan apps. Do startups need looser regulations so they can grow? Or do they need stricter rules, because a lack of regulation might make their users more vulnerable to harm? It depends on what you are asking, and regulation that is attentive to this category of companies has to wrestle with both possibilities. But policies that only imagine established platforms like YouTube and Facebook are never going to accommodate the unique challenges popular-by-surprise apps present. To address this, policymakers could commit to supporting startups when they grow faster than planned and do not have enough staff for content moderation, or could provide new social apps with the knowledge and expertise to develop better, safer systems.


In the past few years, scholars, activists, and journalists have done invaluable work that has helped us further understand the complex infrastructures of content moderation (Gillespie, 2018; Roberts, 2019; Suzor, 2019). But even as conversations about governing content on platforms have captured public attention, rapidly becoming one of the most talked-about aspects of information policy in Europe and North America today, our understanding of the political and regulatory dynamics around content moderation remains limited. How are the content policy processes of companies like Facebook affected by pressure from policymakers, and shaped by regulatory commitments made in various jurisdictions? What strategies do policymakers use to get firms to change their rules, either regionally or globally? What factors determine the success of these efforts? These are critical questions that will require interdisciplinary policy and legal work as content moderation continues to become a hotly contested global public policy issue.


Meanwhile, just as the public has become aware of content moderation, the practice has also grown in importance for the digital services that require it. Just a few years ago, the major Silicon Valley social media firms still relegated large-scale moderation of user-generated content (UGC) to an afterthought, at best (Chen, 2014). The issue C-suite denizens were most likely to avoid was how the practice of moderation, at once mission-critical (from a brand protection and advertiser relations perspective) and one of the most stigmatised parts of their media production chain, put the lie to the claim that these global social media firms were mere engines of free speech. Firms avoided conversations about content moderation whenever possible, choosing instead to wow a largely co-opted tech media sector with the latest advancement in functionality (Stewart, 2017).


The politics of platforms are now subject to much greater public debate. Regulation is no longer so unfathomable, with some regions of the world (e.g., the EU and its member states) much more aggressive toward social media than others (Knight, 2018). In late 2019, the US Congress began to revisit CDA 230. At a 2019 hearing, attorney Katherine Oyama of Google told legislators that Google and other firms were adding CDA 230-like clauses to antidemocratic and typically secret covenants such as multinational trade agreements. Rather than see CDA 230 fade into irrelevance, it seemed that the firms had found a way to expand its reach (House of Representatives, 2019).

