At the Participatory Platform Governance Lab, we investigate how users experience, enact, and challenge the governance of social media platforms. Our goal is to understand the possibilities for political participation in sociotechnical systems and incorporate user perspectives into policy recommendations.

Publications

Aspirational Platform Governance: How Creators Legitimise Content Moderation through Accusations of Bias

While content moderation began as a solution to online abuse, it has increasingly been framed as a source of abuse by a diverse coalition of users, civil society organisations, and politicians concerned with platform bias. The resulting crisis of legitimacy has motivated interest in more participatory forms of governance, yet such approaches are difficult to scale on platforms that lack bounded communities and designated tools to support collective governance. Within this context, we use a high-profile debate surrounding bias and racism in content moderation on YouTube to investigate how creators engage in meta-moderation, the participatory evaluation of moderation decisions and policies. We conceptualise the conversation that plays out across a network of videos and comments as aspirational platform governance, or the desire to influence content moderation without established channels or guarantees of success. Through a content analysis of 115 videos and associated online discourse, we identify overlapping and competing understandings of bias, with key fault lines around demographic categories of gender, race, and geography, as well as genres of production and channel size. We analyse how reaction videos navigate structural factors that inhibit discussions of platform practices and assess the functions of aspirational platform governance, including its counter-intuitive role in legitimising content moderation through the airing of complaints.

Copyright Callouts and the Promise of Creator-driven Platform Governance

Responding to frustrations with the enforcement of copyright on YouTube, some creators publish videos that discuss their experiences, challenge claims of infringement, and critique broader structures of content moderation. Platform callouts, or public complaints about the conduct of or on platforms, are one of the primary ways creators challenge the power imbalance between users and corporations. Through an analysis of 135 videos, we provide a rich empirical account of how creators publicly define the problem of copyright enforcement, propose solutions, and attribute responsibility to other creators, the platform, and external actors like media conglomerates. Creators criticise the prevalence of “false” copyright claims that ignore fair use or serve ulterior motives like harassment, censorship, and financial extortion, as well as the challenges of communicating with the platform. Drawing inspiration from organisational theory, we differentiate horizontal and vertical callouts according to the institutional positioning of the speaker and target. Horizontal callouts, or public complaints between peers, offer a mechanism for community self-policing, while vertical callouts, or public complaints directed towards organisations, provide a mechanism for influencing centralised content moderation policies and practices. We conclude with a discussion of the benefits and limitations of callouts as a strategy of creator-driven platform governance.

Projects

It’s Dangerous to Go Alone: Technocultures of Community Governance on Bluesky

Alternative platforms are experimenting with new forms of decentralized, human-driven content moderation, enabling users to play a greater role in shaping their experiences of social media. Bluesky is one of the most successful contenders as measured by active users. It also offers distinctive features for decentralized platform governance: tools for curation, like custom Feeds and Starter Packs (recommended lists of users that help people find a community and get started on the platform), and tools for content moderation, like Moderation Lists of accounts to block or mute and Label Services, which can independently annotate and act on accounts and content, at least for their subscribers. These features enable users to modulate visibility on Bluesky, suppressing some accounts while amplifying others in a thoroughly collaborative process between humans and machines that sharply contrasts with the automated forms of content moderation and recommendation ubiquitous on mainstream social media. Such tools constitute middleware, a new layer of content moderation that allows social media users to customize the algorithms powering their feeds. The distinctiveness of these features has prompted a conversation about platform governance on the platform itself, with users debating the merits, limitations, and values of centralized versus community-driven approaches to shaping algorithmic culture. This study examines the promise and practice of platform governance on Bluesky through a large-scale analysis of custom feeds, moderation lists, starter packs, and labeling services, as well as a discourse analysis of public commentary and the platform’s vision of governance as expressed in corporate blogs, public statements, and media coverage. In so doing, our paper offers an empirically rich analysis of alternative forms of platform governance, essential to understanding the social dynamics of Bluesky and the broader possibilities for content moderation beyond centralized and automated systems.
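To make these governance artifacts concrete, here is a minimal sketch of how a researcher might read one public Moderation List over Bluesky's XRPC API. The AppView host is Bluesky's documented public endpoint; the AT-URI and helper function are placeholders for illustration, not part of the study's actual pipeline.

    # Minimal sketch: reading a public Bluesky moderation list over XRPC.
    # The list AT-URI below is a placeholder; a real study would enumerate
    # list URIs at scale before fetching their members.
    import requests

    APPVIEW = "https://public.api.bsky.app/xrpc"

    def get_list_members(list_uri: str, limit: int = 50) -> list[dict]:
        """Fetch the member records of a public moderation or curation list."""
        resp = requests.get(
            f"{APPVIEW}/app.bsky.graph.getList",
            params={"list": list_uri, "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("items", [])

    example_uri = "at://did:plc:example/app.bsky.graph.list/abc123"
    for item in get_list_members(example_uri):
        print(item["subject"]["handle"])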

From Community Guidelines to Industry Standards: Mapping the Policy Priorities of Mainstream, Alternative, and Adult Live Content Platforms

Despite growing concern over the standardization of content moderation, there has been little empirical investigation beyond mainstream social media. We developed a novel approach to compare rules and policy priorities within Community Guidelines based on categories from the Trust and Safety Professionals Association. We focused on livestreaming, a particularly challenging format to moderate, and asked: what policies govern content? And how do mainstream, alternative, and adult content platforms differ? We analyzed 12 platforms and identified four orientations towards industry standards: the mainstream ideal, the regulatory competitor, the alternative ethos, and the overlooked concerns. These orientations partially map onto divisions between mainstream, alternative, and adult livestreaming platforms, allowing us to pinpoint different factors driving the adoption of industry standards. Finally, we discuss the tradeoff between free expression and sexual expression, highlight epistemological considerations regarding the use of policy documents, and conclude with an agenda for future comparative research.
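As a schematic illustration of this comparative approach, the sketch below builds a platform-by-category coverage matrix from hand-coded guideline rules. The platforms and category labels are illustrative stand-ins, not the TSPA taxonomy or the study's actual codebook.

    # Hypothetical sketch: comparing policy priorities as a coverage matrix.
    # Each pair records that a platform's Community Guidelines address a
    # category; the labels here are invented for illustration.
    import pandas as pd

    coded_rules = [
        ("Twitch", "harassment"), ("Twitch", "adult content"),
        ("Chaturbate", "adult content"), ("Chaturbate", "payment fraud"),
        ("DLive", "harassment"),
    ]
    df = pd.DataFrame(coded_rules, columns=["platform", "category"])
    coverage = pd.crosstab(df["platform"], df["category"])
    print(coverage)  # rows: platforms; columns: count of rules per category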

Community Notes as Participatory Strategy of Consumer Protection

X (then Twitter) launched Community Notes, a crowd-sourced fact-checking program, in 2021, allowing participants to attach “notes” that contextualize, contest, or clarify posts on the platform. While Community Notes engages in conventional fact-checking tasks of verifying news and political discourse, it also plays an important role in drawing attention to spam, scams, fraud, and other consumer protection issues on the platform. Through a combination of qualitative and computational text analysis of consumer protection-oriented Community Notes, we identify the types of consumer protection issues that the program flags, the sources of evidence participants use, and the relationship between the presence of community notes and other top-down content moderation responses (removal of post, removal of account). We then reflect on the potential and limitations of participatory approaches for addressing consumer harms on social media.
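For a sense of what the computational side of such an analysis can look like, here is a minimal sketch that filters the public Community Notes data release for consumer-harm vocabulary. The file name and keyword list are illustrative assumptions, not the study's coding scheme, and the column names follow the release schema at the time of writing.

    # Sketch: surfacing consumer-protection-oriented notes by keyword match.
    # Assumes the "notes" TSV from X's public Community Notes data release;
    # the path and keywords are placeholders, not the study's actual filters.
    import pandas as pd

    SCAM_KEYWORDS = ["scam", "phishing", "fraud", "counterfeit", "giveaway"]

    notes = pd.read_csv("notes-00000.tsv", sep="\t")
    mask = notes["summary"].str.contains(
        "|".join(SCAM_KEYWORDS), case=False, na=False
    )
    consumer_notes = notes[mask]
    print(f"{len(consumer_notes)} of {len(notes)} notes mention consumer-harm terms")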

Transactional Orders: How Platforms Structure Payments Between Creators and Fans

Subscription platforms like Patreon or OnlyFans, fundraising platforms like Kickstarter, and donation tools built into video platforms like Twitch or Stripchat reconfigure the relationship between creators, audiences, and platforms. While research has highlighted the impact of new monetization opportunities for creators and fans, the role of the platform has received comparatively less attention, hindered by a lack of shared terminology and comparative research and by the bracketing off of adult content platforms. We present an integrative framework for conceptualizing monetization on digital platforms, connecting anthropology’s long-standing concept of transactional orders to more recent work on platformization. We developed the transactional orders framework through an in-depth investigation of livestreaming and camming platforms: we surveyed the literature to identify relevant features, policies, and concepts related to monetization and conducted empirical research on three livestreaming and three camming platforms to develop platform-agnostic concepts and definitions. The transactional orders framework consists of payment paths, or mechanisms that facilitate the transmission of value between users, and measures of value, or commensurable representations of worth on the platform. We identified three primary payment paths (donations, subscriptions, and purchases) and three primary measures of value (tokens, social metrics, and rankings), as well as seven attributes to assess each component. We illustrate the value of the framework through a discussion of donation mechanisms across platforms.
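As a schematic, the framework's core components can be rendered as simple data structures. This is an illustrative sketch only: the field names are ours, and the Transaction type stands in for, rather than reproduces, the paper's seven assessment attributes.

    # Illustrative sketch: the transactional orders framework as data types.
    # The Transaction fields are invented stand-ins for the framework's
    # assessment attributes.
    from dataclasses import dataclass
    from enum import Enum

    class PaymentPath(Enum):
        DONATION = "donation"
        SUBSCRIPTION = "subscription"
        PURCHASE = "purchase"

    class MeasureOfValue(Enum):
        TOKEN = "token"
        SOCIAL_METRIC = "social metric"
        RANKING = "ranking"

    @dataclass
    class Transaction:
        """One monetization feature described in platform-agnostic terms."""
        platform: str
        feature_name: str
        path: PaymentPath
        measure: MeasureOfValue

    # e.g., Twitch "Bits" cheering modeled as a token-denominated donation
    bits = Transaction("Twitch", "Bits", PaymentPath.DONATION, MeasureOfValue.TOKEN)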

Legal Lore: The Cultural Transformation of Copyright Law and Policy on YouTube

Since YouTube’s launch, copyright has been at the center of conflict between creators, regulators, the platform, and mass media corporations. Yet the interests of creators are poorly represented in copyright policy, which favors corporate stakeholders. Largely excluded from formal mechanisms of policy participation, creators are left to make sense of complex enforcement systems through copyright gossip and callouts. Although informal strategies of gossip and callouts operate quite differently from the formal mechanisms of governance built into the platform’s infrastructure, the two are inextricably linked. To understand how creators relate to these different governing regimes, we develop the concept of legal lore, defined as cultural understandings of permissible behavior and appropriate redress for wrongs. We investigate how creators and fans understand the appropriate use of intellectual property and copyright reporting tools through a unique dataset of 154 copyright controversies documented on Wikitubia, a fan wiki dedicated to the platform. We identified four primary types of controversies. Interpersonal controversies involve disputes between creators and fans. Economic controversies involve disputes over money, merit, and working conditions. Political controversies involve the suppression of speech, especially criticism. Infrastructural controversies involve automation issues, typically in terms of YouTube’s Content ID system not working correctly. While legal lore can be more or less proximate to the law, the cultural understandings of copyright expressed on Wikitubia present a relatively autonomous vision of copyright abuse and the appropriate use of platform reporting tools, with only marginal ties to platform policy or copyright law.

The TikTok Caliphate: How Fundamentalist Islamists Exploit and Bypass TikTok’s Algorithm

Islamic terrorist organizations have increasingly turned to social media platforms to spread their propaganda. Examining their presence on TikTok, this study investigates how supporters of ISIS and Al-Qaeda exploit platform features to manipulate recommendation algorithms and evade automated and manual content moderation. Despite TikTok’s policies and national laws, little is known about the specific strategies these groups employ to increase the visibility of their messages within a purportedly hostile platform environment. Our qualitative analysis reveals five key strategies: Audio Camouflage (manipulating sound to evade detection), Meme Infiltration (embedding extremist content within pop culture references), Blurred Intent (masking visuals through blurring or digital distortion), Emoji Codes (using coded language and symbols to bypass moderation), and Bait-and-Switch (starting with harmless content before revealing extremist messaging). These methods demonstrate how extremists adapt to platform Community Guidelines while exposing the limitations of TikTok's moderation system and national enforcement mechanisms. The study underscores the urgent need for improved governance, culturally informed moderation practices, and collaborative efforts between platforms, governments, and educational systems to effectively combat online radicalization and extremism.

Team Members

Isabell Knief

Isabell Knief is an MA student at the University of Bonn and a visiting research fellow at the Hebrew University of Jerusalem. She examines how digital platforms (re)produce power relations in labor, creating new opportunities and vulnerabilities for workers, as well as posing novel regulatory challenges. Her master's thesis examines informal counter-practices that webcam models use to influence the algorithmic work environment and assert their interests.

CJ Reynolds

CJ Reynolds is a PhD Candidate in the Department of Communication and Journalism at the Hebrew University of Jerusalem. CJ researches the role of institutional mistrust in state and platform contexts, and the development of counterpower tactics to push for transparency and accountability from institutions.

Omer Rothenstein

Omer Rothenstein is an MA student in the Department of Communication and Journalism at the Hebrew University of Jerusalem. Holding a bachelor's degree in Computer Science and in Communication and Journalism, he studies how technology and society converge, with a focus on digital culture and internet platforms.

Dana Theiler

Dana Theiler is a dual BA student in Communication and Philosophy at the Hebrew University of Jerusalem. With experience as a marketing manager working with various social media platforms (TikTok, Instagram, Facebook, and more), she examines the power of social media for self-promotion, cross-platform promotional strategies, and the power relations between platforms and users.

Noa Niv

Noa Niv is an MA student in the Department of Communication and Journalism at the Hebrew University of Jerusalem. With a bachelor’s degree in Communication and Journalism and East Asian Studies, she explores cross-cultural interactions on social media, focusing on the dynamics between Western and East Asian individuals in the context of popular culture and online fandom.

Yehonatan Kuperberg

Yehonatan Kuperberg (Kuper) is an MA student in the Department of Communication and Journalism at the Hebrew University of Jerusalem. He holds a bachelor's degree in Communication and Journalism and Political Science, along with personal experience in TV and news production. He explores the relationship between traditional media producers and the people they cover or broadcast to, and how these groups perceive television texts and media production considerations.

Gilad Karo

Gilad Karo is an MA student in the Department of Communication and Journalism at the Hebrew University of Jerusalem, specializing in Internet and New Media, and holds a dual BA in Communications and International Relations. As a research assistant on multiple projects, he explores the intersection of politics, online radicalization, and international relations in the media. His work examines platform governance, the influence of digital spaces on global political dynamics, and the evolving challenges of regulating online discourse. With experience in both research and policy, he has interned at the Knesset and the INSS, connecting theoretical academic research to real-world policy applications.