
Meta Struggles to Curb Hate Speech Before U.S. Vote: Researchers

Meta, the parent company of Facebook and Instagram, is facing significant challenges in managing hate speech on its platforms as the U.S. approaches the pivotal 2024 presidential election. Research conducted by the non-profit organization Global Witness, which was shared exclusively with the Thomson Reuters Foundation, highlights the inadequacies in Meta’s efforts to address harmful content in a timely manner.

Global Witness analyzed 200,000 comments across the social media pages of 67 U.S. Senate candidates from September 6 to October 6, focusing on how Facebook handles reports of hate speech. The findings were concerning: when researchers flagged 14 comments that egregiously violated Meta’s community standards, many containing offensive references to Muslim and Jewish individuals and derogatory insinuations about a candidate’s sexual orientation, it took Meta days to respond to the flags.

Ellen Judson, a researcher with Global Witness overseeing the investigation, commented on the findings, stating, “There was a real failure to promptly review these posts.” Although Meta did remove some of the flagged comments after being contacted directly by Global Witness, the slow response raised alarms about the platform’s ability to maintain a safe environment for political discourse during an election period.

This situation reflects a broader pattern of criticism that Meta has faced for years from various stakeholders, including researchers, watchdog organizations, and lawmakers. These groups argue that the company has not done enough to foster a healthy information ecosystem in the lead-up to elections globally. Just this past April, the European Commission initiated an investigation into whether Meta had violated EU online content regulations ahead of the European Parliament elections, underscoring ongoing scrutiny of its content moderation practices.

Judson highlighted the potential fallout from unchecked online hate speech, noting that “online abuse can have negative psychological impact” and may deter individuals from engaging in political activities. Seeing such discourse, she added, can make the political landscape feel unwelcoming to would-be participants. “A small amount of abuse can still do a lot of harm,” she remarked.

Critics have pointed to what they see as a lack of investment in election preparedness at Meta. Theodora Skeadas, a former public policy official at Twitter (now rebranded as X), said Meta has cut staff and resources dedicated to monitoring political content. Reports show the company has reduced its workforce across several teams, raising concerns about its capacity to oversee harmful content in the lead-up to the elections.

Despite Meta’s claims that hate speech represents a minimal presence on its platforms, approximately 0.02% on Facebook and 0.02%-0.03% on Instagram, experts argue that this statistic fails to capture the broader implications of exposure to hate speech. With Facebook and Instagram being the second and third most popular social media platforms in the United States, respectively, according to the Pew Research Center, the potential impact of such content could be far-reaching. More than a third of users rely on these platforms for news about current events, amplifying the stakes for content moderation.

In the second quarter of 2024, Meta announced that it took action against 7.2 million pieces of content for violating its hate speech policies and an additional 7.8 million for bullying and harassment violations. However, Jeff Allen, a former data scientist at Meta and co-founder of the non-profit Integrity Institute, criticized the platform’s automated systems for flagging hate speech, arguing that they frequently overlook nuanced contexts and can be easily manipulated by slang or indirect language.

Allen further pointed out that Meta tends to be cautious about aggressive content removal, fearing it could negatively impact user engagement. “If you are more aggressive about taking down content, you see engagement go down—there are trade-offs,” he explained.

In a bid to reassure stakeholders about its commitment to election integrity, Nick Clegg, Meta’s president of global affairs, claimed in a February blog post that no technology company invests more in protecting elections online than Meta, stating that the company has dedicated over $20 billion to these efforts. He emphasized the importance of transparency in political advertising and the necessity of combating hate groups operating on the platform.

Despite these assertions, recent reports have revealed ongoing issues with false advertising and election misinformation being permitted on Meta’s platforms. In October, Global Witness tests found paid advertisements containing election misinformation being approved for publication on Facebook, raising questions about the effectiveness of Meta’s ad review processes.

In one notable case, Facebook was found to be running ads that falsely claimed the U.S. election could be postponed or rigged, while other reports indicated that e-commerce companies were selling merchandise with similar falsehoods via Facebook. Meta responded to these allegations by stating it was reviewing the matters in question.

As calls for greater transparency grow louder, experts like Allen argue that Meta needs to provide clearer metrics on hate speech exposure, the frequency of posts submitted to human reviewers, and more detailed explanations of its automated moderation systems. The recent phasing out of CrowdTangle, a tool that researchers relied on to track viral misinformation, has further fueled frustration among advocacy groups and researchers. Although Meta claims to have introduced new tools for monitoring platform activity, critics assert that these changes do not adequately address the need for oversight.

Global Witness’s findings reveal a troubling lack of engagement from Meta, which did not respond to inquiries regarding the organization’s research, leaving observers in the dark about how the company is addressing hate speech in the critical weeks leading up to the U.S. election. Judson noted that the company’s approach appears to be reactive rather than proactive, stating, “For them, it’s always a ‘catch-up’ situation.”

As the 2024 election draws nearer, the spotlight is increasingly on Meta to demonstrate its commitment to mitigating hate speech and ensuring a safe environment for political discourse on its platforms. Without significant changes, researchers warn that the company’s failure to effectively combat hate speech may have dire implications for democratic engagement in the United States.

(Inputs from a Reuters report)

The Chenab Times News Desk
