Big Tech Facing Scrutiny Over Questions of Digital Responsibility

On Wednesday morning, Facebook, Google, and Twitter executives went to Capitol Hill to testify before the Senate Committee on Commerce, Science, and Transportation. The hearing, titled “Mass Violence, Extremism, and Digital Responsibility,” examined the significant role social media plays in spreading and promoting extremist and terrorist acts and rhetoric. It comes on the heels of a series of terrorist events facilitated by social media platforms, such as the Christchurch massacre in New Zealand, which was livestreamed on Facebook for 17 horrifying minutes.

Senators grilled the biggest players in the social media space on the strategies and policies they are employing to combat this worrying issue. In attendance were Monika Bickert, Facebook’s Head of Global Policy Management; Nick Pickles, Twitter’s Public Policy Director; Derek Slater, Google’s Global Director of Information Policy; and George Selim of the Anti-Defamation League (ADL).

The representatives from Facebook, Google, and Twitter all offered strong statements about their companies’ current efforts on this issue.

Bickert revealed that Facebook, in response to the Christchurch attack, enacted a “one-strike” policy on its live-streaming service, under which users who violate certain serious policies on the platform will not be able to use the service for a specified period of time.

Pickles stated at the hearing that Twitter had suspended over one million accounts related to terrorist activity between August 2015 and December 2018, with a little more than 370,000 of those suspensions occurring in 2018. He affirmed that more than 90% of the suspensions were a result of Twitter’s own proactive efforts.

YouTube (owned by Google) also touted its efforts to reduce hateful and inflammatory posts. In his testimony, Slater said that more than 87% of the nine million videos YouTube removed in 2019 were initially flagged by its own automated systems.

Advances in communication technology have undoubtedly made terrorism, and the ease with which it can be inspired, a more pressing and significant challenge. Social media companies, as gatekeepers of the world’s information networks, have a responsibility to adequately monitor and manage dangerous statements, videos, and pictures that can easily result in senseless deaths.

The blame, however, should not and cannot fall on one side; responsibility for the rise of terrorism rests on many shoulders. It is important to recognize that tightening security measures on these platforms, while a step in the right direction, is akin to putting a bandage on a bullet wound: it does not address the root of the problem. Stemming the tide of nationalism, controlling access to dangerous weapons, and electing leaders who do not fan the flames of extremist behavior are key steps toward stopping the terrorism epidemic.

Keeping a close eye on Big Tech’s anti-terrorism efforts is at least a partial solution, and digital responsibility will remain a thorn in the industry’s side for the foreseeable future.