How Twitter And Facebook Dealt With Fake News This Election Season

This week, Twitter has been particularly aggressive in its labeling of posts that contain misinformation, including those from the president.

By Shelly Brisbin | November 5, 2020, 10:52 a.m.

In 2016, Facebook drew sharp criticism for allowing political misinformation, or fake news, to spread on its platforms. Facebook and Twitter officials have said repeatedly that they are clamping down on news coming from questionable sources and lies spread by politicians.

Since Election Day, Twitter has labeled tweets from President Donald Trump and members of his inner circle as inaccurate. Still, both platforms have been criticized for not doing enough – and for doing too much. Tech expert Omar Gallaga told Texas Standard that social media platforms announced in advance of the 2020 election that they would take action against misinformation that conflicted with their stated posting policies.

Platforms have labeled as misleading posts containing misinformation, including those from Trump, his press secretary and a Democratic official in Wisconsin who falsely claimed victory in the presidential election before all votes were counted. Twitter, Gallaga said, has been especially aggressive in adding misinformation labels.

What you’ll hear in this segment:

– How Facebook, Twitter and YouTube approach misinformation flagging differently

– How supporters of the president have tried to circumvent misinformation labeling

– What techniques social platforms use to moderate content
