WASHINGTON – False news stories spread much more quickly and widely on Twitter than truthful ones, an imbalance driven more by people than automated "bot" accounts, researchers said on Thursday.
A study by researchers at the Massachusetts Institute of Technology's Media Lab examining about 126,000 stories shared by some 3 million people on Twitter from 2006 to 2017 found that false news was about 70 percent more likely to be retweeted by people than true news.
The study, published in the journal Science, was one of the most comprehensive efforts to date to assess the dynamics behind how false news circulates on social media.
Twitter and other social media companies such as Facebook have been under scrutiny by U.S. lawmakers and international regulators for doing too little to prevent the spread of false content. U.S. officials have accused Russia of using social media to try to sow discord in the United States and interfere in the 2016 U.S. presidential election.
The stories examined in the study were reviewed by six independent fact-checking organizations including Snopes and Politifact to assess their veracity.
False stories spread significantly more quickly and broadly than true stories in all categories of information, but this was more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends or financial information, the researchers said.
They noted increases in false political stories during the 2012 and 2016 U.S. presidential races.
Though Twitter's allowance of bots has come under particular criticism, the MIT researchers found these automated accounts accelerated true and false news equally, meaning people were more directly responsible for the spread of false news.
MIT Media Lab researcher and study lead author Soroush Vosoughi said people may be more likely to share false news because it is more surprising, the same way that sensationalized "click bait" headlines garner more attention.
"One reason false news might be more surprising is, it goes against people's expectations of the world," Vosoughi said in an interview. "If someone makes up a rumor that goes against what they expected, you are more likely to pass it forward."
While the study focused on Twitter, the researchers said their findings likely also would apply to other social media platforms including Facebook.
A Twitter spokeswoman declined to comment on the study's findings, but pointed to tweets by company CEO Jack Dorsey last week pledging to "increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress."
Twitter provided funding and some data access to support the study.
The study's findings faulting humans more than bots for sharing false news surprised the researchers, who said they may next look for ways to help people cut down on the sharing of false stories.
"Let's not take it as our destiny," said Deb Roy, another of the researchers, "that we have entered into the post-truth world from which we will not emerge."