Social Media censors Congresswoman Greene

Romans 1:18 “The wrath of God is being revealed from heaven against all the godlessness and wickedness of people, who suppress the truth by their wickedness”

Important Takeaways:

  • Marjorie Taylor Greene Says Facebook Censoring Her After Twitter Ban
  • “Facebook has joined Twitter in censoring me. This is beyond censorship of speech,” Greene said on social media platform Gettr.
  • In her social media post, Greene suggested that Facebook blocked her account because she made a post regarding COVID-19 vaccine numbers provided by the Centers for Disease Control and Prevention.
  • “Who appointed Twitter and Facebook to be the authorities of information and misinformation? When Big Tech decides what political speech of elected Members is accepted and what’s not then they are working against our government and against the interest of our people,” Greene added.

Read the original article by clicking here.

Big Tech CEOs told ‘time for self-regulation is over’ by U.S. lawmakers

By Diane Bartz and Elizabeth Culliford

WASHINGTON (Reuters) – The chief executives of Facebook, Google and Twitter appeared before Congress on Thursday to answer questions about extremism and misinformation on their services in their first appearances since rioters assaulted the U.S. Capitol on Jan. 6.

Facebook Inc Chief Executive Mark Zuckerberg; Sundar Pichai, chief executive of Google parent Alphabet Inc; and Twitter Inc CEO Jack Dorsey are testifying before the joint hearing by two subcommittees of the House Energy and Commerce Committee.

Lawmakers began the hearing by criticizing the social media platforms for their role in the riot and in the spread of COVID-19 vaccine misinformation, and raised concerns about children’s mental health.

“You failed to meaningfully change after your platform has played a role in fomenting insurrection and abetting the spread of the virus and trampling American civil liberties,” said Democratic Representative Frank Pallone, chair of the Energy and Commerce committee.

“Your business model itself has become the problem and the time for self-regulation is over. It’s time we legislate to hold you accountable,” he added.

Some lawmakers are calling for Section 230 of the Communications Decency Act, which shields online platforms from liability over user content, to be scrapped or rejigged. There are several pieces of legislation from Democrats to reform Section 230 that are doing the rounds in Congress, though progress has been slow. Several Republican lawmakers have also been pushing separately to scrap the law entirely.

In written testimony released on Wednesday, Facebook argued that Section 230 should be revised to grant companies immunity from liability for what users put on their platforms only if they follow best practices for removing damaging material.

Facebook’s Zuckerberg said polarization in the country was not the fault of social media: “I believe that the division we see today is primarily the result of a political and media environment that drives Americans apart.”

Republicans on the panel also criticized the tech giants for what they see as efforts to stifle conservative voices.

Former President Donald Trump was banned from Twitter for inciting violence around the Jan. 6 riot, while Facebook has asked its independent oversight board to rule on whether to bar him permanently. He is still suspended from YouTube.

The three CEOs have all appeared in front of Congress before, with Facebook’s Zuckerberg clocking up seven appearances since 2018.

Lawmakers’ scrutiny of misinformation on major online platforms intensified after U.S. intelligence agencies said Russia used them to interfere in the 2016 presidential election.

(Reporting by Diane Bartz in Washington and Elizabeth Culliford in New York; Additional reporting by Nandita Bose in Washington; Editing by Sonya Hepinstall and Lisa Shumaker)

Fake news makes disease outbreaks worse, study finds

By Kate Kelland

LONDON (Reuters) – The rise of “fake news” – including misinformation and inaccurate advice on social media – could make disease outbreaks such as the COVID-19 coronavirus epidemic currently spreading in China worse, according to research published on Friday.

In an analysis of how the spread of misinformation affects the spread of disease, scientists at Britain’s University of East Anglia (UEA) said any successful efforts to stop people sharing fake news could help save lives.

“When it comes to COVID-19, there has been a lot of speculation, misinformation and fake news circulating on the internet – about how the virus originated, what causes it and how it is spread,” said Paul Hunter, a UEA professor of medicine who co-led the study.

“Misinformation means that bad advice can circulate very quickly – and it can change human behavior to take greater risks,” he added.

In their research, Hunter’s team focused on three other infectious diseases – flu, monkeypox and norovirus – but said their findings could also be useful for dealing with the COVID-19 coronavirus outbreak.

“Fake news is manufactured with no respect for accuracy, and is often based on conspiracy theories,” Hunter said.

For the studies – published on Friday in separate peer-reviewed journals – the researchers created theoretical simulations of outbreaks of norovirus, flu and monkeypox.

Their models took into account studies of real behavior, how different diseases are spread, incubation periods and recovery times, and the speed and frequency of social media posting and real-life information sharing.

They also took into account how lower trust in authorities is linked to tendency to believe conspiracies, how people interact in “information bubbles” online, and the fact that “worryingly, people are more likely to share bad advice on social media than good advice from trusted sources,” Hunter said.

The researchers found that a 10% reduction in the amount of harmful advice being circulated has a mitigating impact on the severity of an outbreak, while making 20% of a population unable to share harmful advice has the same positive effect.
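The mechanism the researchers describe can be illustrated with a toy model. The sketch below is not the UEA team’s actual simulation; it is a minimal, hypothetical SIR-style model in which a fraction of the population follows harmful advice and therefore behaves more riskily, raising the effective transmission rate. All parameter values and function names are illustrative assumptions.

```python
# Toy illustration (NOT the UEA model): a deterministic SIR epidemic in
# which people exposed to harmful advice have riskier contact behavior.
# Shrinking the share of circulating bad advice shrinks the outbreak.

def final_size(bad_advice_share, days=300, beta=0.3, gamma=0.1, risk_boost=0.5):
    """Run a simple daily-step SIR model and return the cumulative
    fraction of the population ever infected (the outbreak's final size).

    bad_advice_share -- fraction of people acting on harmful advice
    risk_boost       -- how much harmful advice inflates transmission
    """
    s, i, r = 0.999, 0.001, 0.0  # susceptible, infected, recovered
    # Effective transmission rate blends normal and risk-boosted contacts.
    beta_eff = beta * (1 + risk_boost * bad_advice_share)
    for _ in range(days):
        new_inf = beta_eff * s * i   # new infections this day
        new_rec = gamma * i          # recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r

baseline = final_size(bad_advice_share=0.40)
# A 10% reduction in circulating harmful advice, as in the study's scenario:
reduced = final_size(bad_advice_share=0.40 * 0.9)
print(f"final size: baseline {baseline:.3f}, with 10% less bad advice {reduced:.3f}")
```

Even in this crude sketch, trimming the circulating harmful advice lowers the outbreak’s final size, which is the qualitative effect the study reports; the real models additionally account for incubation periods, posting frequency, and trust dynamics.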

(Reporting by Kate Kelland; Editing by Frances Kerry)

Mass shooting rumor in Facebook Group shows private chats are not risk-free

By Bryan Pietsch

WASHINGTON (Reuters) – Ahead of the annual Blueberry Festival in Marshall County, Indiana, in early September, a woman broadcast a warning to her neighbors on Facebook.

“I just heard there’s supposed to be a mass shooting tonight at the fireworks,” the woman, whose name is withheld to protect her privacy, said in a post in a private Facebook Group with over 5,000 members. “Probably just a rumor or kids trying to scare people, but everyone keep their eyes open,” she said in the post, which was later deleted.

There was no shooting at the Blueberry Festival that night, and the local police said there was no threat.

But the post sparked fear in the community, with some group members canceling their plans to attend, and shows the power of rumors in Facebook Groups, which are often private or closed to outsiders. Groups allow community members to quickly spread information, and possibly misinformation, to users who trust the word of their neighbors.

These groups and other private features, rather than public feeds, are “the future” of social media, Facebook Inc <FB.O> Chief Executive Mark Zuckerberg said in April, revealing their importance to Facebook’s business model.

The threat of misinformation spreading rapidly in Groups shows a potential vulnerability in a key part of the company’s growth strategy. It could push Facebook to invest in expensive human content monitoring at the risk of limiting the ability to post in real time, a central benefit of Groups and Facebook in general that has attracted millions of users to the platform.

When asked if Facebook takes accountability for situations like the one in Indiana, a company spokeswoman said it is committed to maintaining groups as a safe place, and that it encourages people to contact law enforcement if they see a potential threat.

Facebook Groups can also serve as a tool for connecting social communities around the world, such as ethnic groups, university alumni and hobbyists.

Facebook’s WhatsApp messaging platform faced similar but more serious problems in 2018, when false messages about child abductors led to mass beatings of more than a dozen people in India, some of whom died. WhatsApp later limited message forwards and began labeling forwarded messages to quell the risk of fake news.

FIREWORKS FEAR

The Blueberry Festival post caused chaos in the group, named “Local News Now 2…(Marshall and all surrounding Counties).”

In another post, which garnered over 100 comments of confusion and worry, a different member urged the woman to report the threat to the police. “This isn’t something to joke about or take lightly,” she wrote.

The author of the original post did not respond to repeated requests for comment.

Facebook’s policy is to remove language that “incites or facilitates serious violence,” the company spokeswoman said, adding that it did not remove the post and that it did not violate Facebook’s policies because there “was no threat, praise or support of violence.”

Cheryl Siddall, the founder of the Indiana group, said she would welcome tools from Facebook to give her greater “control” over what people post in the group, such as alerts to page moderators if posts contain certain words or phrases.

But Siddall said, “I’m sorry, but that’s a full-time job to sit and monitor everything that’s going on in the page.”

A Facebook spokeswoman said group administrators have the ability to remove a post if it violates the group’s rules, and that administrators can pre-approve individual posts as well as turn on post approvals for individual group members.

In a post to its blog, Facebook urged administrators to write “great group rules” to “set the tone for your group and help prevent member conflict,” as well as “provide a feeling of safety for group members.”

David Bacon, chief of police for the Plymouth Police Department in Marshall County, said the threat was investigated and traced back to an exaggerated rumor from children. Nonetheless, he said the post to the Facebook group is “what caused the whole problem.”

“One post grows and people see it, and they take it as the gospel, when in actuality you can throw anything you want out there,” Bacon said.

(Reporting by Bryan Pietsch; Editing by Chris Sanders)

Instagram adds tool for users to flag false information

SAN FRANCISCO (Reuters) – Instagram is adding an option for users to report posts they think are false, the company announced on Thursday, as the Facebook-owned photo-sharing site tries to stem misinformation and other abuses on its platform.

Posting false information is not banned on any of Facebook’s suite of social media services, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.

Facebook started using image-detection on Instagram in May to find content debunked on its flagship app and also expanded its third-party fact-checking program to the app.

Results rated as false are removed from places where users seek out new content, like Instagram’s Explore tab and hashtag search results.

Facebook has 54 fact-checking partners working in 42 languages, but the program on Instagram is only being rolled out in the United States.

“This is an initial step as we work toward a more comprehensive approach to tackling misinformation,” said Stephanie Otway, a Facebook company spokeswoman.

Instagram has largely been spared the scrutiny associated with its parent company, which is in the crosshairs of regulators over alleged Russian attempts to spread misinformation around the 2016 U.S. presidential election.

But an independent report commissioned by the Senate Select Committee on Intelligence found that Instagram was “perhaps the most effective platform” for Russian actors trying to spread false information since the election.

Russian operatives appeared to shift much of their activity to Instagram, where engagement outperformed Facebook, wrote researchers at New Knowledge, which conducted the analysis.

“Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” they said.

It has also come under pressure to block health hoaxes, including posts trying to dissuade people from getting vaccinated.

Last month, UK-based charity Full Fact, one of Facebook’s fact-checking partners, called on the company to provide more data on how flagged content is shared over time, expressing concerns over the effectiveness of the program.

(Reporting by Elizabeth Culliford and Katie Paul; Editing by Cynthia Osterman)

Facebook fakers get better at covering tracks, security experts say


By Christopher Bing

WASHINGTON (Reuters) – Creators of fake accounts and news pages on Facebook are learning from their past mistakes and making themselves harder to track and identify, posing new challenges in preventing the platform from being used for political misinformation, cybersecurity experts say.

This was apparent as Facebook tried to determine who created pages it said were aimed at sowing dissension among U.S. voters ahead of congressional elections in November. The company said on Tuesday it had removed 32 fake pages and accounts from Facebook and Instagram involved in what it called “coordinated inauthentic behavior.”

While the United States improves its efforts to monitor and root out such intrusions, the intruders keep getting better at it, said cybersecurity experts interviewed over the past two days.

Ben Nimmo, a senior fellow at the Washington-based Digital Forensic Research Lab, said he had noticed the latest pages used less original language, instead cribbing from copy already on the internet.

“Linguistic mistakes would give them away before, between 2014 and 2017,” Nimmo told Reuters. “In some of these newer cases it seems they’ve caught on to that by writing less (original material) when posting things. With their longer posts sometimes it’s just pirated, copy and pasted from some American website. That makes them less suspicious.”

Facebook’s prior announcement on the topic of fake accounts, in April, directly connected a Russian group known as the Internet Research Agency to a myriad of posts, events and propaganda that were placed on Facebook leading up to the 2016 U.S. presidential election.

This time, Facebook did not identify the source of the misinformation.

“It’s clear that whoever set up these accounts went to much greater lengths to obscure their true identities than the Russian-based Internet Research Agency (IRA) has in the past,” the company said in a blog post on Tuesday announcing the removal of the pages. “Our technical forensics are insufficient to provide high confidence attribution at this time.”

Facebook said it had shared evidence connected to the latest flagged posts with several private sector partners, including the Digital Forensic Research Lab, an organization founded by the Atlantic Council, a Washington think tank.

Facebook also said the use of virtual private networks, internet phone services, and domestic currency to pay for advertisements helped obfuscate the source of the accounts and pages. The perpetrators also used a third party, which Facebook declined to name, to post content.

Facebook declined to comment further, referring back to its blog post.

U.S. President Donald Trump’s top national security aides said on Thursday that Russia is behind “pervasive” attempts to interfere in November’s elections and that they expect attempts by Russia, and others, will continue into the 2020 elections.

They say they are concerned that attempts will be made to foment confusion and anger among various political groups in the United States and cause a distrust of the electoral process.

Two U.S. intelligence officials who requested anonymity told Reuters this week there was insufficient evidence to conclude that Russia was behind the latest Facebook campaign. However, one said, “the similarities, aims and methodology relative to the 2016 Russian campaign are quite striking.”

‘PREVIOUS MISTAKES’

Experts who track online disinformation campaigns said the groups who launch such efforts have changed how they post content and create posts.

“These actors are learning from previous mistakes,” said John Kelly, chief executive of social media intelligence firm Graphika, adding they do not use the same internet addresses or pay in foreign currency.

“And as more players in the world learn these dark arts, it’s easier for them to hide among the multiple actors deploying the same playbook,” he said.

Philip Howard, an Oxford University professor of internet studies and director of the Oxford Internet Institute, said that suspicious social media accounts like those taken down this week were once more easily identifiable because they shared the same information from high-profile publications like RT, the Russian English-language news service, or Breitbart News Network.

But now, the content they often share is more diverse and less discernible, coming from lesser known sites, including internet forums that mix political news with other topics, he said.

“The junk news they’re sharing is using better quality images, for example, more believable domains, less-known websites, smaller blogs,” Howard added.

U.S. intelligence agencies have concluded that Russia meddled in the 2016 presidential campaign using tactics including fake Facebook accounts. The Internet Research Agency was one of three Russian companies charged in February by U.S. Special Counsel Robert Mueller with conspiracy to tamper with the 2016 election.

Moscow has denied any election interference.

(Reporting by Christopher Bing in Washington; Additional reporting by John Walcott; Editing by Damon Darlin and Frances Kerry)

Facebook to emphasize friends, not news, in series of changes


By David Ingram and Paul Sandle

SAN FRANCISCO/LONDON (Reuters) – Facebook Inc on Thursday began to change the way it filters posts and videos on its centerpiece News Feed, the start of what Chief Executive Mark Zuckerberg said would be a series of changes in the design of the world’s largest social network.

Zuckerberg, in a sweeping post on Facebook, said the company would change the filter for the News Feed to prioritize what friends and family share, while reducing the amount of non-advertising content from publishers and brands.

Facebook, which owns four of the world’s most popular smartphone apps including Instagram, has for years prioritized material that its complex computer algorithms think people will engage with through comments, “likes” or other ways of showing interest.

Zuckerberg, the company’s 33-year-old co-founder, said that would no longer be the goal.

“I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions,” Zuckerberg wrote.

The shift was likely to mean that the time people spend on Facebook and some measures of engagement would go down in the short term, he wrote, but he added it would be better for users and for the business over the long term.

Advertising on the social network would be unaffected by the changes, John Hegeman, a Facebook vice president, said in an interview.

Facebook and its social media competitors have been inundated by criticism that their products reinforce users’ views on social and political issues and lead to addictive viewing habits, raising questions about possible regulation and the businesses’ long-term viability.

The company has been criticized for algorithms that may have prioritized misleading news and misinformation in people’s feeds, influencing the 2016 American presidential election, as well as political discourse in many countries.

Last year, Facebook disclosed that Russian agents had used the network to spread inflammatory posts to polarize the American electorate.

Congress is expected to hold more hearings this month, questioning the role social media platforms like Facebook, Twitter Inc <TWTR.N> and Alphabet Inc’s <GOOGL.O> YouTube play in spreading propaganda.

Zuckerberg said an overhaul of the company’s products, beginning with changes to the algorithms that control the News Feed, would help to address those concerns. Similar changes will be made to other products in the coming months, he said.

“We feel a responsibility to make sure our services aren’t just fun to use, but also good for people’s well-being,” Zuckerberg wrote. (http://bit.ly/2CSkTW6)

With more than 2 billion monthly users, Facebook is the world’s largest social media network. It is also among the world’s largest corporations, reporting $36 billion in revenue, mostly from advertising, during the 12 months that ended on Sept. 30.

A shift away from non-ad content produced by businesses is a potentially severe blow to news organizations, many of which use Facebook to drive readership, but Zuckerberg said many such posts have been unhealthy.

“Some news helps start conversations on important issues. But too often today, watching video, reading news or getting a page update is just a passive experience,” he wrote.

(Reporting by David Ingram in San Francisco and Paul Sandle in London; Editing by Sandra Maler and Lisa Shumaker)