Let’s take the fake out of our news!

The Fake News Highway - Image by John Iglar

By Kami Klein

There was a time when the news wasn’t so confusing. Before the internet, most families had their morning newspaper delivered conveniently to their door. To keep your business and stay competitive, newspapers battled over the facts and dug deeper through investigative journalism to reach the truth. Stories were presented without opinion and based on legitimate proof. Of course, just as internet news does today, a powerful headline didn’t hurt.

Once the workday was winding down, the evening news was delivered by well-respected television journalists such as Walter Cronkite and Tom Brokaw, who presented the unbiased facts, trusting in the ability of their audiences to ponder and come to their own conclusions. The news itself was taken quite seriously. The worst thing that could happen to a reporter was to be accused, or proven guilty, of dishonorable reporting. The goal was to be respected in the journalism field, not to rack up Facebook followers or tweet responses in a day, or to stay true to one’s personal beliefs. Becoming a journalist was a calling… not a path to fame.

Suddenly we have the internet highway, where everyone can have an opinion. Competition requires every outlet to be the fastest news source, which leaves little time for investigation or vetting, and stories often present only a portion of the facts, in many cases served to the public with a generous amount of opinion gravy poured on top. Conservative or liberal, it is rare to find an unbiased news source. Add to this confusing issue the hot topic of “fake news” and it is a wonder any of us really knows what is going on.

Every day, on social media across the world, fake news is often more prevalent in our feeds than stories that are actually true, or at least close to it. These articles are spread by the misconception that if it is on the internet, it must be true, or that because a story sits right in line with the personal beliefs of the reader, it must be correct. The share button gets a hit, and the lie continues on its journey. Where we used to be able to hold the reporter or journalist accountable for their information, the responsibility is now ours. In an age where anyone can post a news story, how do we take the fake out of our news?

There are several kinds of fake news on the internet. The following information comes from a story by MastersinCommunication.org called “The Truth about Fake News.” It is important for us to be able to identify and beware of the following:

 

  • Propaganda – News stories designed to disparage a candidate, promote a political cause, and mislead voters
  • Sloppy Journalism – Stories containing inaccurate information produced by writers and editors who have not properly vetted a story. Retractions, even when issued, do little to fix the problem, since the story has already spread and the damage is done.
  • Sensationalized Headlines – Often a story may be accurate but comes with a misleading or outrageous headline. Readers may not read past it, instead taking everything they need to know from the skewed title.
  • Clickbait – These stories are deliberately crafted to drive traffic to a website. Advertising dollars are at stake, and gullible readers fall for them by the millions.
  • Satire – Parody websites like The Onion and The Daily Mash produce satirical stories that are believed by uninformed readers. The stories are written as satire and not meant to be taken literally, but readers who do not check the website may never know.
  • Average Joe Reporting – Sometimes a person will post an eyewitness report that goes viral, but it may or may not be true. The classic example was a tweet by Eric Tucker in Austin, Texas in 2016. Posting a picture of a row of charter buses, Tucker surmised and tweeted that Trump protesters were being bused in to rally against the President-elect. The tweet was picked up by multiple media outlets, and by Mr. Trump himself, going viral in a matter of hours. The only problem is, it wasn’t true.

 The 2020 elections are upon us, and fake news will be used as a weapon. False news can destroy lives and ultimately do great harm to our country.

How do we beat these fakes and stop them? Here are some tools available to anyone who does not want to be duped by those attempting to manipulate us for power, to sow greater discord, or for money. If we can all take responsibility for what we share, we are one step closer to legitimate news.

HERE ARE QUICK TIPS FOR CHECKING LEGITIMACY OF A NEWS STORY

 

  1. Pay attention to the domain and URL – Many of these sites will use an address very close to that of a trusted news source. Endings like .com.co should raise your eyebrows and tip you off that you need to dig around more to see if the site can be trusted. This is true even when the site looks professional and has semi-recognizable logos. For example, abcnews.com is a legitimate news source, but abcnews.com.co is not, despite its similar appearance. (A small code sketch of this check follows this list.)
  2. Read the “About Us” section – Most sites will have plenty of information about the news outlet, the company that runs it, members of leadership, and the mission and ethics statement behind the organization. The language used here should be straightforward; if it is melodramatic and seems overblown, be skeptical. This is also where satire sites will let you know that what you are reading is meant only for entertainment. The laugh is on us when we take what they say as truth; they are counting on you NOT to check.
  3. HEADLINES CAN BE MISLEADING!! – Headlines are meant to get the reader’s attention, but they are also supposed to accurately reflect what the story is about. In fake stories, headlines are often written in exaggerated language with the intention of misleading. They are then attached to stories that tell half-truths, or that, read in full, show the headline’s claim never actually happened.
  4. Fact-checking can be your friend – Not only is fact-checking smart, but checking whether your particular news story leans to conservative or liberal points of view is just as important. Mediabiasfactcheck.com is one of my go-to places. It also provides a great list of highly recommended fact-checking sites, along with a wonderful list of news websites deemed non-biased.
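
A rough illustration of tip 1: the following minimal Python sketch shows the kind of domain check a careful reader could automate. The trusted-domain list and suspicious-suffix list here are illustrative assumptions, not an authoritative registry.

    # Sketch of the domain check from tip 1 (illustrative lists, stdlib only).
    from urllib.parse import urlparse

    TRUSTED_DOMAINS = {"reuters.com", "bbc.co.uk", "abcnews.go.com"}  # example entries
    SUSPICIOUS_SUFFIXES = (".com.co",)  # endings often used by lookalike sites

    def looks_suspicious(url: str) -> bool:
        """True if the host uses a lookalike suffix or is not a trusted domain."""
        host = urlparse(url).netloc.lower().split(":")[0]
        if host.endswith(SUSPICIOUS_SUFFIXES):
            return True
        # An exact match, or a subdomain, of a trusted domain passes the check.
        return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

    print(looks_suspicious("https://abcnews.com.co/fake-story"))  # True: lookalike
    print(looks_suspicious("https://www.reuters.com/world/"))     # False: trusted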

  While Facebook and Twitter are being held accountable for much of what is on our social media today, they will only succeed with our help. Together, we can take the fake out of the news and make responsible choices for our future!  

 

Exclusive: Echo chambers – Fake news fact-checks hobbled by low reach, study shows

FILE PHOTO: A general view of Facebook's elections operation centre in Dublin, Ireland May 2, 2019. REUTERS/Lorraine O'Sullivan/File Photo

By Alissa de Carbonnel

BRUSSELS (Reuters) – The European Union has called on Facebook and other platforms to invest more in fact-checking, but a new study shows those efforts may rarely reach the communities worst affected by fake news.

The analysis by big-data firm Alto Data Analytics over a three-month period ahead of this year’s EU elections casts doubt on the effectiveness of fact-checking even though demand for it is growing.

Facebook has been under fire since Russia used it to influence the election that brought Donald Trump to power. The company quadrupled the number of fact-checking groups it works with worldwide over the last year and its subsidiary WhatsApp launched its first fact-checking service.

The EU, which has expanded its own fact-checking team, urged online platforms to take greater action or risk regulation.

Fact-checkers are often journalists who set up non-profits or work at mainstream media outlets to scour the web for viral falsehoods. Their rebuttals in the form of articles, blog posts and Tweets seek to explain how statements fail to hold up to scrutiny, images are doctored or videos are taken out of context.

But there is little independent research on their success in debunking fake news or preventing people from sharing it.

“The biggest problem is that we have very little data … on the efficacy of various fact-checking initiatives,” said Nahema Marchal, a researcher at the Oxford Internet Institute.

“We know from a research perspective that fact-checking isn’t always as efficient as we might think,” she said.

Alto looked at more than two dozen fact-checking groups in five EU nations and found they had a minimal online presence – making up between 0.1% and 0.3% of the total number of retweets, replies, and mentions analyzed on Twitter from December to March.

The Alto study points to a problem fact-checkers have long suspected: they are often preaching to the choir.

It found that online communities most likely to be exposed to junk news in Germany, France, Spain, Italy and Poland had little overlap with those sharing fact-checks.

PATCHWORK

The European Parliament election yielded a patchwork of results. The far-right made gains but so did liberal and green parties, leaving pro-European groups in control of the assembly.

The EU found no large-scale, cross-border attempts to sway voters but warned of hard-to-detect home-grown operations.

Alto analyzed abnormal, hyperactive users making dozens of posts per day to deduce which political communities were most tainted by suspect posts in each country.

Less than 1% of users – mostly sympathetic to populist and far-right parties – generated around 10% of the total posts related to politics.

They flooded networks with anti-immigration, anti-Islam and anti-establishment messages, Alto found in results that echoed separate studies by campaign group Avaaz and the Oxford Internet Institute on the run-up to the European election.
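
Alto has not published its method, but a minimal sketch of this kind of hyperactivity screen might look like the following Python, where the data layout and the 50-posts-per-day threshold are assumptions for illustration.

    # Sketch: flag hyperactive accounts (dozens of posts per day) and measure
    # their share of all posts. Data layout and threshold are assumed.
    from collections import Counter

    # Stand-in for a real export: one (user_id, day) pair per post.
    posts = [("u1", "2019-01-15")] * 120 + [("u2", "2019-01-15")] * 3

    per_user_day = Counter(posts)  # posts per (user, day)
    hyperactive = {user for (user, _day), n in per_user_day.items() if n >= 50}

    share = sum(user in hyperactive for user, _ in posts) / len(posts)
    print(f"{len(hyperactive)} hyperactive account(s) produced {share:.0%} of posts")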

Fact-checkers, seeking to counter these messages, had little penetration in those same communities.

In Poland, junk news made up 21% of traffic, compared with an average of 4% circulating on Twitter in seven major European languages over the month before the vote, according to the Oxford study; there, content issued by fact-checkers was mainly shared among those opposed to the ruling Law and Justice party.

The most successful posts by six Polish fact-checkers scrutinized campaign finance, the murder of a prominent opposition politician and child abuse by the Catholic church.

Italy, where an anti-establishment government has been in power for a year, and Spain, where far-right newcomer Vox is challenging center parties, also saw content from fact-checkers unevenly spread across political communities.

More than half of the retweets, mentions or replies to posts shared by seven Italian fact-checking groups – mostly related to immigration – came from users sympathetic to the center-left Democratic Party (PD).

Only two of the seven groups had any relatively sizeable footprint among supporters of Deputy Prime Minister Matteo Salvini’s far-right League party, which surged to become the third-biggest in the new EU legislature.

Italian fact-checker Open.Online, for example, had 4,594 retweets, mentions or replies among PD sympathizers compared to 387 among League ones.

French fact-checking groups, who are mostly embedded in mainstream media, fared better. Their content, which largely sought to debunk falsehoods about President Emmanuel Macron, was the most evenly distributed across different online communities.

In Germany, only 2.2% of Twitter users mapped in the study retweeted, replied or mentioned the content distributed by six fact-checking groups.

Alto’s research faces constraints. The focus on publicly available Twitter data may not accurately reflect the whole online conversation across various platforms, the period of study stops short of the May elections, and there are areas of dispute over what constitutes disinformation.

It also lacks data from Facebook, which is not alone among internet platforms but whose dominance puts it in the spotlight.

FREE SPEECH

Facebook says once a post is flagged by fact-checkers, it is downgraded in users’ news feeds to limit its reach, and if users try to share it, they receive a warning. Repeat offenders will see distribution of their entire page restricted, resulting in a loss of advertising revenue.

“It should be seen less, shared less,” Richard Allen, Facebook’s vice president for global policy, told reporters visiting a “war room” in Dublin set up to safeguard the EU vote.

Facebook cites free speech concerns over deleting content. It will remove posts seeking to suppress voter turnout by advertising the wrong date for an election, for example, but says in many other cases it is difficult to differentiate between blatantly false information and partisan comment. 

“We don’t feel we should be removing contested claims even when we believe they may be false,” Allen said. “There are a lot of concepts being tested because we don’t know what is going to work.”
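
A minimal sketch of the flag-and-downgrade flow described above might look like the following Python; the names, demotion factor, and strike threshold are hypothetical, not Facebook’s actual implementation.

    # Hypothetical flag-and-downgrade flow; names and thresholds are invented.
    from dataclasses import dataclass

    STRIKE_LIMIT = 3  # assumed repeat-offender threshold

    @dataclass
    class Page:
        name: str
        strikes: int = 0

        @property
        def restricted(self) -> bool:
            # Restricted pages lose distribution and advertising revenue.
            return self.strikes >= STRIKE_LIMIT

    def on_fact_check_flag(page: Page, rank_score: float) -> float:
        """Record a strike and demote the flagged post in the feed ranking."""
        page.strikes += 1
        return rank_score * 0.2  # "seen less, shared less": demote, don't delete

    def on_share_attempt(post_flagged: bool) -> str:
        return "show warning before sharing" if post_flagged else "share normally"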

The rapid spread of fake news on social media has raised the profile of fact-checking groups, and it is forcing them to rethink how they work.

If they once focused on holding politicians to account, fact-checkers are now seeking to influence a wider audience.

Clara Jiménez, co-founder of Maldita.es, a Spanish fact-checking group partnered with Facebook, mimics the methods used by those spreading false news. That means going viral with memes and videos.

Maldita.es focuses largely on WhatsApp and asks people to send fact-checks back to those in their networks who first spread the fake news.

“You need to try to reach real people,” said Jiménez, who also aims to promote better media literacy. “One of the things we have been asked several times is whether people can get pregnant from a mosquito bite. If people believe that, we have a bigger issue.”

(Additional reporting by Thomas Escritt in Berlin and Conor Humphries in Dublin; Writing by Alissa de Carbonnel; Editing by Giles Elgood)

Factbox: ‘Fake News’ laws around the world

Commuters walk past an advertisement discouraging the dissemination of fake news at a train station in Kuala Lumpur, Malaysia March 28, 2018. REUTERS/Stringer

SINGAPORE (Reuters) – Singapore’s parliament on Monday began considering a law on “fake news” that an internet watchdog has called the world’s most far-reaching, stoking fears the government could use additional powers to choke freedom of speech and chill dissent.

Governments and companies worldwide are increasingly worried about the spread of false information online and its impact on everything from share prices to elections and social unrest.

Human rights activists fear laws to curb so-called “fake news” could be abused to silence opposition.

Here are details of such laws around the world:

SINGAPORE

Singapore’s new law would require social media sites like Facebook to carry warnings on posts the government deems false and remove comments against the “public interest”.

Singapore, which ranks 151 among 180 countries rated by the World Press Freedom Index, defines “public interests” as threats to its security, foreign relations, electoral integrity and public perception of the government and state institutions.

Violations could attract fines of up to S$1 million ($737,500) and 10 years in prison.

RUSSIA

Last month, President Vladimir Putin signed into law tough new fines for Russians who spread what the authorities regard as fake news or who show “blatant disrespect” for the state online.

Critics have warned the law could aid state censorship, but lawmakers say it is needed to combat false news and abusive online comment.

Authorities may block websites that do not meet requests to remove inaccurate information. Individuals can be fined up to 400,000 roubles ($6,109) for circulating false information online that leads to a “mass violation of public order”.

FRANCE

France passed two anti-fake news laws last year to rein in false information during election campaigns, following allegations of Russian meddling in the 2017 presidential vote.

President Emmanuel Macron vowed to overhaul media laws to fight “fake news” on social media, despite criticism that the move was a risk to civil liberties.

GERMANY

Germany passed a law last year requiring social media companies, such as Facebook and Twitter, to quickly remove hate speech.

Called NetzDG for short, the law is the most ambitious effort by a Western democracy to control what appears on social media. It enforces Germany’s tough curbs on hate speech online, including pro-Nazi ideology, by giving sites a 24-hour deadline to remove banned content or face fines of up to 50 million euros.

Since it was adopted, however, German officials have said too much online content was being blocked, and are weighing changes.

MALAYSIA

Malaysia’s ousted former government was among the first to adopt a law against fake news, which critics say was used to curb free speech ahead of last year’s general elections, which it lost. The measure was seen as a tool to fend off criticism over graft and mismanagement of funds by then prime minister Najib Razak, who now faces charges linked to a multibillion-dollar scandal at state fund 1Malaysia Development Berhad.

The new government’s bid to deliver on an election promise to repeal the law was blocked by the opposition-led Senate, however.

EUROPEAN UNION

The European Union and authorities worldwide will have to regulate big technology and social media companies to protect citizens, European Commission deputy head Frans Timmermans said last month.

EU heads of state will urge governments to share information on threats via a new warning system, launched by the bloc’s executive. They will also call for online platforms to do more to remove misleading or illegal content.

Union-level efforts have been limited by different election rules in each member nation and qualms over how vigorously regulators can tackle misleading content online.

(Reporting by Fathin Ungku; Editing by Clarence Fernandez; and Joe Brock)

NewsGuard’s ‘real news’ seal of approval helps spark change in fake news era

Facebook CEO Mark Zuckerberg is surrounded by members of the media as he arrives to testify before a Senate Judiciary and Commerce Committees joint hearing regarding the company’s use and protection of user data, on Capitol Hill in Washington, U.S., April 10, 2018. REUTERS/Leah Millis TPX IMAGES OF THE DAY

By Kenneth Li

NEW YORK (Reuters) – More than 500 news websites have made changes to their standards or disclosures after getting feedback from NewsGuard, a startup that created a credibility ratings system for news on the internet, the company told Reuters this week.

The latest major news organization to work with the company is Britain’s Daily Mail, according to NewsGuard, which upgraded what it calls its “nutrition label” rating on the paper’s site to “green” on Thursday, indicating it “generally maintains basic standards of accuracy and accountability.”

A representative of the Daily Mail did not respond to several requests for comment.

NewsGuard markets itself as an independent arbiter of credible news. It was launched last year by co-chief executives Steven Brill, a veteran U.S. journalist who founded Brill’s Content and the American Lawyer, and Gordon Crovitz, a former publisher of News Corp’s Wall Street Journal.

NewsGuard joins a handful of other groups such as the Trust Project and the Journalism Trust Initiative which aim to help readers discern which sites are credible when many readers have trouble distinguishing fact from fiction.

After facing anger over the rapid spread of false news in the past year or so, Facebook Inc and other tech companies also say they have recruited more human fact checkers to identify and sift out some types of inaccurate articles.

These efforts were prompted at least in part by the 2016 U.S. presidential election when Facebook and other social media sites were used to disseminate many false news stories.

The company has been criticized by Breitbart News, a politically conservative site, which described NewsGuard as “the establishment media’s latest effort to blacklist alternative media sites.”

NewsGuard works like this: if a user installs NewsGuard’s free software from the web, red or green shield-shaped labels are visible in the browser window when looking at a news website. The software works with the four leading browsers: Google’s Chrome, Microsoft Corp’s Edge, Mozilla’s Firefox and Apple Inc’s Safari.

‘CALL EVERYONE FOR COMMENT’

NewsGuard’s investors include the French advertising company Publicis Groupe SA and the non-profit Knight Foundation. Thomas Glocer, the former chief executive of Thomson Reuters, owns a smaller stake, according to NewsGuard’s website. News sites do not pay the company for its service.

The startup said it employs 35 journalists who have reviewed and published labels on about 2,200 sites based on nine journalistic criteria, such as whether the site presents information responsibly, has a habit of correcting errors, and discloses its ownership and who is in charge of the content.

News sites can, if they choose, field questions from NewsGuard journalists about their performance on the nine criteria.

“We call everyone for comment, which algorithms don’t do,” Brill said in an interview, highlighting the difference between NewsGuard’s verification process and the computer code used by Alphabet Inc’s Google and Facebook to bring news stories to the attention of users.

Some news organizations have clarified their ownership, financial backers and identity of their editorial staff after interacting with the company, NewsGuard said.

GateHouse Media, which publishes more than 140 local newspapers such as the Austin American-Statesman and Akron Beacon Journal, made changes to how it identifies sponsored content that may appear to be objective reporting but is actually advertising, after being contacted by NewsGuard.  

“We made our standards and practices more prominent and consistent across our 460 digital news brands across the country,” said Jeff Moriarty, GateHouse’s senior vice president of digital.

Reuters News, which earned a green rating on all nine of NewsGuard’s criteria, added the names and titles of its editorial leaders to the Reuters.com website after being contacted by NewsGuard, a Reuters spokesperson said.

NewsGuard upgraded the Daily Mail’s website rating on Thursday to green after giving it a red label in August, when it stated that the site “repeatedly publishes false information and has been forced to pay damages in numerous high-profile cases.”

The Daily Mail objected to that description, and started discussions with NewsGuard in January after the red label became visible for mobile users of Microsoft’s Edge browser, NewsGuard said.

NewsGuard has made public many details of its exchange with the Daily Mail on its website.

“We’re not in the business of trying to give people red marks,” Brill said. “The most common side effect of what we do is for news organizations to improve their journalistic practices.”

(Reporting by Kenneth Li; editing by Bill Rigby)

Facebook, Google to tackle spread of fake news, advisors want more

FILE PHOTO - Commuters walk past an advertisement discouraging the dissemination of fake news at a train station in Kuala Lumpur, Malaysia March 28, 2018. REUTERS/Stringer

By Foo Yun Chee

BRUSSELS (Reuters) – Facebook, Google, and other tech firms have agreed on a code of conduct to do more to tackle the spread of fake news, due to concerns it can influence elections, the European Commission said on Wednesday.

Intended to stave off more heavy-handed legislation, the voluntary code covers closer scrutiny of advertising on accounts and websites where fake news appears, and working with fact checkers to filter it out, the Commission said.

But a group of media advisors criticized the companies, also including Twitter and lobby groups for the advertising industry, for failing to present more concrete measures.

With EU parliamentary elections scheduled for May, Brussels is anxious to address the threat of foreign interference during campaigning. Belgium, Denmark, Estonia, Finland, Greece, Poland, Portugal, and Ukraine are also due to hold national elections next year.

Russia has faced allegations – which it denies – of disseminating false information to influence the U.S. presidential election and Britain’s referendum on European Union membership in 2016, as well as Germany’s national election last year.

The Commission told the firms in April to draft a code of practice or face regulatory action over what it said was their failure to do enough to remove misleading or illegal content.

European Digital Commissioner Mariya Gabriel said on Wednesday that Facebook, Google, Twitter, Mozilla, and advertising groups – which she did not name – had responded with several measures.

“The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and …we welcome this,” she said in a statement.

The steps also include rejecting payment from sites that spread fake news, helping users understand why they have been targeted by specific ads, and distinguishing ads from editorial content.

But the advisory group criticized the code, saying the companies had not offered measurable objectives to monitor its implementation.

“The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” the group said, giving no further details.

Its members include the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics.

(Reporting by Foo Yun Chee; editing by Philip Blenkinsop and John Stonestreet)

Russian ‘fake news’ machine going mad, says French envoy to U.S.

FILE PHOTO: French Ambassador to the U.N. Gerard Araud addresses the Security Council during a meeting about the situation in the Middle East, including Palestine, at United Nations headquarters in New York, July 22, 2014. REUTERS/Eduardo Munoz

PARIS (Reuters) – France’s envoy to the United States on Tuesday accused Moscow of spreading fake news after Russia’s Defence Ministry said a French frigate in the Mediterranean had launched missiles on Syria.

The ministry initially said a Russian military plane had been shot down by Israeli warplanes and that Russian air control radar systems had detected rocket launches from the French frigate Auvergne.

The ministry later said the aircraft had been shot down by Syrian anti-aircraft systems in what President Vladimir Putin said was the result of tragic and chance circumstances.

“Russian fake news machine getting mad: accusing the French to have shot down a Russian plane (in fact victim of a Syrian « friend(ly) » fire),” France’s ambassador to Washington, Gerard Araud tweeted, in English.

French army spokesman Patrik Steiger denied that France had been involved in the incident or fired any missiles, but several hours later Russian media continued to ask the question.

Quoting a military expert, Tass news agency said Paris was partly at fault after launching cruise missiles from the Auvergne.

France’s presidency, Foreign Ministry and Defence Ministry had yet to respond officially to the Russian assertions.

(Reporting by John Irish; Editing by Leigh Thomas)

Majority of Americans think social media platforms censor political views: Pew survey

FILE PHOTO: A young couple look at their phone as they sit on a hillside after sun set in El Paso, Texas, U.S., June 20, 2018. REUTERS/Mike Blake

By Angela Moon

NEW YORK (Reuters) – About seven out of ten Americans think social media platforms intentionally censor political viewpoints, the Pew Research Center found in a study released on Thursday.

The study comes amid an ongoing debate over the power of digital technology companies and the way they do business. Social media companies in particular, including Facebook Inc and Alphabet Inc’s Google, have recently come under scrutiny for failing to promptly tackle the problem of fake news as more Americans consume news on their platforms.

In the study of 4,594 U.S. adults, conducted between May 29 and June 11, roughly 72 percent of the respondents believed that social media platforms actively censored political views those companies found objectionable.

The perception that technology companies were politically biased and suppressed political speech was especially widespread among Republicans, the study showed.

About 85 percent of Republicans and Republican-leaning independents in the survey thought it was likely for social media sites to intentionally censor political viewpoints, with 54 percent saying it was “very” likely.

Sixty-four percent of Republicans also thought major technology companies as a whole supported the views of liberals over conservatives.

A majority of the respondents, or 51 percent, said technology companies should be regulated more than they are now, while only 9 percent said they should be regulated less.

(Reporting by Angela Moon; Editing by Bernadette Baum)

When a text can trigger a lynching: WhatsApp struggles with incendiary messages in India

Satish Bhaykre, 21, who was beaten by a mob due to a fake WhatsApp text, poses inside his house on the outskirts of Nagpur, India, June 23, 2018. Picture taken June 23, 2018. REUTERS/Stringer

By Sankalp Phartiyal, Subrat Patnaik and David Ingram

MUMBAI (Reuters) – A WhatsApp text circulating in some districts of India’s central Madhya Pradesh state helped to inflame a mob of 50-60 villagers into savagely beating up two innocent men last week on suspicion that they were going to murder people and sell their body parts.

The essence of the message, written in Hindi, was that 500 people disguised as beggars were roaming the area so that they could kill people to harvest their organs. The message also urged recipients to forward it to friends and family. Police say the message was fake.

Police officers who joined several local WhatsApp groups found three men circulating the message; the men were arrested, said Jayadevan A, the police chief for Balaghat district, where the incident occurred.

This happened just weeks after a WhatsApp text warning of 400 child traffickers arriving in the southern Indian technology hub of Bengaluru led a frenzied mob to lynch a 26-year-old man, a migrant construction worker from another Indian state, on suspicion that he was a kidnapper. He was attacked while simply walking down the road.

So far this year, false messages about child abductors on Facebook Inc-owned WhatsApp have helped to trigger mass beatings of more than a dozen people in India – at least three of whom have died. In addition, fake messages about child snatchers on Facebook, as well as some texts on WhatsApp, also led to the lynching of two men in eastern India earlier this month.

WHATSAPP’S BIGGEST MARKET

With more than 200 million users in India, WhatsApp’s biggest market in the world, false news and videos circulating on the messaging app have become a new headache for social media giant Facebook, already grappling with a privacy scandal.

In India, a country with over a billion phone subscribers with access to cheap mobile data, false news messages and videos can instantly go viral, creating mass hysteria and stirring up communal tensions.

Those tensions can be high between the majority Hindu community and the minority Muslim population, but also within the rigid Hindu caste hierarchy, where the so-called Dalits at the bottom of the pyramid have faced attacks for trying to improve their position in society.

In 2017, at least 111 people were killed and 2,384 injured in 822 communal incidents in the country, according to the federal home affairs ministry. It is unclear whether any of these incidents were triggered by fake news messages.

WhatsApp said it is aware of the incidents in India through media coverage.

“Sadly some people also use WhatsApp to spread harmful misinformation,” WhatsApp said in a statement. “We’re stepping up our education efforts so that people know about our safety features and how to spot fake news and hoaxes.”

Group texts, where fake news spreads most easily, are still a minority: 90 percent of messages are between two people, and the average group size is six people, according to the messaging platform.

WhatsApp also said it is considering changes to the service. For example, there is now a public beta test that is labeling any forwarded message.

The company is not planning any changes to its encryption, which ensures messages are not read by anyone except the sender and the recipient.

Facebook did not respond to a request for comment.

Two senior Indian government officials told Reuters that New Delhi had engaged with WhatsApp on the issue but they are not allowed to discuss the matter publicly. WhatsApp declined to comment on possible contact with Indian government officials.

Indian ministries of IT, home affairs and information and broadcasting did not respond to requests for comment.

PRIVACY CONCERNS

A deluge of hoax news incidents, several with fatal consequences, may bolster the Indian government’s attempts to get social networks to share more user data so that police can track down those spreading rumors. That concerns privacy advocates who fear the authorities will use such access against activists and political opponents, and not just against those spreading malicious information.

“Government restrictions on dissemination of false news are too often an attempt to shroud government intentions of restricting freedom of expression and criticism,” according to David Kaye, United Nations Special Rapporteur on the Right to Freedom of Opinion and Expression.

India’s Ministry of Information and Broadcasting has also recently floated a tender for a firm to scrutinize social media posts of Indian users and identify fake news.

The Indian authorities have been signaling they will take an increasingly harsh line with foreign companies providing Internet services in India.

India’s central bank in April issued a directive compelling all payments firms operating in the country to store payments data locally within six months for “unfettered supervisory access”. Separately, Prime Minister Narendra Modi’s government is working on a data protection law that could force all foreign tech firms to store key Indian user data locally.

“There is a distinct link between fake news and laws being proposed undermining privacy,” said Apar Gupta, a co-founder of the advocacy group Internet Freedom Foundation.

Meanwhile, the inflammatory hoax news messages keep coming.

One circulating in Bengaluru last month warned parents to take “extra measures towards the safety” of children during the Muslim holy month of Ramadan, as they remain busy with prayers and shopping.

More than 500 kidnappers have entered the southern state of Karnataka from western Rajasthan state and the cities of Chennai and Hyderabad, the message said.

WhatsApp messages on organ thieves or child abductions are just the tip of the iceberg though – fake reports can range from incorrect medical advice to news about top jobs.

A recent message circulating in India’s northeast starts by saying the deadly brain-damaging Nipah virus has arrived in Shillong city and advises parents to keep children away from lychees, a popular summer fruit. No confirmed cases of Nipah have been found yet outside of southern Kerala state.

(Reporting by Sankalp Phartiyal, Subrat Patnaik and David Ingram; additional reporting by Nidhi Verma in New Delhi, Derek Francis and Sangameswaran S in Bengaluru; Edited by Martin Howell)

CEO Zuckerberg says Facebook could have done more to prevent misuse

FILE PHOTO: Facebook CEO Mark Zuckerberg speaks on stage during the Facebook F8 conference in San Francisco, California, U.S., April 12, 2016. REUTERS/Stephen Lam/File Photo

By Dustin Volz and David Shepardson

WASHINGTON (Reuters) – Facebook Inc Chief Executive Mark Zuckerberg told Congress on Monday that the social media network should have done more to prevent itself and its members’ data being misused and offered a broad apology to lawmakers.

His conciliatory tone precedes two days of Congressional hearings where Zuckerberg is set to answer questions about Facebook user data being improperly appropriated by a political consultancy, and about the role the network played in the 2016 U.S. election.

“We didn’t take a broad enough view of our responsibility, and that was a big mistake,” he said in remarks released by the U.S. House Energy and Commerce Committee on Monday. “It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.”

Zuckerberg, surrounded by tight security and wearing a dark suit and a purple tie rather than his trademark hoodie, was meeting with lawmakers on Capitol Hill on Monday ahead of his scheduled appearance before two Congressional committees on Tuesday and Wednesday.

Zuckerberg did not respond to questions as he entered and left a meeting with Senator Bill Nelson, the top Democrat on the Senate Commerce Committee. He is expected to meet Senator John Thune, the Commerce Committee’s Republican chairman, later in the day, among others.

Top of the agenda in the forthcoming hearings will be Facebook’s admission that the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica.

But lawmakers are also expected to press him on a range of issues, including the 2016 election.

“It’s clear now that we didn’t do enough to prevent these tools from being used for harm…” his testimony continued. “That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.”

Facebook, which has 2.1 billion monthly active users worldwide, said on Sunday it plans to begin on Monday telling users whose data may have been shared with Cambridge Analytica. The company’s data practices are under investigation by the U.S. Federal Trade Commission.

London-based Cambridge Analytica, which counts U.S. President Donald Trump’s 2016 campaign among its past clients, has disputed Facebook’s estimate of the number of affected users.

Zuckerberg also said that Facebook’s major investments in security “will significantly impact our profitability going forward.” Facebook shares were up 2 percent in midday trading.

ONLINE INFORMATION WARFARE

Facebook has about 15,000 people working on security and content review, rising to more than 20,000 by the end of 2018, Zuckerberg’s testimony said. “Protecting our community is more important than maximizing our profits,” he said.

As with other Silicon Valley companies, Facebook has been resistant to new laws governing its business. But on Friday it backed proposed legislation requiring social media sites to disclose the identities of buyers of online political campaign ads, and introduced a new verification process for people buying “issue” ads, which do not endorse any candidate but have been used to exploit divisive subjects such as gun laws or police shootings.

The steps are designed to deter online information warfare and election meddling that U.S. authorities have accused Russia of pursuing, Zuckerberg said on Friday. Moscow has denied the allegations.

Zuckerberg’s testimony said the company was “too slow to spot and respond to Russian interference, and we’re working hard to get better.”

He vowed to make improvements, adding it would take time, but said he was “committed to getting it right.”

A Facebook official confirmed that the company had hired a team from the law firm WilmerHale and outside consultants to help prepare Zuckerberg for his testimony and how lawmakers may question him.

(Reporting by David Shepardson and Dustin Volz; Editing by Bill Rigby)

India drops plan to punish journalists for “fake news” following outcry

FILE PHOTO: Television journalists report from the premises of India's Parliament in New Delhi, India, February 13, 2014. REUTERS/Adnan Abidi/File Photo

By Manoj Kumar

NEW DELHI (Reuters) – Indian Prime Minister Narendra Modi on Tuesday ordered the withdrawal of rules punishing journalists held responsible for distributing “fake news”, giving no reason for the change, less than 24 hours after the original announcement.

The move followed an outcry by journalists and opposition politicians that the rules represented an attack on the freedom of the press and an effort by Modi’s government to rein in free speech ahead of a general election due by next year.

Late on Monday, the Information and Broadcasting Ministry had said the government would cancel its accreditation of journalists who peddled “fake news”.

After Modi’s intervention, the ministry announced the withdrawal in a one-line statement.

Journalists said they welcomed the withdrawal but could not rule out the possibility that it was a “trial balloon” to test the waters for putting more restrictions on the press.

“A government fiat restraining the fourth pillar of our democracy is not the solution,” a statement issued by the Press Club said.

Co-opted by U.S. President Donald Trump, the term “fake news” has quickly become part of the standard repertoire of leaders in authoritarian countries to describe media reports and organisations critical of them.

Welcoming the change of heart, media groups in India nevertheless cautioned the government against changing its mind.

“The government has no mandate to control the press,” Gautam Lahiri, president of the Press Club of India, told journalists.

The events in India followed Malaysia’s approval this week of a law carrying jail terms of up to six years for spreading “fake news”.

Other countries in Southeast Asia, including Singapore and the Philippines, are considering how to tackle “fake news” but human rights activists fear laws against it could be used to stifle free speech.

India slipped three places last year to rank 136 among 180 countries rated in the world press freedom index of the watchdog Reporters Without Borders.

The non-profit body said Hindu nationalists, on the rise since Modi’s Bharatiya Janata Party swept to power in 2014, were “trying to purge all manifestations of anti-national thought”.

(Reporting by Manoj Kumar; Editing by Raju Gopalakrishnan and Nick Macfie)