Pastor Jack Hibbs Censored off YouTube

Matthew 5:10 “Blessed are those who are persecuted for righteousness’ sake, for theirs is the kingdom of heaven.”

Important Takeaways:

  • Calvary Chapel Chino Hills Pastor Jack Hibbs took to Twitter to post an urgent message to his congregants and followers. In his video he starts, “It has happened…they don’t want Bible and they don’t want truth so they nailed us.”
  • The reason? They were told it was copyright infringement. Hibbs says officials at YouTube cannot show or tell them what those infringements are, but says A&E and Telemundo claim that sermons aired on their networks belong to them and requested his account be taken down.
  • “A&E, they had something against us. They raised the flag to YouTube and then they shut us down. It makes no sense whatsoever,” he says.
  • Once they found out their channel had been wiped, Hibbs asked YouTube to explain the problem. YouTube refused the request. “They are stonewalling us,” Hibbs says.
  • All of his sermons, teachings and videos were taken down and can no longer be accessed. Hibbs is declaring an “information war,” as he is certainly not the first, and most likely not the last, conservative to be removed from Big Tech’s platforms.

Read the original article by clicking here.

White House sees YouTube, Facebook as ‘Judge, Jury & Executioner’ on vaccine misinformation

By Nandita Bose

WASHINGTON (Reuters) – The White House has YouTube, not just Facebook, on its list of social media platforms officials say are responsible for an alarming spread of misinformation about COVID vaccines and are not doing enough to stop it, sources familiar with the administration’s thinking said.

The criticism comes just a week after President Joe Biden called Facebook and other social media companies “killers” for failing to slow the spread of misinformation about vaccines. He has since softened his tone.

A senior administration official said one of the key problems is “inconsistent enforcement.” YouTube – a unit of Alphabet Inc’s Google – and Facebook get to decide what qualifies as misinformation on their platforms. But the results have left the White House unhappy.

“Facebook and YouTube… are the judge, the jury and the executioner when it comes to what is going on in their platforms,” an administration official said, describing their approach to COVID misinformation. “They get to grade their own homework.”

Some of the main pieces of vaccine misinformation the Biden administration is fighting include false claims that the COVID-19 vaccines are ineffective, that they carry microchips and that they hurt women’s fertility, the official said.

Social media companies have come under fire recently from Biden, his press secretary, Jen Psaki, and Surgeon General Vivek Murthy, who have all said the spread of lies about vaccines is making it harder to fight the pandemic and save lives.

A recent report from the Center for Countering Digital Hate (CCDH), which has also been highlighted by the White House, showed 12 anti-vaccine accounts are spreading nearly two-thirds of anti-vaccine misinformation online. Six of those accounts are still posting on YouTube.

“We would like to see more done by everybody” to limit the spread of inaccurate information from those accounts, the official said.

The fight against vaccine misinformation has become a top priority for the Biden administration at a time when the pace of vaccinations has slowed considerably despite the risk posed by the Delta variant, with people in many parts of the country hostile to being vaccinated.

The requests to Facebook and YouTube come after the White House reached out to Facebook, Twitter and Google in February about clamping down on COVID misinformation, seeking their help to stop it from going viral, another senior administration official said then.

“Facebook is the 800-pound gorilla in the room when it comes to vaccine misinformation… but Google has a lot to answer for and somehow manages to get away with it always because people forget they own YouTube,” said Imran Ahmed, CCDH founder and chief executive.

YouTube spokeswoman Elena Hernandez said that since March 2020, the company has removed over 900,000 videos containing COVID-19 misinformation and terminated YouTube channels of people identified in the CCDH report. She said the company’s policies are based on the content of the video, rather than the speaker.

“If any remaining channels mentioned in the report violate our policies, we will take action, including permanent terminations,” she said.

On Monday, YouTube also said it will add more credible health information as well as tabs for viewers to click on.

The senior administration official cited four issues on which the administration has asked Facebook to provide specific data, but the company has been reluctant to comply.

These include how much vaccine misinformation exists on its platform, who is seeing the inaccurate claims, what the company is doing to reach out to them and how Facebook knows the steps it is taking are working.

The official said the answers Facebook has given are not “good enough.” Facebook spokesman Kevin McAlister said the company has removed over 18 million pieces of COVID-19 misinformation since the start of the pandemic and that its own data shows that for people in the United States using the platform, vaccine hesitancy has declined by 50% since January and vaccine acceptance is high.

In a separate blog post last Saturday, Facebook called on the administration to stop “finger-pointing,” laying out the steps it had taken to encourage users to get vaccinated.

But the administration official said the blog post did not have any metrics of success. The Biden administration’s broad concern is that the platforms are “either lying to us and hiding the ball, or they’re not taking it seriously and there isn’t a deep analysis of what’s going on in their platforms,” the official said. “That calls any solutions they have into question.”

(Reporting by Nandita Bose; Editing by Chris Sanders and Dan Grebler)

YouTube reinforces guidelines on fighting misleading election content

(Reuters) – Alphabet Inc’s YouTube on Monday reinforced its guidelines on tackling fake or misleading election-related content on its platform as the United States gears up for the presidential election later this year.

YouTube will remove any content that has been “technically doctored” or manipulated or misleads the user about the voting process or makes false claims about a candidate, it said in a blog post.

Google and YouTube have been making changes to their platforms and moderating content as technology and social media companies come under fire for their role in spreading fake news, especially during elections.

While Google has said outright that it would remove election-related misleading content, Facebook Inc has announced limited changes to political ads on its platform.

Twitter Inc banned political ads in November, including those that reference a political candidate, party, election or legislation, in a push to ensure transparency.

Google and YouTube also prohibit certain kinds of misrepresentation in ads, such as misinformation about public voting procedures, political candidate eligibility based on age or birthplace or incorrect claims that a public figure has died.

(Reporting by Neha Malara in Bengaluru; Editing by Anil D’Silva)

Google’s YouTube to pay $170 million penalty for collecting data on kids

FILE PHOTO: Silhouettes of mobile device users are seen next to a screen projection of Youtube logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration

By Diane Bartz

WASHINGTON (Reuters) – Google, which is owned by Alphabet Inc, and its YouTube video service will pay $170 million to settle allegations that they broke federal law by collecting personal information about children, the Federal Trade Commission said on Wednesday.

YouTube had been accused of tracking viewers of children’s channels using cookies without parental consent and using those cookies to deliver millions of dollars in targeted advertisements to those viewers.

The settlement with the FTC and the New York attorney general’s office, which will receive $34 million, is the largest since a law banning the collection of information about children under age 13 was enacted in 1998. The law was revised in 2013 to include “cookies,” which are used to track a person’s internet viewing habits.

It is also small compared with the company’s revenues. Alphabet, which generates about 85% of its revenue from sales of ad space and ad technology, in July reported total second-quarter revenue of $38.9 billion.

YouTube said in a statement on Wednesday that in four months it would begin treating all data collected from people watching children’s content as if it came from a child. “This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service,” YouTube said on its blog.

FTC’s Bureau of Consumer Protection director Andrew Smith said at a news conference Wednesday the settlement “is changing YouTube’s business model, that YouTube cannot bury its head in the sand, YouTube cannot pretend that it is not aware of the content on its platform and hope to escape liability.”

Once the settlement takes effect, the FTC plans to “conduct a sweep of the YouTube platform to determine whether there remains child-directed content” in which personal information is being collected, Smith said. The FTC could take actions against individual content creators or channel owners as a result.

In late August, YouTube announced it would launch YouTube Kids with separate niches for children depending on their ages and designed to exclude disturbing videos. It has no behavioral advertising.

YouTube allows companies to create channels, which include advertisements that create revenue for both the company and YouTube.

In its complaint, the government said that YouTube touted its popularity with children in marketing itself to companies like Mattel and Hasbro. It told Mattel that “YouTube is today’s leader in reaching children age 6-11 against top TV channels,” according to the complaint.

“YouTube touted its popularity with children to prospective corporate clients,” FTC Chairman Joe Simons said in a statement. “Yet when it came to complying with (federal law banning collecting data on children), the company refused to acknowledge that portions of its platform were clearly directed to kids.”

New York Attorney General Letitia James said the companies “abused their power.”

“Google and YouTube knowingly and illegally monitored, tracked, and served targeted ads to young children just to keep advertising dollars rolling in,” said James.

In addition to the monetary fine, the proposed settlement requires the company to refrain from violating the law in the future and to notify channel owners about their obligations to get consent from parents before collecting information on children.

The two Democrats on the FTC, Rebecca Slaughter and Rohit Chopra, dissented from the settlement. Slaughter, who called the violations “widespread and brazen,” said the settlement fails to require YouTube to police channels that provide children’s content but do not designate it as such, thus allowing more lucrative behavioral advertising, which relies on tracking viewers through cookies.

Senators Ed Markey and Richard Blumenthal, both Democrats active in online privacy matters, criticized the settlement in separate statements.

“A financial settlement is no substitute for strict reforms that will stop Google and other tech companies from invading our privacy,” Blumenthal said. “I continue to be alarmed by Big Tech’s policies and practices that invade children’s lives.”

(Reporting by Diane Bartz; Additional reporting by David Shepardson; Editing by Nick Zieminski and Marguerita Choy)

Twitter, Facebook accuse China of using fake accounts to undermine Hong Kong protests

FILE PHOTO: A 3-D printed Facebook logo is seen in front of displayed binary code in this illustration picture, June 18, 2019. REUTERS/Dado Ruvic/Illustration/File Photo

By Katie Paul and Elizabeth Culliford

(Reuters) – Twitter Inc and Facebook Inc said on Monday they had dismantled a state-backed information operation originating in mainland China that sought to undermine protests in Hong Kong.

Twitter said it suspended 936 accounts and that the operation appeared to be a coordinated state-backed effort originating in China. It said these accounts were just the most active portions of the campaign and that a “larger, spammy network” of approximately 200,000 accounts had been proactively suspended before they were substantially active.

Facebook said it had removed accounts and pages from a small network after a tip from Twitter. It said that its investigation found links to individuals associated with the Chinese government.

Social media companies are under pressure to stem illicit political influence campaigns online ahead of the U.S. election in November 2020. A 22-month U.S. investigation concluded Russia interfered in a “sweeping and systematic fashion” in the 2016 U.S. election to help Donald Trump win the presidency.

The Chinese embassy in Washington and the U.S. State Department were not immediately available to comment.

The Hong Kong protests, which have presented one of the biggest challenges for Chinese President Xi Jinping since he came to power in 2012, began in June as opposition to a now-suspended bill that would allow suspects to be extradited to mainland China for trial in Communist Party-controlled courts. They have since swelled into wider calls for democracy.

Twitter in a blog post said the accounts undermined the legitimacy and political positions of the protest movement in Hong Kong.

Examples of posts provided by Twitter included a tweet from a user with photos of protesters storming Hong Kong’s Legislative Council building, which asked: “Are these people who smashed the Legco crazy or taking benefits from the bad guys? It’s a complete violent behavior, we don’t want you radical people in Hong Kong. Just get out of here!”

In examples provided by Facebook, one post called the protesters “Hong Kong cockroaches” and claimed that they “refused to show their faces.”

In a separate statement, Twitter said it was updating its advertising policy and would not accept advertising from state-controlled news media entities going forward.

Alphabet Inc’s YouTube video service told Reuters in June that state-owned media companies maintained the same privileges as any other user, including the ability to run ads in accordance with its rules. YouTube did not immediately respond to a request for comment on Monday on whether it had detected inauthentic content related to protests in Hong Kong.

(Reporting by Katie Paul in Aspen, Colorado, and Elizabeth Culliford in San Francisco; Additional reporting by Sayanti Chakraborty in Bengaluru; Editing by Lisa Shumaker)

Study shows cute kids are YouTube clickbait; child advocates concerned

FILE PHOTO: 2019 Kids Choice Awards – Arrivals – Los Angeles, California, U.S., March 23, 2019 – YouTube star Ryan ToysReview. REUTERS/Danny Moloshok/File Photo

By Arriana McLymore

NEW YORK (Reuters) – YouTube videos featuring young children drew nearly triple the average viewership of other content, according to research released on Thursday that provided ammunition for child advocates who want Alphabet Inc’s Google to take more aggressive steps to make its streaming service safer for kids.

Pew Research Center said its findings show videos aimed at or featuring children are among YouTube’s most popular materials, attracting an outsized audience relative to the number uploaded.

Lawmakers and parent groups have criticized YouTube in recent years, saying it has done less than it should to protect minors’ privacy.

Last year, the Center for Digital Democracy and the Campaign for a Commercial-Free Childhood filed a complaint with the Federal Trade Commission (FTC), saying YouTube’s parent company violated the Children’s Online Privacy Protection Act.

The groups complained that the company “has not only made a vast amount of money by using children’s personal information” but has also “profited from advertising revenues from ads on its YouTube channels that are watched by children.”

YouTube, which announced 2 billion monthly users in May, shares limited data about its service. But music, gaming and kids’ content generally have been known to rank highly in viewership.

Other groups have called on YouTube to take more steps to block access to age-inappropriate content and prevent predators from getting to clips that could allow them to sexualize minors. Complaints also prompted YouTube to introduce punishments for parents uploading videos in which kids are placed in dangerous situations.

The video unit has become a major driver of revenue growth at Alphabet Inc, and it has said that it is weighing additional changes to how it handles content related to kids.

Pew researchers said in a report that they used automated tools and human review to analyze activity during the first week of 2019 on nearly 44,000 YouTube channels with more than 250,000 subscribers.

Just 2% of the 243,000 videos those channels uploaded that week featured at least one individual who looked under 13 years old to human reviewers. But the small subset received an average of 298,000 views, compared with 97,000 for videos without children, according to the report. The median viewership figures were about 57,000 and 14,000.

Channels that uploaded at least one video featuring a child averaged 1.8 million subscribers, compared to 1.2 million for those that did not, Pew said.

YouTube said it could not comment on Pew’s survey methods or results. It maintained that the most popular categories are comedy, music, sports and “how to” videos.

“We have always been clear YouTube has never been for people under 13,” the company added.

Popular videos with children included those with parenting tips or children singing or dressing up.

YouTube’s policies ban children under 13 from using its main service and instead direct them to its curated YouTube Kids app. But many parents use the main YouTube service to entertain or educate children, other research has found.

(Reporting by Arriana McLymore in New York; Additional reporting by Paresh Dave in San Francisco; Editing by David Gregorio)

French Muslim group sues Facebook, YouTube over Christchurch footage

FILE PHOTO: A woman reacts at a make shift memorial outside the Al-Noor mosque in Christchurch, New Zealand March 23, 2019. REUTERS/Edgar Su

PARIS (Reuters) – One of the main groups representing Muslims in France said on Monday it was suing Facebook and YouTube, accusing them of inciting violence by allowing the streaming of footage of the Christchurch massacre on their platforms.

The French Council of the Muslim Faith (CFCM) said the companies had disseminated material that encouraged terrorism, and harmed the dignity of human beings. There was no immediate comment from either company.

The shooting at two mosques in New Zealand on March 15, which killed 50 people, was livestreamed on Facebook for 17 minutes and then copied and shared on social media sites across the internet.

Relatives and neighbours carry the coffin of Syed Areeb Ahmed, who was killed in the Christchurch mosque attack in New Zealand, during a funeral in Karachi, Pakistan, March 25, 2019. REUTERS/Akhtar Soomro

Facebook said it raced to remove hundreds of thousands of copies.

But a few hours after the attack, footage could still be found on Facebook, Twitter and Alphabet Inc’s YouTube, as well as Facebook-owned Instagram and WhatsApp.

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organization had launched a formal legal complaint against Facebook and YouTube in France.

Both companies have faced widespread criticism over the footage.

The chair of the U.S. House Committee on Homeland Security wrote a letter last week to top executives of four major technology companies urging them to do a better job of removing violent political content.

(Reporting by Julie Carriat; writing by Richard Lough; editing by John Irish)

The digital drug: Internet addiction spawns U.S. treatment programs

Danny Reagan, a former residential patient of the Lindner Center of Hope, which admits only children who suffer from compulsion or obsession with their use of technology, sits in a common room at the center in Mason, Ohio, U.S., January 23, 2019. REUTERS/Maddie McGarvey

By Gabriella Borter

CINCINNATI (Reuters) – When Danny Reagan was 13, he began exhibiting signs of what doctors usually associate with drug addiction. He became agitated, secretive and withdrew from friends. He had quit baseball and Boy Scouts, and he stopped doing homework and showering.

But he was not using drugs. He was hooked on YouTube and video games, to the point where he could do nothing else. As doctors would confirm, he was addicted to his electronics.

“After I got my console, I kind of fell in love with it,” Danny, now 16 and a junior in a Cincinnati high school, said. “I liked being able to kind of shut everything out and just relax.”

Danny was different from typical plugged-in American teenagers. Psychiatrists say internet addiction, characterized by a loss of control over internet use and disregard for the consequences of it, affects up to 8 percent of Americans and is becoming more common around the world.

“We’re all mildly addicted. I think that’s obvious to see in our behavior,” said psychiatrist Kimberly Young, who has led the field of research since founding the Center for Internet Addiction in 1995. “It becomes a public health concern obviously as health is influenced by the behavior.”

Psychiatrists such as Young who have studied compulsive internet behavior for decades are now seeing more cases, prompting a wave of new treatment programs to open across the United States. Mental health centers in Florida, New Hampshire, Pennsylvania and other states are adding inpatient internet addiction treatment to their line of services.

Some skeptics view internet addiction as a false condition, contrived by teenagers who refuse to put away their smartphones, and the Reagans say they have had trouble explaining it to extended family.

Anthony Bean, a psychologist and author of a clinician’s guide to video game therapy, said that excessive gaming and internet use might indicate other mental illnesses but should not be labeled independent disorders.

“It’s kind of like pathologizing a behavior without actually understanding what’s going on,” he said.

A room at the Lindner Center of Hope’s “Reboot” program in Mason, Ohio, U.S., January 23, 2019. REUTERS/Maddie McGarvey

‘REBOOT’

At first, Danny’s parents took him to doctors and made him sign contracts pledging to limit his internet use. Nothing worked, until they discovered a pioneering residential therapy center in Mason, Ohio, about 22 miles (35 km) north of Cincinnati.

The “Reboot” program at the Lindner Center of Hope offers inpatient treatment for 11- to 17-year-olds who, like Danny, have addictions including online gaming, gambling, social media, pornography and sexting, often as an escape from symptoms of mental illnesses such as depression and anxiety.

Danny was diagnosed with attention deficit hyperactivity disorder at age 5 and an anxiety disorder at 6, and doctors said he developed an internet addiction to cope with those disorders.

“Reboot” patients spend 28 days at a suburban facility equipped with 16 bedrooms, classrooms, a gym and a dining hall. They undergo diagnostic tests, psychotherapy, and learn to moderate their internet use.

Chris Tuell, clinical director of addiction services, started the program in December after seeing several cases, including Danny’s, where young people were using the internet to “self-medicate” instead of drugs and alcohol.

The internet, while not officially recognized as an addictive substance, similarly hijacks the brain’s reward system by triggering the release of pleasure-inducing chemicals and is accessible from an early age, Tuell said.

“The brain really doesn’t care what it is, whether I pour it down my throat or put it in my nose or see it with my eyes or do it with my hands,” Tuell said. “A lot of the same neurochemicals in the brain are occurring.”

Even so, recovering from internet addiction is different from other addictions because it is not about “getting sober,” Tuell said. The internet has become inevitable and essential in schools, at home and in the workplace.

“It’s always there,” Danny said, pulling out his smartphone. “I feel it in my pocket. But I’m better at ignoring it.”

IS IT A REAL DISORDER?

Medical experts have begun taking internet addiction more seriously.

Neither the World Health Organization (WHO) nor the American Psychiatric Association recognizes internet addiction as a disorder. Last year, however, the WHO recognized the more specific Gaming Disorder following years of research in China, South Korea and Taiwan, where doctors have called it a public health crisis.

Some online games and console manufacturers have advised gamers against playing to excess. YouTube has created a time monitoring tool to nudge viewers to take breaks from their screens as part of its parent company Google’s “digital wellbeing” initiative.

WHO spokesman Tarik Jasarevic said internet addiction is the subject of “intensive research” and consideration for future classification. The American Psychiatric Association has labeled gaming disorder a “condition for further study.”

“Whether it’s classified or not, people are presenting with these problems,” Tuell said.

Tuell recalled one person whose addiction was so severe that the patient would defecate on himself rather than leave his electronics to use the bathroom.

Research on internet addiction may soon produce empirical results to meet medical classification standards, Tuell said, as psychologists have found evidence of a brain adaptation in teens who compulsively play games and use the internet.

“It’s not a choice, it’s an actual disorder and a disease,” said Danny. “People who joke about it not being serious enough to be super official, it hurts me personally.”

(Reporting by Gabriella Borter; editing by Grant McCool)

Exclusive: Iran-based political influence operation – bigger, persistent, global

FILE PHOTO: Silhouettes of mobile users are seen next to a screen projection of Instagram logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration

By Jack Stubbs and Christopher Bing

LONDON/WASHINGTON (Reuters) – An apparent Iranian influence operation targeting internet users worldwide is significantly bigger than previously identified, Reuters has found, encompassing a sprawling network of anonymous websites and social media accounts in 11 different languages.

Facebook and other companies said last week that multiple social media accounts and websites were part of an Iranian project to covertly influence public opinion in other countries. A Reuters analysis has identified 10 more sites and dozens of social media accounts across Facebook, Instagram, Twitter and YouTube.

U.S.-based cybersecurity firm FireEye Inc and Israeli firm ClearSky reviewed Reuters’ findings and said technical indicators showed the web of newly-identified sites and social media accounts – called the International Union of Virtual Media, or IUVM – was a piece of the same campaign, parts of which were taken down last week by Facebook Inc, Twitter Inc and Alphabet Inc.

IUVM pushes content from Iranian state media and other outlets aligned with the government in Tehran across the internet, often obscuring the original source of the information, such as Iran’s PressTV, the FARS news agency and al-Manar TV, which is run by the Iran-backed Shi’ite Muslim group Hezbollah.

PressTV, FARS, al-Manar TV and representatives for the Iranian government did not respond to requests for comment. The Iranian mission to the United Nations last week dismissed accusations of an Iranian influence campaign as “ridiculous.”

The extended network of disinformation highlights how multiple state-affiliated groups are exploiting social media to manipulate users and further their geopolitical agendas, and how difficult it is for tech companies to guard against political interference on their platforms.

In July, a U.S. grand jury indicted 12 Russians whom prosecutors said were intelligence officers, on charges of hacking political groups in the 2016 U.S. presidential election. U.S. officials have said Russia, which has denied the allegations, could also attempt to disrupt congressional elections in November.

Ben Nimmo, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab who has previously analyzed disinformation campaigns for Facebook, said the IUVM network displayed the extent and scale of the Iranian operation.

“It’s a large-scale amplifier for Iranian state messaging,” Nimmo said. “This shows how easy it is to run an influence operation online, even when the level of skill is low. The Iranian operation relied on quantity, not quality, but it stayed undetected for years.”

FURTHER INVESTIGATIONS

Facebook spokesman Jay Nancarrow said the company is still investigating accounts and pages linked to Iran and had taken more down on Tuesday.

“This is an ongoing investigation and we will continue to find out more,” he said. “We’re also glad to see that the information we and others shared last week has prompted additional attention on this kind of inauthentic behavior.”

Twitter referred to a statement it tweeted on Monday shortly after receiving a request for comment from Reuters. The statement said the company had removed a further 486 accounts for violating its terms of use since last week, bringing the total number of suspended accounts to 770.

“Fewer than 100 of the 770 suspended accounts claimed to be located in the U.S. and many of these were sharing divisive social commentary,” Twitter said.

Google declined to comment but took down the IUVM TV YouTube account after Reuters contacted the company with questions about it. A message on the page on Tuesday said the account had been “terminated for a violation of YouTube’s Terms of Service.”

IUVM did not respond to multiple emails or social media messages requesting comment.

The organization does not conceal its aims, however. Documents on the main IUVM website said its headquarters are in Tehran and its objectives include “confronting with remarkable arrogance, western governments, and Zionism front activities.”

APP STORE AND SATIRICAL CARTOONS

IUVM uses its network of websites – including a YouTube channel, breaking news service, mobile phone app store, and a hub for satirical cartoons mocking Israel and Iran’s regional rival Saudi Arabia – to distribute content taken from Iranian state media and other outlets which support Tehran’s position on geopolitical issues.

Reuters recorded the IUVM network operating in English, French, Arabic, Farsi, Urdu, Pashto, Russian, Hindi, Azerbaijani, Turkish and Spanish.

Much of the content is then reproduced by a range of alternative media sites, including some of those identified by FireEye last week as being run by Iran while purporting to be domestic American or British news outlets.

For example, an article run in January by Liberty Front Press – one of the pseudo-U.S. news sites exposed by FireEye – reported on the battlefield gains made by the army of Iranian ally Syrian President Bashar al-Assad. That article was sourced to IUVM but actually lifted from two FARS news agency stories.

FireEye analyst Lee Foster said iuvmpress.com, one of the biggest IUVM websites, was registered in January 2015 with the same email address used to register two sites already identified as being run by Iran. ClearSky said multiple IUVM sites were hosted on the same server as another website used in the Iranian operation.

(Reporting by Jack Stubbs in LONDON, Christopher Bing in WASHINGTON; Additional reporting by Bozorgmehr Sharafedin in LONDON; Editing by Damon Darlin and Grant McCool)

YouTube attacker was vegan activist who accused tech firm of discrimination

Police officers are seen at YouTube headquarters following an active shooter situation in San Bruno, California, U.S., April 3, 2018. REUTERS/Elijah Nouvelage

By Paresh Dave

SAN BRUNO, Calif. (Reuters) – The woman identified by police as the attacker who wounded three people at YouTube’s headquarters in California was a vegan blogger who accused the video-sharing service of discriminating against her, according to her online profile.

Nasim Najafi Aghdam appears in a handout photo provided by the San Bruno Police Department, April 4, 2018. San Bruno Police Department/Handout via REUTERS

Police said 39-year-old Nasim Najafi Aghdam from San Diego was behind Tuesday’s shooting at YouTube’s offices in Silicon Valley, south of San Francisco, where the company owned by Alphabet Inc’s Google employs nearly 2,000 people.

A man was in critical condition and two women were seriously wounded in the attack, which ended when Aghdam shot and killed herself.

California media reported that Aghdam’s family had warned the authorities before the shooting that she might target YouTube. Her father, Ismail Aghdam, told The Mercury News that he had told police she might be going to YouTube’s headquarters because she “hated” the company.

Police said they were still investigating possible motives but Aghdam’s online activities show that she believed YouTube was deliberately obstructing her videos from being viewed.

“YouTube filtered my channels to keep them from getting views,” she wrote on YouTube according to a screenshot of her account. Her channel was deleted on Tuesday.

Writing in Persian on her Instagram account, Aghdam said she was born in the Iranian city of Urmiah but that she was not planning to return to Iran.

“I think I am doing a great job. I have never fallen in love and have never got married. I have no physical and psychological diseases,” she wrote.

“But I live on a planet that is full of injustice and diseases.”

Her family in Southern California recently reported her missing because she had not been answering her phone for two days, police said.

At one point early Tuesday, police in Mountain View, California, found her sleeping in her car and called her family to say everything was under control, hours before she walked onto the company grounds with a handgun and opened fire.

The United States is in the grips of a fierce national debate around tighter curbs on gun ownership after the killing of 17 people in a mass shooting at a Florida high school in February. Authorities there failed to act on two warnings about the attacker prior to the shooting, prompting a public outcry.

Aghdam ran a website called NasimeSabz.com, which translates as “Green Breeze” from Persian, on which she posted about Persian culture, veganism and long, rambling passages railing against corporations and governments.

“BE AWARE! Dictatorships exist in all countries. But with different tactics,” she wrote. “They care only for short term profits and anything to reach their goals even by fooling simple-minded people.”

Complaints of alleged censorship on YouTube are not uncommon. The video service has long faced a challenge in balancing its mission of fostering free speech with the need to maintain an appropriate and lawful environment for users.

In some cases involving videos with sensitive content, YouTube has allowed the videos to stay online but cut off the ability for their publishers to share in advertising revenue.

Criticisms from video makers that YouTube is too restrictive about which users can participate in revenue sharing swelled last year as the company imposed new restrictions.

YouTube spokeswoman Jessica Mason could not immediately be reached for comment.

(Reporting by Paresh Dave; additional reporting by Parisa Hafezi in ANKARA; Writing by Rich McKay; Editing by Raissa Kasolowsky)