
By Michelle Chabon

On Oct. 13, 2015, Micah Lakin Avni was in an important business meeting in Tel Aviv when his mother called his cellphone.

Avni’s mother relayed how terrorists had committed an attack in Armon Hanatziv, her Jerusalem neighborhood, and that she hadn’t yet heard from Avni’s father, Richard Lakin. (Avni, the son’s surname, is the Hebrew version of Lakin.)

Avni rushed to Jerusalem, calling area hospitals along the way. Finally, a nurse at Hadassah Medical Center told him his 76-year-old father was critically wounded and in surgery. Two weeks later, Lakin succumbed to his injuries.

Lakin, a former elementary school principal in Connecticut who moved to Israel in 1984, had been repeatedly shot and stabbed on a public bus by two Hamas-affiliated men from adjoining Arab neighborhoods in Jerusalem. It was one of the first of dozens of terror attacks perpetrated by Palestinians as young as 13 from the eastern part of Jerusalem and the West Bank, beginning in September 2015 and continuing well into 2016.

Many of these attacks were allegedly fueled by lies—spread on social media and in the mosques—that Israel was planning to deny Muslims access to the Al Aqsa mosque on the Temple Mount. The allegation that social media companies aren’t doing nearly enough to stop the spread of cyberterrorism and anti-Semitism—and may in fact be abetting them—has spurred Avni and others, including victims of Islamic terror attacks in Paris and Orlando, to file lawsuits against Facebook. They hope that the threat of potentially huge financial payouts will pressure Facebook and other companies to block hate messages and content.

While watching his father’s condition deteriorate, Avni said, “I sat there thinking, ‘How did this happen? What makes two 20-year-old Palestinians from middle-class families do something so horrific? What’s causing the pace and growth of terrorism so quickly around the world and in Israel?’”

During one of his marathon internet searches on various social media platforms, Avni came across a “horrific” reenactment of the attack in which his father was murdered. “That video went completely viral, and its purpose was to encourage others to carry out similar attacks,” he said. Determined to act, Avni contacted Shurat HaDin, an Israeli law center that represents terror victims and their families. Since 2000, the center has collected more than $200 million of the $2 billion various courts have awarded its clients.

Avni became one of the 20,000 petitioners who sued Facebook in a landmark Oct. 26, 2015, lawsuit filed by Shurat HaDin. That suit, known as Cohen v. Facebook, sought an injunction against the company that would require it to monitor and prevent terrorist incitement against Jews and Israelis.

As the wave of terror intensified, reaction to false rumors about access to the Al Aqsa mosque increased. Shurat HaDin sensed it would have an even stronger case against Facebook if American citizens sued the company. In July 2016, it filed a $1 billion lawsuit, Force v. Facebook, on behalf of Taylor Force, an American Christian murdered by a Palestinian terrorist in Israel, and on behalf of Lakin and four other families of terror victims.


The picture, titled “stabbing,” is posted on the Al-Quds (the Arabic name for Jerusalem) Facebook page.

The suit, which the court has joined to Cohen v. Facebook, alleges that Facebook has violated the U.S. Anti-Terrorism Act by “knowingly” providing material support and resources to Hamas. This support has boosted the terror group’s ability to “recruit, radicalize, instruct terrorists, raise funds, create fear and carry out attacks,” the suit alleges.

Facebook has denied the allegations and sought dismissal of the lawsuits. As this issue went to press, a hearing had been scheduled for March 1.

Facebook did not respond to repeated inquiries from B’nai B’rith Magazine related to Shurat HaDin’s two lawsuits and this article. However, in January, the company took down more than 100 pages linked to Hamas, the governing authority in the Gaza Strip that the United States government has termed a terrorist organization.

The Anti-Terrorism Act has made it possible for U.S. citizens who were victims of terror attacks, or their bereaved families, to sue governments like Libya and Iran that fund, arm and give refuge to terror groups. Four of the five victims in this instance were dual American-Israeli citizens.

But anti-terrorism suits aimed at social media are new, and it remains to be seen whether courts will hold Facebook, Twitter, Google, YouTube or Instagram responsible for content they disseminate but do not generate. “Facebook has zero tolerance for terrorism,” its attorney said in court filings.

In what may have been an important precedent, in August 2016, U.S. District Judge William Orrick dismissed a suit filed against Twitter by families of contractors murdered in an ISIS terror attack in Jordan.

The judge said the company could not be held responsible for aiding terrorism under Section 230 of the Communications Decency Act, which states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” He also cited lack of evidence that the attackers were radicalized by images they saw on Twitter.

Internet providers and social media companies insist the act absolves them, the “messengers,” of any responsibility for the content they disseminate. 


“It happened on a day like today, 1 Oct 2015, Al-Quds Intifada,” referring to the “Stabbing Intifada” in 2015—posted on the Women for Palestine Facebook page.

Digital Hate Happens
But cyberterrorism is just one example of the many types of hate spread via social media platforms against Jews and others.
 
“The level of online anti-Semitism over the past few years has been more than we’ve ever seen before,” said Oren Segal, who directs the Anti-Defamation League (ADL) Center on Extremism. “Extremists are specifically targeting various communities, including the Jewish community and Jewish journalists.”

An October 2016 report by the ADL’s Task Force on Harassment and Journalism detected a “disturbing upswing” in online anti-Semitic abuse driven in large part by “rhetoric in the 2016 presidential campaign.”

Between August 2015 and July 2016, the watchdog identified 2.6 million anti-Semitic tweets overall, more than 19,000 of them directed at Jewish journalists.

Sixty-eight percent of these tweets were sent from just 1,600 of Twitter’s 313 million accounts. Those 2.6 million anti-Semitic tweets drew 10 billion views and thus “contributed to reinforcing and normalizing anti-Semitic language—particularly racial slurs and anti-Israel statements—on a massive scale,” according to the task force.

Gabriel Weimann, a Haifa University expert in cyberterrorism, believes it is important to distinguish between cyberterror and other forms of cyber-hate.

While cyber-shaming and cyberbullying can have extreme consequences, including suicide, he said, “very often the intent isn’t to cause physical harm.” The aim of cyberterrorism, in contrast, is always violence.

Weimann said young Palestinians who participated in the most recent wave of attacks tended to be “very active” on social media platforms and became “very radicalized” by what they saw. The videos showed who should be targeted with a knife: Israeli police, soldiers, settlers and other identifiably Jewish targets. Viewers were also instructed on the best time of day to kill and which body part is most vulnerable to attack. “There were even videos showing what kind of knife or machete to use,” Weimann said.


Nitsana Darshan-Leitner is the founder of Shurat HaDin, an Israeli law center that represents terror victims and their families, and the driving force behind the Facebook lawsuits.

Nitsana Darshan-Leitner, Shurat HaDin’s founder and the driving force behind the Facebook suits, says Facebook and other social media platforms are a terror cell’s favorite tool.

For the past few years, she alleged, “Facebook has connected those who incite to kill Jews with those who want to do so.” Terror groups, she said, “are using it to raise funds, to connect and to reach out to potential members. Facebook is letting them freely, openly, knowingly use its platform to aid and abet terrorism.”

The fact that users, not the social media companies, are funding terrorists or inciting violence “does not eliminate their responsibility,” Darshan-Leitner said.

Asked whether her plaintiffs would drop their $1 billion suit if Facebook agreed to take steps to police itself, she said, “No. Facebook must pay damages. The only thing these megacompanies know is business. If they get hit in their pocketbook, they will reconsider their actions and change them, much like the banks did.” She was referring to successful lawsuits filed against banks that allegedly aided and abetted terror groups.

“The only thing that moved banks to make sure the money in their possession was terror free and not transfer money to terrorists were the billion-dollar lawsuits filed against them. Money is the oxygen of terrorism,” Darshan-Leitner said.


“We make the red lines…,” featured on the Al-Quds Facebook page.

A Call to Action
Daniel S. Mariaschin, executive vice president and chief executive officer of B’nai B’rith, said, “There needs to be a Manhattan Project to confront the many threats that have grown out of the internet, which has provided a new way to convey hatred, terrorism and incitement.”

Mariaschin envisions a joint effort between B’nai B’rith, which has status at both the United Nations and the Organization of American States, and others committed to the fight against cyberterror and cyberhate, including anti-Semitism.

“The challenges are great, the opportunities are there, and the next step is for us to either initiate or join existing efforts,” he said.

Richard Heideman, who served as international president of B’nai B’rith from 1998 to 2002 and is a partner in the law firm Heideman, Nudelman & Kalik, believes, “Holding supporters of terror accountable in U.S. courts is an essential tool in seeking justice.” Heideman’s firm has filed several successful lawsuits on behalf of Israeli and other terror victims.

One of those suits, which sought compensation from the Libyan government for its supportive role in the 1985 hijacking of an Egypt Air flight and the targeted killings of American and Israeli passengers, “helped bring Muammar Gaddafi and Libya to reach an agreement with the U.S. in 2008 that resulted in Libya coming off the State Department’s terror list,” Heideman said. That agreement included a $1.5 billion payment to victims of Libyan state-sponsored terrorism.

The Free Speech Dilemma
Some free speech advocates believe litigating against Facebook, Twitter and others to force them into policing themselves would ultimately lead to censorship.

“If Facebook were responsible for the legality of everything you or I or others say on Facebook, it would be tremendously expensive and a great disincentive to provide an open platform,” Daphne Keller, director of Intermediary Liability at the Stanford Center for Internet and Society, told Bloomberg News. “And it would give them every reason to take down too much speech, to take down perfectly legal speech to avoid risk to themselves.”

Yair Rosenberg, a writer for the Jewish magazine Tablet, is one of the 10 Jewish journalists most targeted by anti-Semites on Twitter, according to the ADL. Though he believes social media companies “have an obligation to try to weed out abusive behavior and harassment on their platforms,” he does not think they should be censoring non-abusive content, no matter how repugnant.

“Besides this being impractical when it comes to millions of tweets or posts, it also seems troubling to empower giant corporations to police what constitutes an acceptable opinion on the internet,” Rosenberg said. “The best answer to hateful speech online is better counter-speech from the majority of non-hateful users—a bottom-up response, rather than top-down.”

Rosenberg said those who identify or experience cyber-hate online can report abusive accounts and draw attention to them through publicity campaigns to ensure the companies take them seriously. “But again, I’d distinguish between abusive behavior on a social media platform and non-abusive but hateful content.”

The journalist is skeptical that lawsuits like Avni’s will succeed, “at least in America, given our First Amendment, and I don’t think they’re the best way to fight this sort of problem, either. Censoring bigotry doesn’t make it go away, it just makes it easier to ignore, until it has unignorable consequences. I’d rather that society face up to this material head-on,” Rosenberg said.


Taylor Force (right), standing with his family, was a 29-year-old U.S. Army veteran and an MBA student at Vanderbilt University visiting Israel to learn about Israeli business and high tech when Mohammed Massalha, a Palestinian terrorist, murdered him in Jaffa.

However, in a clear bid to preempt these and future lawsuits, and the potentially huge payouts should they lose what promise to be several court cases, Facebook, Microsoft, Twitter and YouTube announced on Dec. 5, 2016, that they were “coming together” to curb the spread of terrorist content online.

“There is no place for content that promotes terrorism on our hosted consumer services,” they said in a joint statement. “When alerted, we take swift action against this kind of content in accordance with our respective policies.”

The companies vowed to create a shared industry database of “hashes”—unique digital “fingerprints”—“for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services.”

By sharing this information with one another, they said that they hope to identify and remove “the most extreme and egregious terrorist images and videos”—content most likely to violate their respective companies’ content policies.

Following the huge backlash against Facebook over the spread of fake news stories during the presidential campaign, the company said in mid-December that it would try to identify such stories with the assistance of five fact-checking organizations and through reader feedback.

Lakin’s son Avni insists that if Facebook can create a system to flag fake news, it can identify and block terror-related content.
“Its algorithms advertise to you and they monitor everything going on. They target you based on that information. They block child pornography and they can do the same with terror. For years, they chose to ignore that Hamas was operating an entire campaign on Facebook,” Avni asserted. “And they make money in the process.”

The December 2016 lawsuit against Facebook, Twitter and Google by the victims of the terrorist attack in Orlando also accuses the providers of “profiting from postings through advertising revenue.”