
In October, game developer Brianna Wu was forced out of her house in Arlington and went into hiding after strangers from the GamerGate online mob directed personal threats at Wu and her family on Twitter. Wu and the scores of other women who face hateful personal attacks on the Internet may one day have more options to deter their attackers.

Today the justices of the Supreme Court heard arguments about online abuse perpetrated on sites like Twitter and Facebook, the first case of its kind to reach the nation’s highest court. 

At the center of the case is Pennsylvania resident Anthony Elonis. After his wife left him in 2010, he wrote bone-chillingly graphic Facebook posts about killing her, and then, when an FBI agent visited his house to investigate, he posted comments online about killing the agent, too.

Elonis was convicted by a jury and served jail time. His lawyer argued today that Elonis never intended to kill his wife and that his right to write about it “artistically” on Facebook is protected by the First Amendment. To convict, the attorney argued, a court must have proof that Elonis meant to act on his words.

That’s not how Elonis’s wife saw his actions: his posts made her feel afraid, and she said as much during his trial.

The ability to cause fear ought to be the litmus test of a “true threat” online, the Department of Justice attorney explained today. He argued that online threats should be judged by how a “reasonable person,” an independent third party, would respond to the words, given the context in which they were written.

Two groups that protect victims of domestic abuse were among those who submitted “friend of the court” briefs aligning themselves with the DoJ attorney’s case.

Paulette Moore from the National Network To End Domestic Violence wrote that the group has “seen the victims they represent suffer the devastating psychological and economic effects of threats of violence, which their abusers deliver more and more often via social media.”

Such threats, Moore adds, “have very real and very damaging consequences on the victims’ daily lives,” not only because of the fear they cause but also because threats are often indicators of actions to follow.

At last count, in 2012, 90 percent of the threats that the NNEDV responded to over the course of a year involved technology, and a third of those came via social media, including Facebook.

So should social media companies block such comments, or continue to host racist and misogynistic posts on their networks? Companies like Twitter, Facebook, and Reddit confront this dilemma daily, and historically (and not surprisingly) they have favored the free speech argument for keeping posts up.

But gradually, they have begun to take action. Recently, Twitter partnered with Women, Action and the Media, a Cambridge-based group that is independently studying ways in which women face intimidation and threats of various kinds online, with a view to shielding them against such abuse in the future.

If the court comes down on the government’s side, it could have far-reaching implications for what people can say, both online and off, says Sarah Jeong, a writer, recent graduate of Harvard Law School, and former editor-in-chief of the Harvard Journal of Law & Gender.

“You’d have to be more careful at a political rally about what you say, you’ll have to be careful about what you say in a classroom, in public on the Internet as well, but really in every medium,” she said.

But she believes the court will ultimately side with Elonis. She says that, in fact, some regulations already on the books could protect those being threatened online; the larger issue is that law enforcement has done little to actively respond to these threats.

Photo via Flickr user Tacomabibelot
