In the Internet age, doing something in the privacy of your own home or dorm room doesn’t always mean it’s private. The third party doctrine permits police and campus authorities alike to access online correspondence with ease, said a panel of experts in law and civil rights Tuesday evening as part of the Leadership Legacy Lecture Series at Konover Auditorium.
This is the “big loophole” of privacy protection in the United States, said Kristin Kelly, an associate professor of political science at the University of Connecticut who moderated “Angry Tweet or True Threat?”
“Every time you give information to a third party your expectation of privacy is basically eliminated,” Kelly said. “Everything on the UConn server is available and certainly something like Community Standards would have access to your email.”
Beyond complying with ongoing investigations, many private companies are also coming forward with problematic content unprompted, explained Jayne Hitchcock, president of Working to Halt Online Abuse, or WHOA.
“Google has basically turned in people who used certain phrases or certain words,” Hitchcock said. “Remember, it’s not your email. It’s Google’s email that you’re using for free.”
David McGuire, a staff attorney for the American Civil Liberties Union of Connecticut, said that this type of surveillance only results in legal action when there is a “true threat” to a person or group of people.
Currently, the U.S. Supreme Court is considering the case of Elonis v. United States, in which it is being asked to overturn the conviction of a Pennsylvania man who targeted his ex-wife and police with threatening rap lyrics on Facebook. McGuire said this will determine whether the victim’s perception of a threat is enough to constitute a true threat under the law or if the speaker must intend to act on their words.
The ACLU represents controversial clients, from the Ku Klux Klan to homophobic high schoolers, because the organization fears a “chilling effect” on speech if controversial expression is allowed to constitute a criminal offense, he said.
“We do this because we protect everyone’s free speech rights. The idea is that the first target of government suppression is not going to be the last,” McGuire said. “We have to be careful not to let go of our really strong free speech protections because once they’re gone, they’re gone.”
One international approach to preventing widespread prosecution of individuals for online speech is intermediary liability: holding social media companies and other websites responsible for monitoring user content. More often than not, though, private companies choose to eliminate comment sections altogether, preventing meaningful discussion of free speech in the online world, said Molly Land, a professor of law at UConn.
“I’m not sure they’re well equipped to do it; they’re making policy decisions that I think should be left in the hands of government,” Land said. “There’s no way to challenge their decisions.”
While the panel agreed that free speech issues should be decided on a case-by-case basis, Land emphasized that the permanent accessibility of speech on the Internet may force the law to take a new approach.
“Hateful speech on the Internet in some ways has greater consequences than offline speech,” she said. “I’m not suggesting that we should have different rules for the Internet, but that by applying the same rule, looking at things proportionally, some of the unique features of the Internet might actually lead to more regulation.”
For Hitchcock, who connects victims of online abuse to computer forensic specialists to help identify their perpetrators through WHOA, it comes down to people taking their language online as seriously as what they say in person.
“It’s personal accountability. When they get caught, then they regret it. They don’t realize it’s perceived anonymity,” Hitchcock said. “You really need to think before you hit that send button.”