Originally published by CNN
By Charlene Li
In the wake of the Pittsburgh synagogue shooting last month, and, more recently, the Thousand Oaks, California, shooting, the nation is deeply shaken and looking for someone or something to blame.
Understandably, many people are increasingly calling on social media platforms to clamp down even more on hate speech.
The alleged Pittsburgh synagogue shooter used the social media site Gab, the alt-right answer to Twitter, to spew hate toward Jews, while the alleged California shooter is believed to have posted on Facebook, derisively chiding those who offer “hopes and prayers” after mass shootings.
The blowback after the synagogue shooting was so furious that Gab briefly went offline because no web host would work with the site.
And while I understand the reaction, the solution to curbing the monsters that technology has unleashed is not censorship by social media platforms; it is smarter technology.
Why am I determined to find a way to disrupt hate speech without banning it outright? Let me start by saying that I am an Asian-American woman with birthright citizenship. I despise racist and immigrant-baiting hate speech as much as anyone.
But I also know from my family’s personal experience that the power to censor is just as dangerous when concentrated in the hands of government or a few businesses. Many minorities have fought very hard to have their voices heard. We should guard our right to free speech zealously.
Beyond the basic free speech argument, however, there are very good practical reasons for not driving hate speech underground.
When you have a platform like Gab, you can see what the most hateful among us are up to in clear and illuminating detail. Robert Bowers, the alleged Pittsburgh synagogue shooter, for example, left a verifiable trail of hate. Right before the shooting, he wrote: “HIAS (Hebrew Immigrant Aid Society) likes to bring in invaders that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.”
Now, police and prosecutors are carefully combing through this record, and it will be used to build the case against him.
Similarly, Cesar Sayoc Jr., the alleged pipe bomb mailer, had been posting hateful and provocative messages on Facebook and Twitter. These, too, will no doubt be used against him by law enforcement agencies.
Now, contrast this with what we know about Stephen Paddock, who fired more than 1,000 rounds of ammunition from his Las Vegas hotel room last year, killing 58 people and wounding hundreds more. More than a year later, police know nearly nothing about the killer’s motivation.
Suppressing those spewing hatred does not mean they go away. It means they go underground and become harder to find. A good example is The Daily Stormer, the white supremacist website that advocates for the genocide of Jews. After Charlottesville, The Daily Stormer was dropped by mainstream hosting and domain providers like GoDaddy and Google and, as a result, moved to the “dark web,” which makes it all but impossible to access except for those in the know.
Do you really think the anti-Semites and the Holocaust deniers who use the site have stopped hating and stopped planning, simply because it’s harder for them to find and connect with one another? The answer is no. There’s an old saying that “sunshine is the best disinfectant.” My strong preference is to keep hate speech right where it can be seen, rather than hidden away in the shadows.
So, what is an answer to this deluge of hate on the internet?
Imagine technology that could help predict when someone spewing hate speech seemed poised to cross over into committing a hate act. There is a trove of information on social media platforms and in news stories about people who have gone from hate speech to hate acts.
Applying artificial intelligence and machine learning to what we already know might lead to predictive models and interventions that could prevent another massacre. But there are inherent dangers to such an approach. Risk assessment algorithms like these are used today in the criminal justice system, and they are deeply flawed, displaying bias against Black defendants.
There are two things that technology platforms and law enforcement can do to improve the technology used to detect hate speech and identify potential hate crimes. The first is to apply journalism expertise to the algorithms used to identify hate speech. Putting trained journalists and editors on the teams that create the algorithms in the first place could help bake in natural brakes on hate speech and fake news.
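To make that concrete, here is a minimal sketch in Python of what a journalist-in-the-loop classifier might look like. Everything in it is an illustrative assumption, not a description of any platform’s actual system: the example posts and labels stand in for data an editorial team would curate, and the thresholds decide which calls get routed to human reviewers instead of being acted on automatically.

```python
# A minimal sketch: a text classifier whose uncertain predictions are
# routed to trained journalists for review rather than auto-flagged.
# Posts, labels, and thresholds are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical posts labeled by an editorial team (1 = hate speech, 0 = not).
posts = [
    "we should welcome refugees into our community",
    "the invaders must be driven out before they replace us",
    "great turnout at the food drive this weekend",
    "those people deserve to be wiped out",
]
labels = [0, 1, 0, 1]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

def triage(post, auto_threshold=0.9, review_threshold=0.5):
    """Decide what happens to a post: flag it, send it to editors, or leave it."""
    p = model.predict_proba(vectorizer.transform([post]))[0, 1]
    if p >= auto_threshold:
        return "flag"          # high confidence: surface to moderators
    if p >= review_threshold:
        return "human_review"  # uncertain: route to trained journalists
    return "allow"

print(triage("the invaders must be driven out"))
```

The point of the middle band is the journalism expertise described above: rather than letting the model suppress speech outright, ambiguous cases go to people trained to distinguish hate speech from news reporting, satire or protest.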
The second is for law enforcement to test and address the bias already built into risk assessment scoring algorithms, and to provide transparency into the inputs, results and impact of these programs. Clearly, the technology must be improved to eliminate that kind of bias, but I believe that smarter technology, built transparently, could prevent some hate acts down the road.
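What would testing for that bias look like in practice? Below is a minimal sketch, using made-up records, of one audit an agency could run and publish: comparing false positive rates, the share of people wrongly scored high-risk, across demographic groups. The group names, scores, outcomes and threshold are all illustrative assumptions.

```python
# A minimal sketch of a bias audit for a risk scoring algorithm:
# compare false positive rates across demographic groups.
# All records below are made-up illustrative data.
from collections import defaultdict

# Each record: (group, risk_score in [0, 1], reoffended: True/False)
records = [
    ("group_a", 0.8, False), ("group_a", 0.3, False), ("group_a", 0.9, True),
    ("group_b", 0.4, False), ("group_b", 0.2, False), ("group_b", 0.7, True),
]
THRESHOLD = 0.5  # scores at or above this count as "high risk"

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, score, reoffended in records:
    if not reoffended:               # only non-reoffenders can be false positives
        negatives[group] += 1
        if score >= THRESHOLD:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```

A wide gap between the groups’ rates is exactly the kind of concrete, publishable number that would provide the transparency called for here.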
Charlene Li is the author of several books, and works with business leaders around digital, social, and emerging technologies in her role as Principal Analyst at Altimeter, a Prophet company.