Swaying with the Algorithm: How Twitter Allows Abuse and Manipulation

How reflective of your likes and interests is your Twitter feed? And who’s behind deciding what you see in the first place? The social media platform would say “you,” but a skeptical public is no longer confident. Over the past several months, Twitter’s algorithm practices have been called into question by nearly everyone, including CNN, PBS, the Washington Post, and Twitter users themselves. There is a strong argument that social media algorithms helped incite the recent post-election violence.

Why? Because something, as they say, is rotten in the state of cyberspace. Hate speech and harassment slip through disguised as paid content and “helpful” content suggestions that regularly miss the mark. And the social media giant’s algorithms are taking the blame.

What’s an algorithm, exactly?

As defined by Wikipedia, an algorithm “is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation.” Sounds innocent enough, right? It is. It’s nothing more than an aspect of computer science.
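To make that definition concrete, here is a minimal illustration in Python (my choice of language, purely for example’s sake): a finite sequence of well-defined, computer-implementable steps that solves one small class of problems, namely finding the largest number in a list.

    def largest(numbers):
        """A tiny algorithm: a finite sequence of well-defined steps."""
        # Step 1: assume the first number is the largest so far.
        best = numbers[0]
        # Step 2: compare every remaining number against the current best.
        for n in numbers[1:]:
            if n > best:
                best = n
        # Step 3: after finitely many comparisons, return the answer.
        return best

    print(largest([3, 41, 7, 12]))  # prints 41

Every feed-ranking system is, at bottom, built from steps like these; the controversy lies in what the steps are told to optimize.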

However, Yale data scientist Elisa Celis (who studies fairness and diversity in artificial intelligence) explains that companies like YouTube, Facebook, Twitter, and others refuse to reveal exactly what is in their respective algorithms’ code. Most, she says, seem to “revolve around one central tenet: maximizing user engagement—and, ultimately, revenue.”

So, are Twitter’s algorithms nothing more than a money-making tool? On the surface, yes. They learn a user’s behaviors while that user engages with content on the platform: the articles shared, the search terms used, and so on. The idea is to take that data and translate it into relevant products and services.
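To see what that could look like in principle, here is a deliberately hypothetical sketch of an engagement-driven feed ranker in Python. Since Twitter does not reveal its actual code, every signal, weight, and function name below (engagement_score, rank_feed) is an invented stand-in, not the platform’s real logic:

    # Hypothetical sketch of an engagement-driven feed ranker.
    # Twitter does not publish its actual ranking code; the signals
    # and weights below are invented purely for illustration.

    def engagement_score(tweet, user_history):
        score = 0.0
        # Boost topics the user has clicked, shared, or searched before.
        for topic in tweet["topics"]:
            score += 2.0 * user_history.get(topic, 0)
        # Reward raw engagement: likes and retweets.
        score += 0.5 * tweet["likes"] + 1.0 * tweet["retweets"]
        return score

    def rank_feed(tweets, user_history):
        # Nothing here asks whether content is true, civil, or safe;
        # the only question is whether it engages.
        return sorted(tweets, key=lambda t: engagement_score(t, user_history),
                      reverse=True)

    user_history = {"boating": 12, "recipes": 5}  # counts of past interactions
    tweets = [
        {"topics": ["boating"], "likes": 10, "retweets": 2},
        {"topics": ["politics"], "likes": 900, "retweets": 400},
    ]
    print(rank_feed(tweets, user_history)[0])  # the high-engagement tweet wins

Even in this toy version, the high-engagement tweet on a topic the user has never touched outranks the tweet squarely in the user’s interests, which previews the problem the following sections describe.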

“These things aren’t malicious, and they’re not out of control,” states Celis in PBS “Nova” reporter Katherine J. Wu’s article, “Radical ideas spread through social media. Are the algorithms to blame?” “But it’s also important to acknowledge that these algorithms are small pieces of machinery that affect billions of people.” As Wu puts it, at what point does personalization cross the line to polarizing? The algorithms can’t tell the difference between boating and bigotry, and they aren’t trying to.

Who is to blame? 

Like any tool, however, Twitter’s algorithms can be used for benevolent, benign, or malicious purposes. The question is, how influenced are we by them, and more importantly, who is behind the influence? “If the global reach of social media were being used merely to spread messages of peace and harmony—or just to make money—maybe there wouldn’t be any [harm]. But the purposes are often darker,” writes Bloomberg reporter Shelly Banjo.

According to the tech companies that implement them, these programs exist only to help and serve you, the user. In essence, they are saying, “Yes, turning a profit is the ultimate goal, but not before bringing you relevant, customized stories, news, and products based on your likes and dislikes. You’re the one in control, not us. And if you act out based on content fed to you, then that’s your fault, not ours. It’s your interests and online behavior that caused it to appear in the first place.”

Do (but don’t) be influenced by media

It’s the same illogical mentality behind product placement in television and movies: Don’t be influenced by the sex and violence on the screen, just the BMW and Coke that happen to be there. If content leads a person to act out in any way other than shopping, especially a negative one, that’s on them. Wu notes, “It would be an oversimplification to point to any single video, article, or blog and say it caused a real-world hate crime. But social media, news sites, and online forums have given an indisputably powerful platform to ideas that can drive extreme violence.”

Maybe all you do is look at hilarious cat videos and share links to your favorite recipes. Think your feed is safe? Think again. In “Facebook, Twitter and the Digital Disinformation Mess,” Banjo also highlights how “social media manipulation campaigns” have been utilized by governments and political parties in 70 countries, including China, Russia, India, Brazil, and Sri Lanka. Circumventing and outsmarting social media firewalls and algorithms, state-sponsored smear campaigns in these countries utilize artificial intelligence and internet bots to flood targeted news feeds with extremist messages and videos. The technology to do this exists, and it’s happening now. 

Yet, not all algorithms exist to sway your purchasing decisions or serve tech-giant masters. One promising solution was presented by Binghamton University late last year. Computer scientist Jeremy Blackburn, along with a team of researchers and faculty, “have developed machine-learning algorithms which can successfully identify bullies and aggressors on Twitter with 90 percent accuracy.” The technology is not perfect, but it exists too, and it offers a bright ray of hope.
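The Binghamton team’s methods aren’t spelled out here, so the following Python sketch shows only the generic shape such a classifier often takes (a TF-IDF bag of words feeding a linear model via scikit-learn). The toy tweets and labels are invented, and the 90 percent accuracy belongs to the researchers’ real system, not to this sketch:

    # Generic sketch of a harassment classifier, in the spirit of
    # (but not identical to) the Binghamton work. The pipeline and
    # toy training data below are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    tweets = [
        "you are worthless, delete your account",  # abusive (label 1)
        "loved your recipe thread, thanks!",       # benign  (label 0)
        "nobody wants you here, go away",          # abusive (label 1)
        "great game last night, what a finish",    # benign  (label 0)
    ]
    labels = [1, 0, 1, 0]

    # Turn raw text into word-frequency features, then fit a linear model.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(tweets, labels)

    print(model.predict(["go away, nobody wants you"]))  # likely [1]

A production system would train on many thousands of labeled tweets and far richer features, but the basic loop (label examples, extract features, fit, predict) is the same.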

Abuse on Twitter a regular occurrence

This concern over the unchecked power of Twitter and its peers and their algorithms crosses party lines and media bias, affecting celebrities and everyday citizens alike. (Even actor Sacha Baron Cohen uses the Trump-popularized phrase “fake news,” stating in his op-ed piece for the Washington Post that online, “everything can appear equally legitimate.”) He isn’t alone in his criticisms. Fed up with the onslaught of abuse and hate speech, fellow celebrities including Ed Sheeran, Millie Bobby Brown, and Wil Wheaton have limited their presence on Twitter—and have been quite vocal about doing so.

While one might argue that living in the public eye comes with consequences, those not in the limelight are equally disgruntled with the platform’s refusal to address rampant harassment. Every day, average users ask why nothing on the platform works to combat abuse. Many are especially critical of Twitter CEO Jack Dorsey, calling him out for continually refusing to address cyberbullying concerns. In The Atlantic article “Twitter’s New Features Aren’t What Users Asked For,” author Taylor Lorenz shares one frustrated user’s tweet: “The annoying thing is that every few months Jack comes out with a big speech about how they’re going to fix twitter, and ever[y] time they just continue to get it wrong.”

And what of the onslaught of abuse and harassment suffered by private citizens who find themselves thrust into the spotlight as a result of sloppy reporting? Or the peer-to-peer cyberbullying occurring across the personal devices of children and teenagers every day? What fills the Twitter feeds of their tormentors? As Wu states, “[Algorithms] don’t have a conscience that tells them when they’ve gone too far. Their top priority is that of their parent company: to showcase the most engaging content—even if that content happens to be disturbing, wrathful, or factually incorrect.” Are abusers fed more and more volatile articles and videos, which in turn fan the flames of the hate and anger they unleash on others?

Twitter slow to respond to user demands

Although Twitter states that combating abuse is a “work in progress,” the company chooses instead to implement useless updates and changes that, in some instances, only make it easier to engage in harassment. Lorenz adds, “While the company continues to dedicate time and resources to making minor changes aimed at boosting engagement, easy fixes for harassment are ignored.” Most recently, Twitter purged an untold number of QAnon conspiracy theorists, but this one-time housecleaning will not address how its algorithms amplify speech on the platform.

Lorenz reports that in 2016, Online Abuse Prevention Initiative founder Randi Lee Harper laid out several improvement options in a Medium post. Although Twitter eventually acted on most of them, several suggestions aimed at minimizing harassment were ignored. Instead, the “updates” the social media platform chose to roll out were mostly cosmetic:

  • changing its user avatars from square-shaped to circular
  • redesigning Moments
  • adding topic tags to the Explore page
  • spamming users’ timelines with a “happening now” section
  • adding endless notifications
  • upping the character limit to 280
  • promoting live videos of sports events
  • revamping its algorithm to give older tweets more prominence

Taking Twitter to task

Close on that last one, Twitter, but you miss the mark again. An algorithm revamp, but of a different sort, is what the public is demanding. New to the media scene (compared with television, movies, and radio), social media wields persuasive power that has remained largely unchecked, and the law is desperately trying to catch up.

In his op-ed piece, Baron Cohen brings to light a chilling fact: the large technology companies behind these platforms are, for the most part, beholden to no one—not even the law: 

“These super-rich ‘Silicon Six’ care more about boosting their share price than about protecting democracy. This is ideological imperialism—six unelected individuals in Silicon Valley imposing their vision on the rest of the world, unaccountable to any government and acting like they’re above the reach of the law. Surely, instead of letting the Silicon Six decide the fate of the world over, our democratically elected representatives should have at least some say.”

The “Silicon Six” Baron Cohen refers to are American billionaires and tech giant CEOs and/or founders Mark Zuckerberg (Facebook), Sundar Pichai (Google), Larry Page (Google), Sergey Brin (Google), Susan Wojcicki (YouTube), and Jack Dorsey (Twitter). Similarly, Wu notes that one of the biggest reasons to be wary of social media companies’ algorithms is that, “[only] a limited subset of people are privy to what’s actually in them.” 

In his article for The Verge, reporter Casey Newton writes that while Baron Cohen’s efforts to amend Section 230 of the Communications Decency Act (the driving force behind his speech and opinion piece) are somewhat misguided, he raises some valuable points. Newton agrees with him not only about the dangers of algorithmic recommendations on social platforms but also that the aforementioned “Silicon Six” have been permitted so much influence “thanks to a combination of ignorance and inattention from our elected officials.”

Data journalist Meredith Broussard, communications expert Safiya Noble, and computer scientist Nisheeth Vishnoi (all interviewed for Wu’s article for “Nova”) feel social media algorithms should be tested and vetted as rigorously as drugs are before they hit the market.

Noble further states, “We expect that companies shouldn’t be allowed to pollute the air and water in ways that might hurt us. We should also expect a high-quality media environment not polluted with disinformation, lies, propaganda. We need for democracy to work. Those are fair things for people to expect and require policymakers to start talking about.” These companies can’t police themselves, nor should they be expected to. If social media companies do not change their ways, then our elected officials in Washington should change the rules for them.

Todd McMurtry is a nationally recognized attorney whose practice focuses on defamation, social media law, cyberbullying, and professional malpractice. You can follow him on Twitter @ToddMcMurtry.