The bias in our software
Your software is not neutral. Whether you’re Facebook, Twitter, Airbnb, or any other web-based service, you are not an unbiased third party to the users of your software. It’s time to stop pretending you are.
Recently, after Facebook was confronted with charges of suppressing conservative-leaning news, there was a strong push from the company to convince the public at large that it remained neutral and had never presented, and would never present, news with bias. While it might be true that there was no overt wrongdoing, let’s be clear about this: Facebook does influence what you see and what you don’t.
Maybe it’s the algorithm, showing you what the computers think you’re most likely interested in, content that will keep you on the site longer to view more advertisements. Maybe it’s human intervention, faceless editors deciding what is and isn’t newsworthy to promote to the 1.5 billion people using the service. Or maybe it’s your own doing: deciding who you’re friends with, which friends are hidden from your News Feed, and the other actions you’ve taken while logged in that have helped the network build a profile of your interests. Whatever the case, there is nothing neutral about the way Facebook presents information to its users.
Who are the editors, the decision makers at Facebook? Do they represent the diversity of the United States? Of the world? Could a lack of diversity in its ranks explain why it took years for images of breastfeeding to be allowed on Facebook (and Facebook-owned Instagram), while the Confederate flag has always been seen as free expression? How many white men were in the room where that decision was made? How many women of color?
No, Facebook is not neutral at all. But it’s not alone in believing it provides the public with a service on par with a utility. Bloomberg’s Sarah Frier writes, “Twitter Inc.’s website has seen major wars of words in the U.S. presidential race, giving rise to passionate voices on all sides — including those who are racist — but that’s what the service is for,” summarizing the argument of Omid Kordestani, Twitter’s executive chairman. It wouldn’t be a stretch to speculate that this is why Twitter stood quietly on the sidelines for so long rather than addressing its abuse problem. All these years, the people in its boardroom saw Twitter as a neutral platform enabling free speech.
What Twitter has yet to realize is that an important part of protecting free speech is protecting the most vulnerable. In Anil Dash’s words: “Allowing abuse hurts free speech. Communities that allow abusers to dominate conversation don’t just silence marginalized people, they also drive away any reasonable or thoughtful person who’s put off by that hostile environment. Common sense tells us that more people will feel free to express themselves in an environment where threats, abuse, harassment, or attacks aren’t dominating the conversation.”
Twitter believes it’s neutral, but like almost all software, its code makes decisions for us. Because I follow Anil Dash, Twitter’s Connect feature suggested similar users in the technology space for me to follow. It recently recommended more than half a dozen white men: not one woman, not one person of color. To argue that Twitter doesn’t have a hand in whose voices are heard would be a lie. Twitter would do right by its users and shareholders to understand and respect that responsibility.
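To see how this can happen with no editor in the loop, here is a minimal sketch of similarity-based follow suggestions. It is purely illustrative, assuming a simple Jaccard-overlap recommender; the account names and follower sets are invented, and none of this is Twitter’s actual Connect code.

```python
# Toy "who to follow": suggest the accounts whose follower sets overlap
# most with an account you already follow. Illustrative sketch only.

def jaccard(a: set, b: set) -> float:
    """Overlap between two follower sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical data: which users (by id) follow each account.
followers = {
    "account_you_follow": {1, 2, 3, 4, 5, 6},
    "similar_account_a":  {1, 2, 3, 4, 5},  # nearly the same audience
    "similar_account_b":  {2, 3, 4, 5, 6},  # nearly the same audience
    "different_voice":    {7, 8, 9},        # a different audience entirely
}

def suggest(seed: str, k: int = 2) -> list[str]:
    """Rank every other account by audience overlap with the seed."""
    scores = {name: jaccard(followers[seed], flw)
              for name, flw in followers.items() if name != seed}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(suggest("account_you_follow"))
# -> ['similar_account_a', 'similar_account_b']
# "different_voice" never surfaces: similarity search can only echo
# the demographics already dominant in the seed account's audience.
```

If the corner of the network you already follow skews toward white men, “similar users” will too. The homogeneity falls out of the math, but shipping that math was still a human choice.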
And then there’s Twitter’s algorithmic timeline, similar to Facebook’s News Feed, which shows you the “best” tweets first. No definition of best is provided, beyond what one could assume: the tweets most likely to garner your likes and retweets. That is, by its very nature, not a neutral feature.
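One plausible definition of “best”, and the one most ranking systems use, is predicted engagement. The sketch below is invented for illustration (the fields, probabilities, and weights are assumptions, not Twitter’s actual model), but the principle holds for any learned ranker: “best” means “highest score”, and humans chose what the score rewards.

```python
# Toy engagement ranker for an algorithmic timeline. Fields and weights
# are invented; real rankers learn their weights, but "best tweets
# first" is still just a sort on a score someone designed.

from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    p_like: float     # predicted probability you'll like it
    p_retweet: float  # predicted probability you'll retweet it
    p_reply: float    # predicted probability you'll reply

def score(t: Tweet) -> float:
    # These weights are a human decision, not a law of nature.
    # Tuning them changes whose tweets rise to the top.
    return 1.0 * t.p_like + 2.0 * t.p_retweet + 3.0 * t.p_reply

timeline = [
    Tweet("major_outlet", p_like=0.30, p_retweet=0.10, p_reply=0.02),
    Tweet("your_friend",  p_like=0.05, p_retweet=0.01, p_reply=0.20),
    Tweet("quiet_expert", p_like=0.02, p_retweet=0.01, p_reply=0.01),
]

# "Best tweets first" in four lines: sort by the designed score.
for t in sorted(timeline, key=score, reverse=True):
    print(f"{score(t):.2f}  {t.author}")
```

Swap the weights and a different set of voices rises to the top; there is no weighting that is neutral.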
It’s hardly surprising that networks are trending this way. Log in and you’re inundated with information from every angle. To solve that problem, particularly for new users, but also for advertisers who want to be sure our attention is rapt, companies like Twitter and Instagram are moving away from the time-ordered feed. Many worry that this means missing an update from a friend, but I sense a bigger issue: Do algorithms have the same biases that humans do? Do the rich get richer while those from underrepresented backgrounds get pushed further into the margins? Have we really considered how those features, along with users with infamously quick block fingers, silence people with less privilege than Silicon Valley elites?
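The rich-get-richer question can be sketched directly. Suppose, hypothetically, that a ranked feed gives most of its exposure to whoever is already on top, and that a fixed fraction of exposure converts into new followers. Every number below is invented; only the feedback dynamic matters.

```python
# Toy feedback loop: rank decides exposure, exposure decides followers,
# followers decide rank. All values are invented for illustration.

followers = {"insider": 105, "outsider": 100}
EXPOSURE_BY_RANK = [1000, 200]  # steep position bias after the top slot
CONVERSION = 0.01               # fraction of exposure that converts

for _ in range(20):
    ranked = sorted(followers, key=followers.get, reverse=True)
    for rank, account in enumerate(ranked):
        followers[account] += int(CONVERSION * EXPOSURE_BY_RANK[rank])

print(followers)  # -> {'insider': 305, 'outsider': 140}
```

A five-follower head start becomes a 165-follower gap in twenty rounds, with no editor anywhere in the loop: rank decided exposure, and exposure decided rank.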