This weekend, I was playing a game of Quiplash (a local multiplayer game in the spirit of Apples to Apples) with a group of friends when I made an amusing observation. I said to the friend seated next to me, “It’s as if an algorithm generated all of our responses. I can’t tell one person’s jokes from another’s.”
“Huh,” she nodded, looking at the screen.
It may be natural for a group of friends to share a brand of humor to the point where, if someone fed all of our quips to an algorithm, it would produce an appropriately styled joke. What is less natural is that real computer algorithms now shape our media consumption based on our perceived interests and the interests of an increasingly privatized internet. Instagram, Facebook, and YouTube have all built their own algorithms that serve personalized content and advertisements according to each user’s consumption history.
With the seemingly random rise in popularity of the “Johny Johny” videos and other bizarre, apparently meaningless child-oriented videos, the internet has become increasingly fascinated with, and perplexed by, what exactly drives these algorithms to suggest content. The New York Times published a concerning article about disturbing children’s videos that slip past YouTube’s content filters and into the algorithm-driven autoplay queue on young children’s devices. These videos feature violence, sexual assault, and other traumatic content.
Now, I remember discovering YouTube in middle school and watching poorly drawn MS Paint animations with titles like “SpongeBob Shoots Patrick with a Machine Gun.” But today’s graphic content involving beloved children’s characters is far more skillfully produced, and far more accessible to the much younger children now using YouTube and other streaming services.
Traumatic content marketed to vulnerable children is one of the more appalling examples of algorithmic harm; in other cases, the negative effects are more elusive to pinpoint.
Another piece, in the Huffington Post, calls on Google to take accountability for the content its search algorithms surface. Suggestions for Google searches come from algorithms designed to be relevant to the search query, but author Frank Pasquale argues that such algorithms seem to have an agenda of their own. For example, if you search for phrases about the Holocaust, some of the top results Google returns come from anti-Semitic sources that deny the Holocaust ever happened. Other sources claim that searching for “hands” on Google Images produces primarily photos of white hands, and that searches for “criminal” produce primarily photos of black people.
Algorithms are largely responsible for the advertisements and “sponsored” content that appear on our Instagram and Facebook feeds. Sometimes the accuracy of targeted ads can be alarming. A few years ago, Forbes published an article about advertisements that could tell a consumer was pregnant before she had told her close friends. Advertisers analyze consumers’ content preferences and market related products to them; even the most subtle choices can reveal details of their personal lives, including their sexual orientation. A friend of mine reports that Instagram ads knew about his sexuality before he even came out to his family and friends. One doesn’t need to view anything as explicit as gay porn to be recommended LGBTQ+ content, because the algorithms social media platforms use pick up on subtle trends in behavior to make inferences about a user’s personal life and what that user is likely to buy.
I know my devices probably know me better than I know myself. My browsing history, my consumer choices, and the posts I engage with all feed the algorithms driving the feedback loop that serves me new content. If I were to browse a friend’s Instagram account, I know the experience would be very different from the one I’m used to. Observe the suggested content that algorithms produce for you. Then delete the cookies in your browser, view generic content such as current events, sports, or history, and notice how the suggestions change. Think critically about the biases presented to you.
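For the curious, here is a toy sketch, written in Python with invented item names and topic tags, of the kind of feedback loop I’m describing: a recommender that simply ranks content by how much it overlaps with whatever the user has already engaged with. It is a simplification for illustration only, not Instagram’s, Facebook’s, or YouTube’s actual system.

```python
# Toy illustration of an engagement-driven feedback loop.
# NOT any real platform's recommendation system; item names and tags are invented.
from collections import Counter

# Hypothetical catalog: each piece of content is tagged with topics.
CATALOG = {
    "city_marathon_recap":   ["sports", "news"],
    "sneaker_review":        ["sports", "shopping"],
    "campus_election_story": ["news", "politics"],
    "protein_powder_ad":     ["shopping", "fitness"],
    "history_of_the_vote":   ["history", "politics"],
}

def recommend(engagement_history, k=3):
    """Rank unseen items by overlap with the topics a user has engaged with."""
    topic_weights = Counter(
        topic
        for item in engagement_history
        for topic in CATALOG.get(item, [])
    )
    scored = {
        item: sum(topic_weights[topic] for topic in topics)
        for item, topics in CATALOG.items()
        if item not in engagement_history
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]

# A user who clicks on sports content gets served more sports and shopping.
history = ["city_marathon_recap", "sneaker_review"]
print(recommend(history))
```

Every click feeds back into the topic weights, so the more you engage with one kind of content, the narrower the next round of suggestions becomes.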
So what can be done to hold YouTube (owned by Google) and other corporations accountable for the content their algorithms promote? Some propose stricter regulation of content or of funding sources. Countries like Estonia have implemented a nationalized internet service. Essentially, the internet needs to represent the interests of the consumer in every facet of the user experience, not just in the way advertisers tailor suggestions to appeal to the user. When the internet is no longer controlled by private corporate interests and net neutrality is realized, an internet of the people, for the people can take root. Consumers, especially impressionable students, should understand why they are seeing the information presented to them rather than accept it at face value. Let internet algorithms teach us a lesson in critical thinking, so we can better analyze the information we receive in and outside the classroom.
Featured Graphic by Anna Tierney / Graphics Editor