An Algorithmic Walk

by Morten Jensen Øllgaard

This week’s seminar challenged the participants to devise an algorithm that could act as a tour guide for the situated STS seminar. Our first instruction was to meet at the Martyrs Memorial, at the intersection of St Giles, Magdalen Street and Beaumont Street in Oxford. We managed to meet somewhere on the south side of the Memorial at the usual time of 4.30pm last Thursday.

Google’s obscure algorithm

First, Malte Ziewitz gave us a short briefing on his research on the practices of search engine optimisation and explained how an entire industry has developed thanks to Google’s obscure algorithm. The complexity of the algorithm running Google’s search engine makes it difficult for ordinary people and businesses to figure out what the algorithm actually does and how to achieve a good ranking on Google. Given the importance of a good ranking, many businesses hire consultants who specialise in search engine optimisation to help them perform better in various searches.

I decided to see if I could find out just how big a share of the search engine market Google has, to understand the importance of a good ranking on Google. I ‘googled’ “search engine market share” and got 55,400,000 results in 0.18 seconds! What I could discern from the different homepages I found via Google was that Google’s global market share is probably more than 80%, whereas its market share in the U.S. is approximately 65% (some uncertainty arises here because the Americans tend to mix up what is global and what is American). This little exercise made me realise the difficulty of verifying these numbers, as the many different sites that turned up in my search used different numbers and statistics. I must also admit that I did not care to go beyond the first page of my Google search, anticipating that I would not find a more authoritative answer among the remaining 55,399,990 results, just more numbers. (Actually, I cannot remember the last time I went beyond the first page of a Google search…) In this case Google’s algorithm ranked certain homepages over others, so the question is how we should evaluate this algorithmic intervention. Was it good, bad, or neither?

Can algorithms be wrong?

On the seminar’s reading list was an interesting blog post by Tarleton Gillespie, discussing whether algorithms can be wrong (link). The post takes the dispute over why Occupy Wall Street did not make Twitter’s trends list as a starting point for discussing the potential political and moral consequences of algorithmic interventions. The question is: was Occupy Wall Street censored and deliberately kept off the trends list, or was it simply not trendy according to Twitter’s algorithmic definitions? We do not know, as Twitter has declined to disclose its trend algorithm, both to protect its business and to keep people from manufacturing trends (that are not real trends, whatever that is). Would it have made a difference if Occupy Wall Street had made it onto Twitter’s trends list? The people behind Occupy Wall Street seem to think so.

I found the comments attached to Gillespie’s blog post to be both very interesting and entertaining, as they represent different stances on the issue. In general, the commentators agree that the ‘algorithmic reality’ is a particular perspective on what is trendy, but their agreement ends there. One commentator suggests that Twitter’s algorithm is a messy and complex piece of software developed over the years, and that consequently no one should be held accountable for its actions. Some discuss the technical possibilities of constructing so-called ‘open algorithms’ that promise less bias, abuse and misunderstanding. Others do not see what the fuss is all about, as they find Twitter’s trends mundane and unimportant. Finally, there is one person claiming it is all one big conspiracy, as Twitter, Facebook, Google etc. are all controlled by a wealthy elite who do whatever they can to reduce the visibility of anything they do not like. This commentator offers us a red truth pill and points to parts of the Internet that do not show up in Google’s algorithmic reality.

The victory of the minimax algorithm

The second item on the reading list was an article by Nathan Ensmenger titled “Is chess the drosophila of artificial intelligence? A social history of an algorithm”. The paper explores the link between the game of chess and the development of artificial intelligence (AI). Tracing developments from the 1970s up to the defining moment when Deep Blue defeated Garry Kasparov in 1997, Ensmenger demonstrates how chess became the experimental technology of AI research. In that respect chess is similar to Drosophila, the experimental technology of the genetic sciences.

Both the game of chess and the record-keeping communities of chess players turned out to be a good match for AI researchers. Chess was perceived to be a game that requires some “thinking”. At the same time, it is a finite game with a finite number of positions and moves, ensuring that every game eventually ends in a conclusive way (win, draw, or loss). In the early stages of the development of the chess computer, Ensmenger tells us, there were two competing algorithmic principles. The ‘Type-A’ algorithm, also called the minimax algorithm, uses a brute-force method, searching the tree of possible moves exhaustively; the competing ‘Type-B’ was considered more “human”, as it used heuristics to trim the decision tree by privileging certain branches over others. In the end, the minimax algorithm prevailed, despite being considered an inaccurate reflection of the way human beings play chess. It simply turned out to be the fastest way to reach the goal the AI community had set itself: to beat the best human chess player. In short, Ensmenger’s point is (a) that it could have been otherwise and (b) that software algorithms like minimax, Twitter’s trends and Google search are parts of heterogeneous environments, and it is therefore meaningless to isolate them from their social, economic, and political contexts.
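
For readers who want to see the contrast in concrete terms, here is a minimal sketch of the Type-A (minimax) principle in Python. It is only an illustration under strong assumptions, not Deep Blue’s implementation: the game is left abstract, and moves and score are hypothetical stand-ins for a real move generator and evaluation function.

    # A minimal sketch of the 'Type-A' (minimax) principle: search the game
    # tree exhaustively to a fixed depth, assuming both players play optimally.
    def minimax(state, depth, maximising, moves, score):
        successors = moves(state)
        if depth == 0 or not successors:  # horizon reached or game over
            return score(state)
        if maximising:
            return max(minimax(s, depth - 1, False, moves, score) for s in successors)
        return min(minimax(s, depth - 1, True, moves, score) for s in successors)

    # Toy usage: a 'game' whose state is a number; each move adds 1 or 2, and
    # the maximising player wants the largest number after four plies.
    best = minimax(0, 4, True,
                   moves=lambda n: [n + 1, n + 2],
                   score=lambda n: n)
    print(best)  # 6: the maximiser adds 2 on its turns, the minimiser only 1

A Type-B program, by contrast, would call something like moves selectively, following only the branches its heuristics deem promising.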

The good walking algorithm

Both Gillespie and Ensmenger conclude that we need to develop a language and methodologies for studying and speaking about algorithmic interventions. It was against this background that we were asked to discuss and devise an algorithm that could act as a tour guide for our situated seminar.

Taking a walk around Oxford for about an hour seems a simple task, but having to put it in algorithmic form was a reminder that practical everyday activities like walking entail things that are difficult to articulate, especially in an algorithm. This ethnomethodological lesson was frustrating, as it turned out to be quite hard to create an operational algorithm suitable for our purpose.

A complicated crossing

Our discussion centred on two main topics: (a) the algorithm had to provide a decision rule that could produce a definitive output, and (b) it had to be something we could remember. We came up with a walking algorithm consisting of two main components, sketched in code after the list:

  1. The intersection rule: We would toss a coin at every intersection we encountered. If the coin came up heads, we would walk to the right; if tails, to the left.
  2. The pub rule: We would enter the third pub encountered on our path.
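
Here is a minimal sketch of the two rules in Python, under obvious assumptions: the Oxford street network is reduced to a hypothetical stream of encounters, and the question of what counts as a proper intersection (more on that below) is abstracted away.

    import random

    # The walking algorithm: coin tosses at intersections, stop at the third pub.
    def walking_algorithm(encounters):
        pubs_seen = 0
        for encounter in encounters:
            if encounter == "intersection":
                # The intersection rule: heads -> right, tails -> left.
                yield "right" if random.random() < 0.5 else "left"
            elif encounter == "pub":
                pubs_seen += 1
                if pubs_seen == 3:
                    # The pub rule: enter the third pub and end the walk.
                    yield "enter the pub"
                    return

    # A made-up sequence of encounters standing in for an actual walk.
    walk = ["intersection", "pub", "intersection", "intersection",
            "pub", "intersection", "pub"]
    print(list(walking_algorithm(walk)))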

[Image: the coin flip]

Other rules were suggested, but they were either non-operational or conflicted with other interests. The application of the walking algorithm’s rules was subject to some discussion. For instance, there was some debate over what constitutes a proper intersection. The different interpretations were clearly influenced by the participants’ different interests: some wanted to attend a lecture at 6pm at Green Templeton College, nobody was interested in walking all the way to Banbury, and all of us wanted to experience an algorithmic walk. Whether it was conspiracy, luck, or just really good design, the algorithm managed to devise a route that took us around the city and ended at a pub, leaving enough time to have a beer and be at Green Templeton by 6pm.

The algorithmic route

Here is the route devised by the walking algorithm:

[Image: the route]

I can warmly recommend taking an algorithmic walk if you get the chance, as it is good fun and interesting at the same time.

This session was part of the ongoing reading group Encountering Science and Technology Studies: Situated Seminars. Rather than discussing readings in the confines and comfort of a seminar room, we immerse ourselves in locations that speak to the issues at hand. For upcoming sessions, please check the programme.

