Fully automated mistakes: the potential pitfalls of the algorithm

Written by
Tom Ritchie

31 Aug 2016


Automation is an ever-growing aspect of working life. Be it issuing a life insurance quote or clicking a button to order more loo roll, machines and algorithms are becoming ubiquitous.

A few months after it was revealed that Facebook’s Trending news function was curated by a team of editors who had been suppressing ‘conservative’ stories, the social network giant announced on Friday that it had fully automated the feature.

The results thus far have been calamitous. The algorithm, partly trained on the editorial team’s past decisions, has surfaced a fake news story about Fox News presenter Megyn Kelly and a controversial attack on political commentator Ann Coulter, amongst other missteps. Both stories came from sources that Facebook’s editors had previously ignored, as the editors worked from a list of reliable sources to populate the feed.

In an unattributed blog post on Facebook’s newsroom, the company said: “Our goal is to enable Trending for as many people as possible, which would be hard to do if we relied solely on summarising topics by hand. A more algorithmically driven process allows us to scale Trending to cover more topics and make it available to more people globally over time.”

While it may be more efficient and financially prudent to roll out a fully automated feed, losing the qualitative eye of an experienced editor can lead to negative publicity. An algorithm used in such instances can only be as unbiased as the data it is given.

Dr Emmanouil Gkeredakis is assistant professor of information systems at Warwick Business School. He believes that a news algorithm such as the one used by Facebook must be regularly checked before the service can become fully automated. “The question that needs to be asked by non-traditional news sources is: is the dataset the algorithm uses comprehensive enough to account for different forms of discrimination?

“It is important to keep an eye on what the algorithm returns and conduct ‘fairness checks’ before you roll out full automation. Unintended and new forms of discrimination are always possible.”
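The kind of ‘fairness check’ Dr Gkeredakis describes can be made concrete. The sketch below is purely illustrative, not Facebook’s actual system: it assumes a human-curated whitelist of vetted outlets (like the one Facebook’s editors reportedly used) and audits what a hypothetical trending algorithm returns, flagging stories from unvetted sources and any single outlet that dominates the feed.

```python
# Illustrative pre-rollout audit of an algorithmically generated trending feed.
# VETTED_SOURCES and the story data are hypothetical examples, not real config.
from collections import Counter

VETTED_SOURCES = {"reuters.com", "apnews.com", "bbc.co.uk"}

def audit_trending(stories, dominance_threshold=0.5):
    """Flag stories from unvetted outlets, and outlets supplying more than
    dominance_threshold of the feed. Each story is a dict with a 'source' key."""
    unvetted = [s for s in stories if s["source"] not in VETTED_SOURCES]
    counts = Counter(s["source"] for s in stories)
    total = len(stories)
    dominant = {src: n / total for src, n in counts.items()
                if n / total > dominance_threshold}
    return unvetted, dominant

feed = [
    {"headline": "Markets steady after rate decision", "source": "reuters.com"},
    {"headline": "Presenter 'sacked', sources claim", "source": "unreliable-example.com"},
    {"headline": "Storm warning issued for coast", "source": "bbc.co.uk"},
]

unvetted, dominant = audit_trending(feed)
# 'unvetted' now contains the single story from unreliable-example.com;
# 'dominant' is empty, since no outlet supplies more than half the feed.
```

Checks like this do not eliminate bias, since the whitelist itself embeds editorial judgement, but they give a human reviewer a concrete list to inspect before automation is switched on.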

With a heavily edited process such as news aggregation, it is understandable that full automation has led to blunders. While Facebook may have been embarrassed by its algorithm’s gaffes, the stakes there are not especially high. With Uber planning to introduce a driverless car service in Pittsburgh, the risks of getting automation wrong are becoming far greater.

Dr Gkeredakis commented: “It is important to consider the cost of even a single mistake. For example, the cost of a single mistake by driverless cars, no matter how good your algorithm seems to be at predictions, can be huge. It is important to consider whether failures of algorithms are posing huge risks for a business. It is prudent to avoid full automation when the risk of an accident is very high and costly.”

The steady march towards an automated world is well underway, and in many ways is making modern life easier. But rather than ushering in an age of fewer jobs for editors, taxi drivers and manufacturers, it seems that the keen eye of a qualified professional remains a valuable commodity.