
Algorithmic Transparency

Algorithms that make their reasoning visible

The Context

We are all familiar with recommendation algorithms – the dark forces that shove fad-diet shills, flat-earthers, and Jordan Peterson into our feeds.

They've become a modern myth. There are a thousand moral-panic op-eds raising the alarm: algorithms are controlling us, manipulating us, radicalising us, and preying on our insecurities and "secret desires." All just to sell us overengineered toothbrushes, vegan whey powder, and blow-up paddling pools, among other goodies.

While it's unclear whether algorithms are truly "harming the children", there are still a few legitimate concerns about the ways we're allowed to interact with them in modern interfaces.

For a start, we aren't told what kinds of data go into our algorithmic systems, and have no visibility into why certain results come out. Most algorithm-driven interfaces black-box their internal logic and decision-making rules. Faced with these opaque systems, we end up fetishising the algorithms – we attribute human agency and capacities to them, and in doing so hide the very real human agents designing and maintaining them. We call them "magic" to dismiss the need to make them legible to users.

As an audience we have no visibility into what metrics the engineers are optimising for. The engineers themselves don't always understand how the algorithms choose what to recommend. The lack of transparency makes it difficult to understand why we're shown certain content. It hides the fact that some algorithms maximise qualities like emotional outrage, shock value, and political extremism. We lack the agency to evaluate and change the algorithms serving us content.

The Pattern

When an automated system recommends a piece of content, it should include a message explaining why it suggested it, and what factors went into that decision.

Transparency tips integrated into user interfaces, like "Recommended because you liked Pride and Prejudice" or "Recommended because Mary Douglas read this", help us see the chain of logic.
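As a rough sketch of what this might look like under the hood, each recommendation could carry structured reasons alongside the content itself, which the interface then renders as one of these short explanations. The types and function names below are hypothetical illustrations, not any platform's actual API:

```typescript
// Hypothetical sketch: a recommendation that carries its own provenance,
// so the interface can render "Recommended because…" messages.

type RecommendationReason =
  | { kind: "liked_item"; itemTitle: string }        // "because you liked X"
  | { kind: "followed_person"; personName: string }  // "because Y read this"
  | { kind: "topic_interest"; topic: string };       // "because you follow Z"

interface Recommendation {
  contentId: string;
  title: string;
  reasons: RecommendationReason[]; // the factors that went into the decision
}

// Turn the structured reasons into the transparency tip shown in the UI.
function explain(rec: Recommendation): string {
  const parts = rec.reasons.map((r) => {
    switch (r.kind) {
      case "liked_item":
        return `you liked ${r.itemTitle}`;
      case "followed_person":
        return `${r.personName} read this`;
      case "topic_interest":
        return `you follow ${r.topic}`;
    }
  });
  return `Recommended because ${parts.join(" and ")}`;
}

// explain({ contentId: "42", title: "Emma",
//   reasons: [{ kind: "liked_item", itemTitle: "Pride and Prejudice" }] })
// → "Recommended because you liked Pride and Prejudice"
```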

Transparency alone isn't enough though. Users should have control over the data that feeds the algorithmic systems they're subjected to. We should be able to remove whole input sources, as well as individual pieces of content.
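In the same hypothetical vein, that control might look like letting people switch off entire input sources or strike individual items before those signals ever reach the recommender. Again, the names here are illustrative assumptions rather than a real platform's API:

```typescript
// Hypothetical sketch of user-held controls over the inputs to a feed.

interface Signal {
  sourceId: string; // e.g. "watch-history", "likes", a followed board or topic
  itemId: string;   // a single video, pin, or post
  weight: number;
}

interface FeedPreferences {
  excludedSources: Set<string>; // whole input sources the user has switched off
  excludedItems: Set<string>;   // individual pieces of content removed
}

// Drop any signal the user has opted out of before it reaches the
// recommender, rather than hiding the results after the fact.
function filterSignals(signals: Signal[], prefs: FeedPreferences): Signal[] {
  return signals.filter(
    (s) =>
      !prefs.excludedSources.has(s.sourceId) &&
      !prefs.excludedItems.has(s.itemId)
  );
}
```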

Pinterest does a good job of telling you why you're seeing certain pins, as well as giving you controls to remove the pin and see fewer like it:

They also have a "tune your home feed" settings page where you can remove specific pins, boards, or topics:

Despite being a notorious algorithmic bad boy, YouTube also gives you the ability to mark videos in your feed as "not interested" or "don't recommend channel", then follows up asking why you marked it.

They also offer a "remove this video" option on your watch history page:

Letting users remove single videos/posts/tweets/what-have-yous from their algorithmic soup is the bare minimum for any platform. We're a long way from meaningful transparency that makes internal algorithmic decisions clear and empowers users to design their own feeds.
