Today we incorporate more data-driven decision making into public life, and many people are uncomfortable with the secrecy behind those decisions. Algorithmic systems affect real outcomes in the lives of citizens and customers, and people are taking notice. The desire for transparency in data analysis has even led to serious legislation. Still, companies are slow to explain their work in detail.

Against Transparency

There are many reasons that companies and governments hesitate to release their algorithms. The most common is that algorithms are often trade secrets: they represent years of experience and research distilled into a few lines of code. Another is that explaining an algorithm can reveal too much of a company's strategy. Google, for example, keeps its site-ranking criteria secret to prevent anyone from exploiting them, and frequently changes them to stop people from gaming the system.

The concerns of consumers are equally valid. They want to know why they are targeted with certain ads, or why they are flagged as a criminal risk.

But there is another problem: these models aren't designed for transparency. Prediction and classification models can make determinations about a likely outcome, but they can't explain the impact each factor had on the decision.

Explaining algorithmic decisions to the end consumer can be difficult.

To continue employing data-driven decisions, companies and governments will have to work toward more transparency. One suggestion is that analysts and data scientists could work backwards from a prediction to show how it was made, but reverse-engineering a predictive model after the fact carries significant costs.
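To make "working backwards" concrete, here is a minimal, hypothetical sketch of one common post-hoc approach: probing an opaque model by perturbing one input at a time and re-scoring. The scoring function and all names and numbers below are invented stand-ins, not any real production model.

```python
def risk_score(income, late_payments, account_age):
    """Invented stand-in for a black-box scorer; returns a risk score in [0, 1]."""
    raw = 0.5 - 0.04 * income + 0.15 * late_payments - 0.02 * account_age
    return min(1.0, max(0.0, raw))

def perturbation_impacts(model, inputs, delta=1.0):
    """Estimate each factor's impact by nudging it slightly and re-scoring.

    Returns a dict mapping factor name -> change in score when that factor
    is increased by `delta`, all other factors held fixed.
    """
    baseline = model(**inputs)
    impacts = {}
    for name, value in inputs.items():
        nudged = dict(inputs, **{name: value + delta})
        impacts[name] = model(**nudged) - baseline
    return impacts

# Hypothetical applicant: units and values are illustrative only.
applicant = {"income": 5.0, "late_payments": 2, "account_age": 3.0}
print(perturbation_impacts(risk_score, applicant))
```

Even this simple probe has the costs the paragraph above alludes to: it requires one extra model call per factor, and it only approximates each factor's role, since interactions between factors are invisible to one-at-a-time nudges.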

Transparency by design

A better long-term solution is to build transparency into the analysis from the beginning. This not only creates transparency for the individuals affected, but also gives ongoing guidance at every step of the analysis process. The analysts themselves see deeper into their own work.

Instead of opaque formulas, approaches like EmcienPatterns build transparency into the process with reason codes: a reporting function that outlines how each prediction was made, including the counterpoints or “losing” arguments. What was once a post-mortem on a failed project can now be corrected in flight.

Here is an example of how EmcienPatterns opens up the results of the algorithm, showing the impact of each data point.
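EmcienPatterns' actual report format isn't reproduced here; as a rough, hypothetical sketch of the reason-code idea (all factor names and numbers invented), a few lines of Python can show the shape of such output: the factors that argued for the prediction and those that argued against it, ranked by strength.

```python
def reason_codes(contributions):
    """Split per-factor contributions into supporting ("winning") and
    opposing ("losing") reasons, each ordered by strength.

    Positive contributions pushed toward the predicted outcome;
    negative contributions pushed away from it.
    """
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    winning = [(factor, c) for factor, c in ranked if c > 0]
    losing = [(factor, c) for factor, c in ranked if c < 0]
    return {"winning": winning, "losing": losing}

# Invented contributions for a hypothetical "customer will churn" prediction.
contribs = {"support_tickets": 0.42, "tenure_years": -0.30,
            "recent_logins": -0.05, "plan_downgrade": 0.18}
codes = reason_codes(contribs)
print(codes)
```

The point of the structure is that the losing arguments are kept, not discarded: a reader can see not just why the prediction won, but how close the call was.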

Today the outcomes of many data-driven systems simply can't be explained. But there is now an alternative: more transparent algorithms show what's happening behind the curtain, so users know more about what their systems are doing and can pass that transparency on to the consumer. As data and analytics advance, transparency will have to keep pace.