Big Data and Human Rights: 5 Frequently Asked Questions

September 20, 2016

We have moved well beyond the time when big data issues were relevant only to Silicon Valley tech companies. Now, any company that collects information and engages in big data analytics, whether a retailer, a job recruitment site, a manufacturer of connected appliances, or a healthcare company, is collecting, storing, and analyzing enough data to feel both the positive and negative impacts of big data. Those impacts can include serious human rights concerns, and because almost all companies store and process large amounts of data, almost all companies will eventually grapple with this issue.

To help companies make sense of big data and human rights, we have compiled answers to questions that we frequently hear from our members and partners.

What is big data?

Big data refers to the ability to capture, aggregate, and process an ever-increasing volume, velocity, and variety of data. The data being collected, stored, and processed is nearly limitless in its variety and purpose, ranging from consumer spending and retail habits, to tastes in music or books, to sensitive patient and healthcare information, to financial records.

What companies do with this data is often decided by an algorithm, a procedure or set of instructions developed to solve a problem. For example, your online shopping habits generate data, which is processed by an algorithm that then determines which advertisements are most appropriate for you, as in the simplified sketch below.
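To make that idea concrete, here is a minimal, purely illustrative sketch of such a rule: a hypothetical retailer scoring ad categories against a shopper’s purchase history. The products, categories, and scoring logic are all invented for illustration and are not drawn from any real system.

```python
from collections import Counter

# Hypothetical purchase history for one shopper (illustrative data only).
purchase_history = ["running shoes", "yoga mat", "protein bars", "novel"]

# A simple mapping from products to ad categories (also invented).
product_to_category = {
    "running shoes": "fitness",
    "yoga mat": "fitness",
    "protein bars": "nutrition",
    "novel": "books",
}

def choose_ad_category(history):
    """Pick the ad category that appears most often in the purchase history."""
    counts = Counter(product_to_category.get(item, "general") for item in history)
    category, _ = counts.most_common(1)[0]
    return category

print(choose_ad_category(purchase_history))  # -> "fitness"
```

Even a toy rule like this shows how decisions about what a person sees flow directly from the data collected about them.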

How does big data relate to human rights?

Privacy and discrimination are the two main human rights concerns emerging from big data collection. Within these two issues, the rights of vulnerable groups deserve particular attention. We have made progress in the offline world around civil rights, and we do not want to undermine that progress by automating discrimination and unfairness.

How does big data involve the right to privacy?

Consider the following example concerning a customer’s privacy from a 2014 White House report on big data: As connected homes become more common, internet-of-things appliances like smoke detectors and other environmental sensors will begin collecting data on conditions within each user’s home. What if an insurance company is able to purchase this data, determine that a prospective customer is a smoker, and refuse to insure that person on that basis? Is the collection of this data by the smart-home appliance a violation of the user’s privacy, or is the insurance company’s use of that data the violation? These are important questions that can have serious consequences if the implications of data collection and sharing are not considered.

How does big data affect the right to non-discrimination?

Some assert that algorithms, by removing humans from the decision-making process, will reduce discrimination, but discriminatory assumptions can be built into the algorithms themselves. Take, for example, this concern raised by Eric Null, policy counsel at New America’s Open Technology Institute, about how big data on crime can have positive or negative effects:

“Predictive policing technologies can be used for good or to perpetuate injustice. Algorithms fed with racially biased data will merely perpetuate the biases already inherent in policing and lead to more injustice. We already know that police data often incorporates only reported crimes, which means policing will become biased toward those areas.”

In this case, the inputs to the algorithm may be based on data that does not reflect overall crime trends. On its surface, algorithmic data analysis may seem as if it should not produce discriminatory outcomes, but algorithms are only as good as the data they analyze.
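A stylized example can show how this feedback works. The neighborhood names and incident counts below are invented, and the allocation rule is deliberately naive; the point is only that a rule driven by reported incidents keeps sending patrols to the areas that are already most heavily policed, regardless of actual crime rates.

```python
# Invented counts of *reported* incidents per neighborhood. Reporting rates
# differ because some areas are already patrolled more heavily, so these
# counts reflect policing intensity as much as underlying crime.
reported_incidents = {"north": 120, "south": 30, "east": 45, "west": 25}

def allocate_patrols(reports, total_patrols=10):
    """Naively assign patrols in proportion to reported incidents."""
    total_reports = sum(reports.values())
    return {
        area: round(total_patrols * count / total_reports)
        for area, count in reports.items()
    }

# More patrols in "north" generate more reports there next period,
# which generates even more patrols: the bias reinforces itself.
print(allocate_patrols(reported_incidents))  # e.g. {'north': 5, 'south': 1, ...}
```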

What can companies do to prevent unintended human rights consequences from their use of big data?

Big data is not a problem in and of itself, but unintended consequences, such as privacy violations and discriminatory practices, are. The algorithms that process big data play an ever-increasing role in influencing major decisions that affect our lives, and since governments are only now beginning to address the due process aspects of these systems, companies need to implement measures to identify and mitigate potential human rights risks.

To do so, companies can do the following:

  1. Determine who is using big data within the company and for what purposes (a simple data-use inventory, sketched after this list, can help).
  2. Undertake a human rights impact assessment to determine if there are potential human rights consequences to how that data is being used.
  3. Connect with thought leaders, such as the Center for Democracy and Technology, to understand the latest concerns around big data collection and its human rights impacts.
  4. If human rights impacts are identified, create an action plan with the employees who own or develop the applications that use big data.
  5. Make recommendations on how those impacts can be mitigated or eliminated while maintaining the usefulness of the application.
  6. Be transparent and disclose the data the company collects and the mechanisms used to process and analyze that data.
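One lightweight way to begin steps 1, 2, and 6 is to keep a machine-readable register of data uses that can feed an impact assessment and a public disclosure. The `DataUse` structure, its fields, and the sample entry below are a hypothetical sketch, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class DataUse:
    """One entry in a company's big data inventory (illustrative fields only)."""
    owner: str                  # team or role responsible for the application
    data_collected: list        # categories of data the application uses
    purpose: str                # why the data is processed
    human_rights_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

inventory = [
    DataUse(
        owner="marketing analytics",
        data_collected=["purchase history", "browsing behavior"],
        purpose="ad targeting",
        human_rights_risks=["profiling of vulnerable groups"],
        mitigations=["exclude sensitive categories", "periodic bias review"],
    ),
]

# Entries with identified risks become the starting point for an action plan (step 4).
for use in inventory:
    if use.human_rights_risks:
        print(use.owner, "->", use.human_rights_risks)
```

A register like this also makes the transparency step easier, because the company already knows what it collects and how it is processed.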

Transparency in particular is important for two reasons. First, sharing insights and lessons learned with industry peers and the human rights community can promote positive human rights outcomes. Second, the public has a right to know what data is being collected and how that data is being processed in order to assess the validity of the data and the fairness of the conclusions reached.

By understanding how their data is being used and what the outcomes are, whether intended or not, companies can avoid potential negative human rights impacts while harnessing the power of big data for better analytics and outcomes. Walking through the steps outlined above can help companies capitalize on human rights opportunities while avoiding the negative consequences of big data.
