Authors
- Senior Advisor, BSR
- Michaela Lee, Former Manager, BSR
- Director, Technology and Human Rights, BSR
As business, government, and civil society make progress in addressing the human rights impacts and ethical questions arising from the use of artificial intelligence (AI), we believe that one supremely important constituency needs to participate much more actively: the “non-technology” companies integrating AI into their business operations, strategies, and plans.
Without these participants, dialogue about AI and human rights risks being too focused on the development of AI, with insufficient attention given to the companies deploying AI. Our aim with this blog is to explain why.
In August last year, BSR published three reports setting out the importance of taking a human rights-based approach to the development, deployment, and use of AI. Since then, we’ve put this advice into practice in our work with BSR member companies in the U.S., Asia, and Europe to develop policies on AI and human rights, engage with civil society organizations, and undertake human rights due diligence of AI solutions. We’ve had the opportunity to consider a wide range of scenarios—including algorithmic decision making, facial recognition, and sentiment analysis—as well as a wide variety of application areas, including retail, national security, human resources, and transportation systems. As can be imagined for technologies that are evolving so rapidly, it’s been a time of extraordinary learning.
In many ways, this work has confirmed prevailing assumptions about how technology companies can fulfill their responsibility to respect human rights when impacts arise from the use of their products and services. Across many different settings, our recommendations have coalesced around some common themes: scrutinize the quality of training data, examine to whom you sell products and services and refuse sales to those most likely to misuse them, and establish acceptable use policies that place restrictions on how products and services may be used. We’ve also recommended system-wide approaches, such as advocating for rights-protecting laws and regulations, increasing disclosure and transparency, and providing best practice guidance for users.
These are all important responsibilities held by technology companies, and nothing that follows should suggest otherwise. However, we’ve found that the common thread running throughout these recommendations is the notion that technology companies should use their leverage to prevent the misuse of products, services, and technologies by influencing the actions of others—and that no matter how much effort is deployed, there is no guarantee of success.
This observation has led us to one simple question: In addition to trying to influence the actions of others, shouldn’t we also be working more directly with the companies, governments, and organizations that are deploying AI themselves? In the language of the UN Guiding Principles on Business and Human Rights, why would we spend most of our time working with the companies that are contributing to or directly linked to human rights impacts, and much less of our time working with the companies that might be causing them? Consider a few examples:
- In the retail industry, stores are deploying AI for theft prevention, creating new privacy, security, and discrimination risks, especially for vulnerable populations and marginalized groups.
- In the transportation industry, airlines and airports are deploying facial recognition during the boarding and screening process, raising important issues of consent and non-discrimination.
- In the automotive industry, car companies are collecting more location data than ever before and sharing it with governments, provoking new questions about whether automotive companies should join with technology companies in publishing law enforcement relationship reports.
- In the hotel industry, facial recognition technologies are being used to ease the check-in process, impacting rights such as freedom of movement.
Companies in all these industries should be taking a human rights-based approach to their use of AI.
The second of the three reports we published last year on the importance of a human rights-based approach to AI anticipated these issues. In it, we listed the human rights risks and opportunities arising from the use of AI in the financial services, health care, retail, transportation, agriculture, and extractives industries, and proposed sector-wide impact assessments for each industry. One year on, we are doubling down on this point of view, most recently at the Skoll World Forum, Sustainable Brands Paris, and our own BSR Connect events.
Decisions made today about the deployment of AI will bring significant consequences for the realization of human rights long into the future. This means that undertaking due diligence of AI across all industries is a matter of urgency, not something that can be deferred. Today, we are joining the Partnership on AI, and we look forward to making good on this perspective by working more closely with the Partnership and with BSR member companies across all industries to assess the human rights impacts arising from their use of AI.