Guests
- Hannah Darnton, Director, Technology and Human Rights, BSR
Hannah works with multinational companies to align business and human rights strategies and facilitate incorporation of sustainable practices into business operations across sectors.
She focuses on the intersection of human rights and new, disruptive technology and leads the Tech Against Trafficking collaborative initiative.
Prior to joining BSR, Hannah worked with the Skoll Foundation, where she co-led the portfolio and investments team’s efforts to identify social entrepreneurs with the potential to drive large-scale social change. Her work led to over US$20 million in grants and investments between 2015 and 2018. Before Skoll, Hannah spent six years working in anti-human trafficking in West Africa, Southeast Asia, and the Bay Area. She is fluent in French.
Hannah holds a Master’s in NGOs and Development from the London School of Economics and a B.A. in Political Science and French from the University of Michigan. She currently serves on the advisory boards of Oxfam’s Women in Small Enterprise initiative and Convening17.
Recent Insights From Hannah Darnton
- The Human Rights Impacts of AI / June 4, 2024 / Audio
- The EU AI Act: 11 Recommendations for Business / May 21, 2024 / Blog
- The EU AI Act: What It Means for Your Business / April 25, 2024 / Blog
- A Business Guide to Responsible and Sustainable AI / March 27, 2024 / Insights+
- A Human Rights Assessment of the Generative AI Value Chain / February 9, 2024 / Blog
- David Stearns, Managing Director, Marketing and Communications, BSR
David leads BSR’s marketing and communications initiatives, working with a global team to amplify the organization’s mission and showcase its activities, impacts, and thought leadership to members, partners, and the wider business and policy community.
David previously worked for The B Team, a group of global business and civil society leaders working to catalyze a better way of doing business for the well-being of people and the planet. Throughout his 20-year career, he has worked with businesses and nonprofits in economic development, public health, and sustainability to define and communicate their purpose and impacts.
He has built high-impact communications campaigns for a collaboration to improve maternal health in Zambia and Uganda, driven top-tier media coverage for a major economic development project in upstate New York, and helped strengthen parliamentary capacity and voter education efforts in South Africa and Zambia. He began his career as a newspaper reporter.
David earned his M.A. from The Elliott School of International Affairs at the George Washington University and his B.A. in Journalism and Political Science from Michigan State University.
Recent Insights From David Stearns
- Reflections from Climate Week NYC: The Tension Between Pragmatism and Ambition / October 1, 2024 / Audio
- Navigating U.S. Election Uncertainty: A Call to Action for Sustainable Business / August 1, 2024 / Audio
- What the SBTi Battle Portends: The Decisive Decade Becomes the Dilemma Decade / June 17, 2024 / Audio
- Responsible and Sustainable AI / June 4, 2024 / Audio
- Regulating AI / June 4, 2024 / Audio
Description
Hannah Darnton, Technology and Human Rights Director, chats with David Stearns about the Human Rights Impacts of AI, exploring:
- How the use of AI may impact human rights, and where we might see examples of this.
- Ways that AI can be used as a tool to protect human rights.
- Whether, in BSR’s experience, companies are receptive to conducting human rights risk assessments of their AI practices.
- Recommended resources for companies looking to get started.
Transcription
David Stearns:
Welcome to BSR Insights. I'm your host, David Stearns. In this series, we'll be talking to BSR experts from across our focus areas and industry teams to discuss the environmental and social implications and opportunities arising from the increasing development and deployment of artificial intelligence in the global economy. We're joined today by Hannah Darnton, Director of Technology and Human Rights at BSR. Hannah works with companies across the tech sector to integrate human rights-based approaches into company policies, products, services, and strategies. Hi, Hannah.
Hannah Darnton:
Hey, David. Thanks for having me. Excited to chat today.
David Stearns:
Great to have you. This is going to be a really interesting conversation on the human rights impacts of artificial intelligence, and I thought we could start with a level-setting question about the ways that the use of AI by different businesses can impact human rights. So, can you tell us what some of the issues are and give some examples of where human rights issues might arise for companies?
Hannah Darnton:
Oh, there are so many. I think one of the things that we think about a lot is that the Universal Declaration of Human Rights really helps provide us with a list of rights that we can assess against to gauge where there are risks and potential adverse impacts to people. So we use that as a starting point, and pretty much every one of the rights listed in the Declaration can be impacted in some way, shape, or form by AI. So where you sit in the value chain is really what we have to consider first. Are you a deployer of AI, using AI in your business, whether in a retail shop or at a pharmaceutical or healthcare company developing drugs? Are you a trucking company thinking through how to map your most efficient routes? There are lots of different use cases for AI.
On the flip side, we also have the developers of AI, the Googles, the Microsofts, the Amazons of the world, those big tech companies that are helping put together the different AI services, tools, and products that might be used. And so the impacts are different depending on where you are in the value chain. If you're looking at some of the developers of technology, they're the ones creating the AI models. And if they're creating those models without considering whether they are fair and representative, there might be impacts related to discrimination, bias, or a lack of equitable inclusion of different populations. Those are all things that are really front of mind for us. Another is privacy: was the data they're using collected in a way that preserved the privacy of the populations they were engaging? These are all rights that come top of mind for us.
On the other side, if we look at those deployers of technology, so those retail companies I was mentioning, the healthcare companies, et cetera, they might impact human rights in different ways. The AI technologies they're using might collect data about the people walking around their retail stores. They might make decisions about the employees they're working with. They might output information to a customer in ways that are discriminatory or unfair. So there are lots of different ways they can impact populations, depending on where they sit in this deployer-versus-developer value chain.
So the use of these technologies can really have significant human rights implications, particularly for vulnerable, marginalized communities. We often think about privacy, the right to non-discrimination, and freedom of expression, but AI can also have implications for a much broader suite of human rights, including the right to health, access to culture, access to science, and child rights. And these impacts really vary based on how AI technologies and products are designed, developed, and used.
So if we think about a couple of examples, one might be a company using AI and facial recognition to assess someone's skin tone and match them with a certain makeup. And let's say the training data they're using to develop their technology or product is non-representative: it doesn't include everyone who might actually use the product in question. So the matching system doesn't end up working, or doesn't work as well, for people of color. That's just one example of how a system or product may not work equitably for all users, or may discriminate against certain individuals or groups.
Now let's take that same example and say your tech company is actually trying to improve that matching technology. So they're collecting information and trying to create a more representative database to train the AI model, so that they now have a database that includes people of color and actually works better for all users across ethnicities and races. But they end up holding a database of people of color with their personal information and their facial biometrics, and that can have massive privacy implications. How long is that data held? Did people provide consent for all the different use cases? What can that data be used for? These are all questions that come up. There are even questions around law enforcement: what happens if law enforcement requests information on, for example, all women of color living in a certain area who have been included in that database? Those are all things that can come up and may have potential impacts on human rights, touching on some of the issues we've raised as part of this discussion.
David Stearns:
Are there ways that AI can actually be used as a tool to protect human rights? Are there any use cases like that or do we not get into that yet in our practice?
Hannah Darnton:
Yeah, we often think about both the risks and opportunities of AI. The risks, I think, can be at an individual or a societal level. We can think about individual risks: what's personally impacting my right to privacy, my right to non-discrimination, my ability to access goods and services. But we also look at cumulative impacts. As the use of AI across many different companies becomes more pervasive in our everyday lives, we see a lot of cumulative impacts. So I'm constantly interfacing with AI services, or let's say facial recognition services, that use AI to identify me as a person and then make decisions based on who's in the room.
Those are all different things that can impact broader swaths of society. But when we're looking at opportunities, there are many ways in which AI can be used to realize human rights or to promote positive use cases. AI can be used, for example, to identify and remove child sexual abuse imagery online. It can be used to help find lost children. It can be used to make certain systems more effective in ways that help us as employees do our work better or that reduce carbon emissions. So there are many positive use cases of AI alongside the ways in which it poses risks.
David Stearns:
So thank you for that. And coming back now to the ways that companies can mitigate human rights harms or risks, I know that as a regular feature of your work, you do a lot of human rights impact assessments based on the UN Guiding Principles on Business and Human Rights. And one goal of these, as I understand it, is to provide practical guidance to companies on how to identify, prioritize, and mitigate those risks. Can you talk a little bit about how you might approach a human rights assessment, or human rights due diligence more broadly, for a company based on their AI practices?
Hannah Darnton:
Yes. So we often work with companies to do exactly what you just described: to undertake human rights risk assessments of their products, their services, and their broader use of various technologies. As we've looked at AI, we've really helped to unpack the risks of specific use cases. That involves considering the technology itself: how is it being built, developed, and designed? But then we also look at how it's being deployed: who's using it, where is it being used, and for what purpose? Each of these adds specific elements of consideration that help us better understand the risk profile. Then we take the details we've gleaned from understanding the scenario we're considering and unpack the specific human rights impacts, and that includes both the potential impacts, or risks, as well as the actual impacts that are occurring today.
And so once we've done that risk mapping to assess how human rights are impacted, we can then look at the ways in which companies can avoid, prevent, and mitigate those risks. And it's a long list. There are lots of things companies can and should put in place to help address risks, and they cover a whole spectrum of activities: everything from putting solid terms of service in place that limit use for nefarious purposes, to potentially limiting the actors that can access and use your AI products and services, to putting in technical guardrails that help prevent malicious use cases. And then it goes beyond that to think about, okay, how do we design this in a way that actually avoids the risk from occurring in the first place?
How do we make sure that the training data we're using in our AI model is representative, that it's inclusive of the population we're intending to serve, and that it doesn't have inherent bias or discrimination built into it? And then it moves on to other items such as, "Okay, who's going to use it in the end? What does this mean when we roll it out, either as an open-source model or as a product or service that can be used by a range of different customers or individual end users?" And that's where we get into some of the nitty-gritty elements of, "Okay, how should this be used when it's being used? How do we ensure that it's being used in the way that we originally intended?" So we often talk about the use of facial recognition, for example. We want to make sure that it's being deployed in ways that are safeguarding individuals' privacy, and that people understand, when they walk into a specific context or setting, that their data is being captured and what it'll be used for.
Have they consented to that? What does that consent process look like? Do they have the ability to opt in and out? Now, there are different environments in which those considerations are going to be front of mind, right? If you go to an airport, there is an assumption that you will be in a protected space and that you are most likely under some form of surveillance in order to protect your own safety. But if you go into a school or a specific store, that comes with a somewhat different set of considerations. You need to know as a consumer what you're walking into and what you're consenting to, what data is being captured, and how it's being used.
David Stearns:
I really like that analogy. That helps to really distill it for us. I'm wondering, this is a new... well, relatively new topic for many companies. I know it's not for the tech companies, but some of the companies that are deploying may be trying out different AI systems for the very first time. And I'm just wondering, any advice on where companies can learn more? Are there specific resources that are available now, or that might be coming out soon, that you would recommend to business leaders who are wrestling with some of these questions, as a starting point?
Hannah Darnton:
Good question. Yes, there are tons. It's more a matter of whittling down exactly what's most useful for which company, right? If you're deploying AI, that's going to be a very different set of resources than if you're developing it. If you're a developer, you're in the weeds. You're trying to figure out exactly how to create a useful, effective tool that is also non-biased and non-discriminatory, works as intended, and avoids the risks that we just mentioned. I think that BSR is a great starting place. We have a lot of great resources on how to use, deploy, and develop AI technologies effectively and in a rights-respecting manner. But there are also lots of other great organizations out there putting out guidance on the responsible development and use of AI, including the Partnership on AI. We have a lot coming out of the EU at the moment, from the OECD, and from NIST.
We have quite a few others as well. But a lot of times what we're seeing is that this guidance is put out primarily for developers, and that the deployers of technology are not always in the mix as much as we'd like to see them. And I think this is where there's really a gap in the field, and one that we've tried to fill by putting out our AI and human rights primers. So if you are a retail, healthcare, extractives, or financial services company, what are the human rights impacts that you need to take a look at? What do you need to consider? And what are the primary mechanisms that you can put in place to make sure that you're addressing harms before they even take place? We have some materials that give a starting point there, but I think there's a little more guidance needed for different parts of the ecosystem, to make sure they're paying attention to the right impacts and addressing them in collaboration with the developers of technology.
David Stearns:
I appreciate that little plug for the BSR content. I actually have read a lot of those BSR industry primers for AI and human rights and found them very useful, so I would also encourage folks to go to bsr.org and find those open-source pieces of content. I think they'll find them very useful. Before I get to our last question, I'm just curious to hear: in your conversations with companies, do you sense that they're generally receptive to this conversation about the need to consider the human rights implications of this technology? Particularly the deployers, like you mentioned, that haven't been dealing with this on a day-to-day basis, but who may be just coming to this now. It's an exciting new tool, there's lots of buzz in the media around it, everyone's using it, people are playing with ChatGPT. But are they receptive to the idea that you can do this, but there are some other things you need to think about as you're going about it, to get it right from the start, so to speak? Are you finding that companies are generally receptive to that conversation?
Hannah Darnton:
I think so. I think a lot of companies want to do it right. There's a business case for doing it right the first time and not having to go back and redo something. But there's also a lot of regulation coming now on AI and human rights, or fundamental rights more broadly. And I think that's pushing companies in the direction of making sure they get this right, that they have the frameworks and processes in place to respect human rights in the first place. My colleague, I think, will speak to this in another podcast, but the tech industry has entered a new era of regulation, with several regulations shaping how the industry assesses risks, addresses adverse impacts, and discloses those risks to the public. And each regulation really focuses on specific issues: freedom of expression, AI, privacy. Some are tailored to tech, while others are a little broader.
And we've really worked with the tech industry to prepare for these various regulations. As we've done so, it's become abundantly clear that human rights-based approaches, and particularly the implementation of the UN Guiding Principles on Business and Human Rights, provide a common thread that ties them all together. There are a few key features that distinguish a human rights-based approach, and I'm happy to get into those, but the overarching value of those features is becoming abundantly clear as we enter this new era of regulation. So while there was incentive before, there's even more incentive now to get these processes and approaches embedded within a company and integrated more seamlessly, so that companies can align and streamline the processes that help them comply with these regulations.
David Stearns:
Good to hear that companies have been receptive, and the bit on regulation is not surprising. And yes, Richard did give us a great overview of the new EU AI Act in particular, looking at the ways that companies will be more heavily scrutinized around their use of AI and will have to be a bit more transparent. We'd like to end all of our conversations with a question about what you see next, what's coming down the road, what you see around the corner, particularly for a rapidly growing technology like AI, which, as we already alluded to, is getting so much buzz. Thinking maybe five years down the line, Hannah, what do you see emerging that companies might want to start thinking about around the use of AI in their businesses?
Hannah Darnton:
That's a great question. I think there are a lot of things that are constantly evolving. We are in a period where every day brings new evolutions in AI technology, and generative AI is definitely one of them, but it's only one piece of the AI puzzle. I think we're going to consistently see new evolutions like that moving forward. Each year will probably bring a new type of technology that we need to confront, and we'll need to think through how best to utilize those evolutions to advance human rights and societal opportunities. But we also need to be assessing the risks that they bring, and that means the immediate risks as well as the long-term risks. And I think, just as a society, we often have trouble balancing those two simultaneously.
It's hard to keep those near-term risks in mind at the same time as those longer-term risks. So I think we're just going to keep seeing more of the same in terms of evolution in technology, and we're going to need to create better structures and systems to assess risk on an ongoing basis and to make sure that we are actually putting resources behind advancing the opportunities of AI. I think we need to think really concretely about how to do this comprehensively. The opportunities for AI are abundant, but we're not putting the resources behind them that we could and perhaps should. So we need to think about both how to advance those opportunities and how to assess and address risk, and make sure that we're holding both of those things in our minds at the same time.
David Stearns:
That is a wonderful note to end on. I think that's really prescient thinking and certainly gives us a lot to think about going forward. We look forward to seeing more from you, Hannah, on AI and human rights, and look forward to talking to you again soon.
Hannah Darnton:
Thanks, David.
David Stearns:
Thanks for listening. For more in-depth insights and guidance from BSR, please check out our website at bsr.org and be sure to follow us on LinkedIn.