Guests
Richard Wingfield
Director, Technology Sectors, BSR
Richard works with tech companies—particularly those based in or with operations in Europe, the Middle East, and Africa—to build human rights considerations and practices into their products, services, and policies. He brings a strong understanding of international human rights law and standards and how to translate the corporate responsibility to respect human rights into practice for companies of different sizes and sectors.
Prior to joining BSR, Richard led the legal and policy team at Global Partners Digital, an international human rights organization focused on the impacts of digital technologies on human rights. He is also a trustee of the Kaleidoscope Trust, a UK-based charity that campaigns for the human rights of LGBTIQ+ people in countries where they face discrimination.
Richard holds an LLB in Law and European Law from the University of Nottingham and is a qualified lawyer in England and Wales.
Recent Insights From Richard Wingfield
- Regulating AI / June 4, 2024 / Audio
- The EU AI Act: 11 Recommendations for Business / May 21, 2024 / Blog
- The EU AI Act: What It Means for Your Business / April 25, 2024 / Blog
- A Business Guide to Responsible and Sustainable AI / March 27, 2024 / Insights+
- Why 2024 Could Be a Pivotal Year for Sustainability in the UK / March 26, 2024 / Blog
David Stearns
Managing Director, Marketing and Communications, BSR
David leads BSR’s marketing and communications initiatives, working with a global team to amplify the organization’s mission and showcase its activities, impacts, and thought leadership to members, partners, and the wider business and policy community.
David previously worked for The B Team, a group of global business and civil society leaders working to catalyze a better way of doing business for the well-being of people and the planet. Throughout his 20-year career, he has worked with businesses and nonprofits in economic development, public health, and sustainability to define and communicate their purpose and impacts.
He has built high-impact communications campaigns for a collaboration to improve maternal health in Zambia and Uganda, driven top-tier media coverage for a major economic development project in upstate New York, and helped strengthen parliamentary capacity and voter education efforts in South Africa and Zambia. He began his career as a newspaper reporter.
David earned his M.A. from The Elliott School of International Affairs at the George Washington University and his B.A. in Journalism and Political Science from Michigan State University.
Recent Insights From David Stearns
- Reflections from Climate Week NYC: The Tension Between Pragmatism and Ambition / October 1, 2024 / Audio
- Navigating U.S. Election Uncertainty: A Call to Action for Sustainable Business / August 1, 2024 / Audio
- What the SBTi Battle Portends: The Decisive Decade Becomes the Dilemma Decade / June 17, 2024 / Audio
- Responsible and Sustainable AI / June 4, 2024 / Audio
- Regulating AI / June 4, 2024 / Audio
Description
Richard Wingfield, Technology and Human Rights Director, chats with David Stearns on the topic of Regulating AI, exploring:
- What the newly passed EU AI Act is, and what it regulates.
- Why the Act takes a risk-based approach and how different types of risks are categorized.
- What advice BSR is offering companies on the steps they should take to come into compliance with the AI Act.
Listen
Transcription
David Stearns:
Welcome to BSR Insights. I'm your host, David Stearns. In this series, we'll be talking to BSR experts from across our focus area and industry teams to discuss the environmental and social implications and opportunities arising from the increasing development and deployment of artificial intelligence in the global economy. We're joined today by Richard Wingfield, Director of Technology and Human Rights in the London office of BSR. In this role, Richard works with tech companies, and particularly those doing business in Europe, the Middle East, and Africa, to build human rights considerations and practices into their products, services, and policies. Thanks for joining us today, Richard. I'm looking forward to this conversation.
Richard Wingfield:
Thanks so much, David. Really good to be here. Looking forward to it too.
David Stearns:
So as we all know, the world's first major piece of regulation focused on AI, the EU's Artificial Intelligence Act, was recently approved by the EU Parliament in March, and leaders around the world are considering the implications for business, not just for tech companies and not just for those operating inside the boundaries of the European Union. So to start, and for our listeners who might not yet be familiar with this act, let's focus on some very basic level-setting questions. What is the EU AI Act, and what exactly will it regulate?
Richard Wingfield:
So the EU's AI Act was first drafted about three years ago. So this is part of a much longer piece of work that the European Union has been doing, which is to regulate various aspects of digital technologies, building on pieces of legislation like the Digital Services Act and the Digital Markets Act, which were more focused on online platforms, but now the EU has turned its attention to artificial intelligence as a technology or as a cluster of technologies. And the goal of the legislation is primarily to create a single market for the flow of AI systems and technologies throughout the European Union. So in the same way that the EU sets standards on a whole range of different types of products and services to make sure that there are common standards within different EU member states, and they can be imported and exported and used consistently across the market, this is now the EU's effort to do so in the AI space, given that there's an increasingly large number of AI systems and technologies out there.
But it's not just the growth in the AI sector that's prompted this regulation. It's really because of many of the risks that AI systems and technologies pose to individuals in particular, and to areas like human rights, to safety, even to big issues like democracy. And so a big part of this regulation is really to try to make sure that companies that are developing or using AI systems develop them and use them in a way that mitigates those risks to people. So, it's partly an effort to create a free flow of AI systems across the European Union single market, but also to do so in a way that makes sure that people are safe and that risks to people and their rights are prevented.
David Stearns:
That's a really helpful overview of what the act will be regulating. Can we spend a few minutes now talking about who specifically the act applies to, both at a macro level in terms of the types of companies that are either using or deploying AI, and then maybe at a more micro level, who within companies will need to be paying attention to these regulations and the way they will be impacting the operations of their companies?
Richard Wingfield:
When it comes to the scope of companies who are going to be affected by the AI Act, or potentially affected by it, it's pretty broad. So the AI Act doesn't narrow down the types of AI systems that are within its scope. There are different requirements depending on the level of risk that different types of AI systems pose, and I think we'll come onto this a little bit later probably. But in principle, any AI system is covered by the definition of AI within the AI Act, which is very broad. When it comes to the different types of companies in the AI value chain, again, the AI Act is pretty broad in its approach. So the main focus is really on the companies that are developing the AI systems in the first place, developing the AI technology, and then placing it on the market or selling it on to others in the European Union, and that's really because they are best placed to develop the technologies and systems in a way that mitigates those risks.
But there are also rules for other types of companies that are a little bit further down the value chain. So if you are a company that is deploying or using an AI system, even if another company has created it (maybe you've purchased it from a third party, for example, rather than developed it in-house), there will still be rules and obligations under the AI Act that apply to the companies that use AI systems and technologies, as well as those that develop them. And so in some ways it goes a little bit further than other types of EU regulation, which exclusively focus on the manufacturers or the developers of particular products; here, there are some requirements for those that use them in practice as well.
I think on the question of the different functions and the different teams that are going to be most affected by the regulation, it is mostly going to be on product developers, on engineers, on data scientists, so anyone within the company who is involved in the creation of the AI system in the first place because they're the ones that can actually make the tweaks, modify the functionalities, the features, do the data testing, and so forth that's going to be required under the AI Act. But given the other aspects of the regulation, the other requirements that go beyond the very formal technical standards that the AI Act will set, there will be other teams that will need to be involved as well.
So, an obvious one is going to be the compliance or legal team within a company because they're obviously going to want to make sure that they're not breaking the law in any way. It may well be that privacy and data protection teams are also involved because a big focus of the AI Act is on the way that data is collected and processed when you are developing AI systems, and so they're likely to be involved as well. But from a sustainability perspective, human rights teams may also want to be part of the internal discussions around AI Act compliance, and that's because many of the requirements within the AI Act touch upon human rights and human rights impacts. And so, an understanding of human rights is going to be pretty critical if you want to make sure that you are complying with those particular provisions. So quite a broad range of different teams within companies are going to have to work together, I suspect, to make sure that they are fully in compliance with the regulation.
David Stearns:
And I should add that BSR has recently published a number of pieces, particularly for the non-tech deployer-type companies, the consumer goods or the financial services companies that are using AI, that I think many of our listeners might find of interest.
So in your recently published blog on the act, which actually does a great job of detailing much of what you've just described, you've identified this regulation as being risk-based in its approach. I was wondering if you could tell us a little bit more about, what does it mean to take a risk-based approach? And how does the Act categorize the different types of risks that might arise as a result of this technology?
Richard Wingfield:
The approach taken by the AI Act is to categorize AI systems into one of four tiers of risk. The most significantly risky types of AI systems are going to be prohibited entirely. And this will be the first really significant instance at a national, or in this case, regional level of particular types of AI simply being prohibited. Now, these are quite extreme examples, many of which don't happen at all within the European Union at the moment, but could be foreseeable. So the kinds of things that are being covered here are AI systems that use deceptive techniques to influence people's behavior in a way that causes them harm, so quite extreme examples of AI systems that for the most part we don't tend to see or are unlikely to see in the near future. And those prohibitions will come into force in six months. So they're the earliest provisions that the AI Act will start requiring companies to consider. So by the end of 2024, those prohibitions will be in place.
The next tier is what are designated as high-risk AI systems and these, in many ways, are probably the most important ones from the perspective of the AI Act because these are the ones that are going to be subject to the most extensive requirements to make sure that those risks don't materialize in practice. And so there are a number of different designated examples of what will be considered to be a high-risk AI system. They include things like the use of biometrics, the use of AI in the delivery of critical infrastructure like energy, the use of AI systems in high-risk areas like recruitment or educational decision making, law enforcement, the administration of justice, and so forth. So there are certain areas or use cases of AI that are going to be designated as high risk.
And as I said, for these ones, the AI Act sets quite a demanding set of obligations around what companies who are developing these AI systems need to do. And that covers everything from risk management systems, keeping technical documentation, making sure that there's human oversight of the AI system in practice, making sure that there is testing of the data that is used to make sure it's free from biases, even when the product is on the market, making sure that there are systems in place to identify any risks that materialize, that documentation is kept around its sale, and so forth. So there are quite a lot of requirements that are there to really make sure that the product or the system or the technology that's ultimately placed on the market is one that's safe and is going to be as risk-free as possible. Now that's going to take a long time for companies to adapt to, and so that's why the European Union has given companies three years to meet those new expectations. So it's not going to be until 2027 that we start to see those expectations turn into standard company practice.
The third category: if your AI system is not prohibited and it's not high risk, you may still have a requirement to be transparent about its use, and that's if it's an AI system that might fool people into thinking that they are interacting with a human being, or if the generated content of an AI system looks realistic enough that people might be fooled by it. So really, you're looking at things like AI-powered chatbots, whether that's by text or increasingly by audio, and the use of generative AI to create imagery and videos, for example. And here, the focus is on just making sure that people are aware that they're interacting with AI. So the requirements are pretty limited to transparency, making sure that people know that they're interacting with AI and not a human being.
Now if your AI system doesn't fall into any of those categories, it's not prohibited, it's not high risk, and it's not one that's likely to fool people, it will be considered a low-risk AI system for the purposes of the Act. That means that there'll be no mandatory requirements in relation to that system, but the Act does encourage the development of voluntary standards and guidelines. So you may well see companies who want to align with the highest standards and the best practices still committing themselves to equivalent requirements around things like testing data, identifying risks, and ensuring transparency, even for those low-risk AI systems. But as I said, they won't be mandated by the AI Act.
David Stearns:
So this is all pretty new. The Act was just approved in March, and if I'm understanding correctly, many of its provisions have not come into effect yet. There's still a few months before many of the measures actually come into full effect. So there's time for companies to start to get on the right side of it. And I'm assuming that you've already been fielding questions from BSR members who have been asking you or your colleagues on the tech and human rights team, "What do we need to do? What are the first steps we need to take? Where do we start?" Can you give us a sense of some of the advice that you're offering to these companies? What are the first steps that they should be taking? And where do they go from there?
Richard Wingfield:
Sure. And the regulation is complex, and it has been modified quite a lot over the course of its legislative passage within the European Union. So the final law differs in many respects from the original draft, and so it's understandable why companies are still trying to work out exactly what the new law is going to require from them.
A few of the things that we are recommending to companies are, first of all, simply to put together an inventory of your existing and planned AI use cases. Now, that might sound quite simple, but it's actually often quite a difficult thing to do because there are so many different teams and functions who will be using AI in different ways, and rarely is there a single internal process to keep track of all of them. But you want to make sure that you are aware of what AI is being developed or used: maybe AI that's being used in the human resources team for recruitment, maybe generative AI that's being used by sales and marketing teams. So taking stock of all of the different types of AI that your company is using or developing, or is likely to use or develop in the near future, is a really good starting point. And then you can work through with your compliance teams to see whether any of those fall into the different categories of the AI Act and therefore what the requirements are going to be.
We'd also recommend that companies start to, if they're not already, undertake human rights-based due diligence of their use of AI. Now, many companies have human rights due diligence processes already, which look at different risks to human rights across the company. We would really recommend that they start to use those processes internally for any development or procurement or use of AI. And that will also help flesh out some of the high-risk cases and think through what the mitigation measures might be to ensure that risks to fundamental rights don’t materialize.
We also think that, and maybe this connects a little bit to the stock-taking exercise, companies that haven't done so already should establish an internal governance process specifically on AI. And that could be everything from an AI team within the company bringing together other internal stakeholders, to a completely cross-functional committee that's separate in some ways from product development. But essentially it's about trying to make sure that the different functions and teams that are impacted by the AI Act, or that have a voice when it comes to the company's use of AI, are brought together, that there's some kind of governance process, and ideally some kind of responsible AI policy and set of procedures that stem from that. And that will help them oversee and manage risks going forward as well.
David Stearns:
So thanks for that, Richard. Great to hear about the human rights-based approach, and perhaps not surprising to hear that from someone on the tech and human rights team. I'm also curious to know whether or not the EU AI Act addresses any environmental considerations. Are there environmental regulations that are part of the EU AI Act, or are those included in other regulatory frameworks that companies might already be aware of?
Richard Wingfield:
The AI Act doesn't actually contain anything specifically around environmental impacts, no. Despite being relatively comprehensive in so many respects, there aren't provisions in it that specifically look at the environmental impacts of the AI systems that might be developed or used. The EU has tended to develop more bespoke, targeted pieces of regulation on particular environmental impacts. Recently, there's the batteries regulation, for example, which looks specifically at the impacts that batteries can have on the environment and elsewhere. So there's no explicit reference to the environment within the AI Act. That said, there is an increasing recognition that the right to a healthy environment is part of the human rights framework. And so it may well be that, as part of companies' considerations of the human rights impacts of AI, they do think about environmental impacts as part of the human rights framework as well, but it's not explicitly called out within the AI Act itself, no.
David Stearns:
That's really helpful insight. Thanks for that. So we're coming to the end of our conversation, unfortunately, but we do like to end all of our chats with a question that gives our featured speakers an opportunity to talk about what they see coming around the corner, what's next. So I'm wondering if you could take a step back and take a little bit of a longer view of what's on the horizon for business and the regulation of artificial intelligence.
Richard Wingfield:
When you are looking at technology, and particularly AI, it's always tricky to take a long-term view because it develops so rapidly. And in fact, if you'd asked me that question a couple of years ago, I think I would've said that we would likely see a number of other countries around the world copy, in many respects, what the EU was doing with comprehensive AI regulation, in the same way that we had seen the development of the GDPR, for example, inspire a raft of data protection laws around the world. Interestingly, we haven't seen that. We haven't seen other governments rush to develop regulation which is similar to the EU's approach, and that's for a couple of reasons, I think. First, the economic situation around the world is pretty bleak at the moment, and so many governments are disinclined to impose new regulatory burdens on businesses or do anything which might stifle investment or economic growth.
And that's particularly relevant for technology, where of course, a strong tech sector can be a really big part of a country's economy and economic growth. So, there's been no rush to impose new regulatory requirements on a growing sector, understandably. Secondly, because AI has developed so quickly and we're now seeing it used in so many different ways and different areas of life, I think there might be a feeling amongst many governments that a single regulatory framework for all types of AI simply won't work in practice. What I think you might start to see is more targeted regulation in very specific areas or very particular use cases of AI. So for example, the use of AI for diagnostics in healthcare, or in the financial services sector.
So I think you might see increasingly targeted regulation developed by governments, or even by regulators themselves. I don't think we'll see many other countries rush to do something like the EU and have a sort of one-size-fits-all regulatory framework for all AI use cases. And it may well be the case that the AI Act quickly needs to be reformed or reviewed to make sure that it remains fit for purpose. So ask me that question again in a couple of years and I might give you a very different answer, but that's my instinct at the moment.
David Stearns:
Well, you've given us a lot to think about, and we thank you for that. Appreciate you coming on, Richard, and look forward to chatting again sometime soon.
Richard Wingfield:
Thanks, David.
David Stearns:
Thanks for listening. For more in-depth insights and guidance from BSR, please check out our website at bsr.org and be sure to follow us on LinkedIn.