Guests
- Jacob Park
Director, Sustainable Futures Lab, BSR
Jacob leads BSR’s Sustainable Futures Lab, a new practice using strategic foresight techniques to help businesses engage with emerging issues that are reshaping the global landscape.
Before joining BSR, Jacob was the lead futurist in the New York office of Forum for the Future, where he used scenario planning and other futures techniques to develop sustainability strategy and drive innovation for leading businesses, foundations, and multistakeholder groups. Prior to that he worked at Adaptive Edge, a boutique strategic foresight consultancy, on collaborative scenario planning. Jacob began his career doing human rights research and advocacy at Human Rights First and the Center for Economic and Social Rights. He speaks English and French.
Jacob holds an M.B.A. in Sustainability from Presidio Graduate School and a B.A. in History from the University of Chicago.
Recent Insights From Jacob Park
- Between Two Worlds: Sustainable Business in the Turbulent Transition / September 19, 2024 / Reports
- Navigating US Election Uncertainty: A Call to Action for Sustainable Business / July 16, 2024 / Insights+
- Applying a Futures Lens to AI / June 4, 2024 / Audio
- Beyond 2025: Setting Credible Sustainability Goals for Long-Term Impact / April 16, 2024 / Blog
- A Business Guide to Responsible and Sustainable AI / March 27, 2024 / Insights+
- David Stearns
Managing Director, Marketing and Communications, BSR
David leads BSR’s marketing and communications initiatives, working with a global team to amplify the organization’s mission and showcase its activities, impacts, and thought leadership to members, partners, and the wider business and policy community.
David previously worked for The B Team, a group of global business and civil society leaders working to catalyze a better way of doing business for the well-being of people and the planet. Throughout his 20-year career, he has worked with businesses and nonprofits in economic development, public health, and sustainability to define and communicate their purpose and impacts.
He has built high-impact communications campaigns for a collaboration to improve maternal health in Zambia and Uganda, driven top-tier media coverage for a major economic development project in upstate New York, and helped strengthen parliamentary capacity and voter education efforts in South Africa and Zambia. He began his career as a newspaper reporter.
David earned his M.A. from The Elliott School of International Affairs at the George Washington University and his B.A. in Journalism and Political Science from Michigan State University.
Recent Insights From David Stearns
- Reflections from Climate Week NYC: The Tension Between Pragmatism and Ambition / October 1, 2024 / Audio
- Navigating U.S. Election Uncertainty: A Call to Action for Sustainable Business / August 1, 2024 / Audio
- What the SBTi Battle Portends: The Decisive Decade Becomes the Dilemma Decade / June 17, 2024 / Audio
- Responsible and Sustainable AI / June 4, 2024 / Audio
- Regulating AI / June 4, 2024 / Audio
Description
Jacob Park, Transformation Director and head of BSR’s Sustainable Futures Lab, chats with David Stearns about Applying a Futures Lens to AI, exploring:
- Why futures thinking techniques can be particularly useful to discussions about responsible AI.
- How scenario planning exercises work, who is typically involved within a company, and how they might be applied to mitigate a potential outcome of AI such as worker displacement.
- Some of the opportunities and innovations offered by AI that have the potential to advance sustainable business.
- How futures thinking can help to address challenges emerging from the rapid advancement of AI.
Transcription
David Stearns:
Welcome to BSR Insights. I'm your host, David Stearns. In this series, we'll be talking to BSR experts from across our focus area and industry teams to discuss the environmental and social implications and opportunities arising from the increasing development and deployment of artificial intelligence in the global economy.
David Stearns:
We're joined today by Jacob Park, Director at BSR's Transformation Practice and head of the Sustainable Futures Lab, where he works with BSR member companies, using a range of strategic foresight techniques to help them engage with emerging issues that are reshaping the global economic landscape. Welcome, Jacob.
Jacob Park:
Thank you, David. Good to be with you.
David Stearns:
It's great to have you with us today, Jacob. In a recent edition of Insights+, you and your co-authors laid out a number of key actions to consider for companies that are looking to develop or deploy AI technologies. Among those was a recommendation to apply futures thinking to better understand the broader societal impacts of artificial intelligence.
David Stearns:
In your role as the head of BSR's Sustainable Futures Lab, and as someone who spends a lot of time applying a futures thinking lens to a range of other sustainability topics, I thought we'd start with: What is futures thinking, and why are futures thinking techniques particularly useful to this discussion around the responsible use of AI?
Jacob Park:
Sure. So first let me just clarify what we mean by futures thinking. When a lot of people hear the term, their minds go to making predictions. It's really not about that; there is no crystal ball. Futures thinking is a structured way of thinking about the future that allows us to anticipate, prepare for, and hopefully shape the future for the better. And we call it futures thinking, plural, to emphasize that there's always a multiplicity of possible futures.
There is no single preordained future to be discerned, but rather always an infinite number of possibilities. And just the very act of thinking through them can help us try to be more intentional in shaping them. Futures thinking is particularly useful when the world is changing in fast and unpredictable ways as it is now. Without actively deliberating about what the future might hold there's a high risk that businesses will be blindsided by developments rather than preparing for them adequately, or better yet, shaping them.
So as we're developing strategies and plans, it's absolutely necessary that we undertake this sort of structured thinking about the future to make sure that we're accounting for the wide range of possibilities that might be out there and trying to steer things in the best possible direction. AI in particular, I would say, requires foresight more than many other topics. First of all, just the pace of technological change within the AI domain is extremely rapid and arguably may be accelerating.
Jacob Park:
Second, the nature of these evolving capabilities is highly uncertain. Right now we're seeing so-called emergent behaviors from AI that even AI developers can't fully account for. AIs are developing the ability to do certain tasks that surprise even the developers. So there's a lot of uncertainty about how quickly AI may develop and what it will be able to do. And then finally, there's a lot of uncertainty around the broad societal impact of AI and the societal response.
How is AI going to change jobs? What might AI regulation look like in the future? What are the business models going to be like for AI implementation? There's a lot of uncertainty around all of those, and so it's very important now that we start thinking through these different possibilities so that we can prepare for them and shape them.
David Stearns:
Thank you for that. So when we are engaging in this type of conversation with companies or doing specific scenario planning around these plausible futures, can you explain a little bit about how that type of scenario planning exercise works? What's involved? What are the steps that you go through with companies, and maybe specifically, who within a company ideally should be involved in those types of exercises?
Jacob Park:
Yeah. So scenario planning is but one futures thinking technique, but it is an especially relevant one here, so I will focus on that. First and foremost, I would say that scenario planning is a mindset. If the morning weather forecast says there's a 50% chance of rain, many people will bring an umbrella to work just in case. There's another set of people who may call themselves optimists, and they might hope for the best unless maybe it's a 90% chance of rain.
And then there are people like me who, even if there's a 10% chance of rain, will probably bring that umbrella, or at least wear water-resistant shoes and check the forecast again just in case. So there is this mindset of trying to anticipate what the future might hold and prepare for it. Scenario planning can cover extremely micro topics like that out to very macro topics like the future of global climate policy, the future of China-US relations, or the future of AI.
It has been used for decades in the business world, particularly among boards, C-suites, and strategy teams, to inform thinking about things like long-term capital investments with uncertain returns amidst volatile conditions. It was really pioneered by some of the big oil companies starting in the 1960s, and even a little earlier post-war, to think about developing major infrastructure in uncertain geographies. There are many different approaches to doing scenarios. Some of them are more formal than others, but the basic steps are the same.
So the first thing is really just understanding what the focal issue of your scenarios is. You want to think about the future of X, but what is X? Is it the future of AI capabilities? Is it the future of AI's impact on our workforce? Is it the future of how AI will be received by consumers? It's really critical first to understand what it is that you're trying to investigate here, and that can actually be tricky. The next part is about identifying what we call the critical uncertainties.
These are the questions that are incredibly important for your focal issue, but also the most uncertain or most unpredictable. In some cases this is easy. If we're talking about the near-term future of US politics, well, your critical uncertainties have to do with the upcoming elections. In other cases it takes more work to uncover what are the key drivers of this question that we're trying to get at and what are the real uncertainties around these drivers? Are the uncertainties having to do with chip manufacturing or are they having to do with regulatory conversations in D.C. and Brussels?
Once you've identified those critical uncertainties, then you go through the process of developing the scenarios, which is always an iterative process of thinking through what might plausibly happen based on those critical uncertainties, and engaging various kinds of stakeholders in evolving them to make sure that they're plausible but also stretching and challenging, and that you don't have blind spots you're not considering.
And then finally, where the rubber hits the road is really thinking through the implications of the whole set of scenarios. So you don't want to focus on just one; rather, taken as a whole, what are the implications of each of these scenarios for you and your organization? What sorts of no-regrets actions might there be that would make sense no matter which scenario ends up describing the future? What sorts of hedging strategies can you undertake in cases of uncertainty? And what are the early indicators that you might watch to see which direction things are starting to unfold?
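For readers who like to see a process made concrete, here is a minimal illustrative sketch, in Python, of how the steps Jacob describes could be captured as a simple data structure. This is not a BSR tool; all names and the example content are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    narrative: str  # a plausible story built from the critical uncertainties
    implications: list[str] = field(default_factory=list)       # what it would mean for us
    early_indicators: list[str] = field(default_factory=list)   # signals to watch

@dataclass
class ScenarioExercise:
    focal_issue: str                    # step 1: the "future of X" being investigated
    critical_uncertainties: list[str]   # step 2: important AND unpredictable drivers
    scenarios: list[Scenario] = field(default_factory=list)     # step 3: iterate with stakeholders
    no_regrets_actions: list[str] = field(default_factory=list) # step 4: robust across all scenarios
    hedging_strategies: list[str] = field(default_factory=list)

# Hypothetical usage, echoing the workforce question discussed below:
exercise = ScenarioExercise(
    focal_issue="How might AI reshape our workforce by 2030?",
    critical_uncertainties=[
        "Pace and direction of AI capability gains",
        "Regulatory responses in key markets",
    ],
)
exercise.scenarios.append(Scenario(
    name="Rapid automation, lagging regulation",
    narrative="Generative AI automates much routine knowledge work ahead of policy.",
    implications=["Large-scale reskilling becomes urgent"],
    early_indicators=["Benchmarks showing AI matching humans on core job tasks"],
))
```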
So as I said, scenario planning has been undertaken by those various senior executive functions within business for some time, and it is starting to be adopted by sustainability teams as well. The TCFD recommendation that businesses do climate scenario planning has seen a lot of sustainability teams get involved in scenario thinking for the first time, and I think that's a great thing. Any team can use scenarios, and particularly if they're involved in strategic conversations it makes a lot of sense to do so.
In terms of who to involve there's no single answer here. It depends on what is the impact of AI on your business in terms of what functions you need to be thinking about it. But one general note is that you don't want to leave this to just one team. You want to have diverse perspectives in the room representing a range of functions, including some unusual suspects who may have ideas that run counter to the orthodoxy that often prevails at the top. I'll leave it there for now, but hopefully that's helpful.
David Stearns:
Thanks for that, Jacob. That was a really great introduction and primer to what scenario planning is, and how it could be useful for companies, and how it works, so we really appreciate that. I want to pivot slightly to get into a little bit of a deeper dive on a specific impact of AI that's been identified, and you made reference to it in your remarks just now.
When it comes to uncovering the broader societal impacts that could potentially reshape the operating context for business, you mentioned the displacement of workers and unemployment. And I was hoping you could talk a little bit more about how a scenario-planning exercise, the type that you just described, might be applied to that challenge to help companies better understand the risk of worker displacement and to potentially mitigate some of those negative impacts which have been identified around the deployment of AI.
Jacob Park:
Yeah. So first let me just say that generally there are a host of potential risks related to AI that companies need to be thinking through including but going beyond impacts on workforces. And this depends a lot on what sort of company you are. Are you in the business of developing AI? Are you integrating it into a product or service? Are you using it to optimize an internal workflow or process? Depending on the answer to that question, the focus of your inquiry is going to be different. The risks are of course going to be different.
For some companies the key question will be whether AI can become accurate enough that it can be used to perform critical functions in highly regulated industries like health and finance, which would of course also raise questions about the regulatory environment. In other companies, it might be about how customers respond to the automation of certain services. The question of unemployment is a really important one and an interesting one.
Historically, the projections of how AI might impact jobs have really been all over the map, with some forecasters predicting a huge loss in jobs and others anticipating a net gain. You know, a few short years ago, when deep learning was starting to get really good at things like image recognition, chess, Go, and driverless cars, the prevailing wisdom was that the jobs most threatened were routine jobs, what some people called dirty, dangerous, and dull jobs.
And it was taken as commonplace wisdom then that truck drivers were on the front lines of those who might see their jobs automated away while white-collar workers and creative types were seen as safe from AI for a long time. Now with the new approach to AI through LLMs and generative AI, that has flipped on its head. In both cases I think there's a last mile where we as humanity are hesitant to turn things over to AI, and for a good reason. I mean, we've gotten 90% there on driverless cars, but not all the way.
Similarly, AI can do a lot of things with medical applications, but we're hesitant to turn over the reins 100%. And again, for good reasons: very significant safety concerns in both cases. The point is, there's a huge amount of uncertainty about how many jobs might get automated away. One thing, though, that people have agreed on for a long time is that while some jobs may disappear, many more jobs will just change, and new roles will also be created.
And so I think there's actually, right now, a really important opportunity for us to think through what the jobs of the future might look like and how we ensure that those are good, fair, safe, equitable jobs. Think about how AI, for instance, is impacting a grocery store. Right now, at the same time, we're seeing checkout cashiers as we've had for decades. We see some people who are monitoring the self-checkout machines. And then we've even got the case of some grocery stores that are supposedly completely automated using cameras and sensors.
Although recent reporting has suggested that even there, behind the scenes, there are some overseas workers monitoring what's happening on those cameras. In each case we have humans performing different roles to accomplish the same task. Which role is best for our workers and for society? That's going to happen to a lot of other sorts of work.
We're going to see new models for getting the same thing done, and I think this would be a really good time to proactively try to understand what those roles are going to look like and use techniques like scenario planning to really anticipate what the impacts will be on questions like fairness and equity and get ahead of that while we still can.
So while I think it's useful to do forecasts about job losses or job gains, those have just been all over the map. Instead of spending too much time trying to make our best guess at that, I think we can already get ahead by trying to make sure that the jobs that are created and the jobs that are changed, and those may be the majority of jobs, are good ones.
David Stearns:
Thanks for that, Jacob. And this is a great segue actually into the next question, because we have been talking a bit about the risks and potential negative impacts of AI. We just now were talking about the potential impact on workers. Some of it could be job loss. Like you said, some of it could be new jobs, different types of jobs. In some of our other conversations with colleagues, we've been talking about other potential impacts around the environment, and human rights, and disinformation, and bias.
But let's pivot there now and talk a little bit about some of the potential positive opportunities that might be offered by AI, particularly in the context of our work to advance sustainable business. Can you talk a little bit about sort of what you see as some of the real innovations that AI might offer to us that could actually advance just and sustainable business and the work that we do at BSR?
Jacob Park:
Sure. So one recent innovation that I'm excited about is MethaneSAT, a new satellite launched into space by EDF quite recently that is using AI to really pinpoint the exact sources of methane emissions around the planet, which will allow regulators and policymakers to understand exactly where the most serious emissions are coming from and craft policies that can target those very specifically.
That's initially going to be deployed looking primarily at the oil and gas sector, although MethaneSAT hopes to next turn its lens on the agricultural sector. And I think there's a huge opportunity there to really make much better, more targeted climate policy in the near-term, and that's going to be very valuable.
David Stearns:
Are you suggesting that there's going to be AI satellites circling the Earth trying to monitor cow farts?
Jacob Park:
That's exactly what I'm suggesting, in aggregate. Satellite imagery powered by AI will also allow us to home in on many other aspects of the supply chain that would benefit from greater transparency and traceability. So there is huge near-term opportunity there. Material science innovations, I think, are going to be longer term in coming, but potentially really important. We need to vastly scale up battery storage to drive the energy transition, and that storage relies on minerals that are problematic to procure for human rights and geostrategic reasons.
And to the extent that we can come up with new formulations for things like batteries in electric vehicles, that could really be a game changer for the energy transition. There is certainly huge potential in basic scientific research and in health care if we're talking about drug discovery or personalized medicine. These things, again, I think somewhat longer term. There's no way to shortcut through human clinical trials right now, so this isn't something that's going to be overnight. But I think longer term, massive, massive opportunity.
I'm excited about the possibilities for education, for making it personalized and incredibly scalable globally, even though I think that there are also risks, and that has to be undertaken very carefully. How we shape the interactions between kids and AI is something that needs to be approached very carefully. But even just for skilling and training adults, there's huge opportunity there. A lot of operational efficiencies can be gained and are already being gained. Things like energy deployment and energy storage are being optimized very helpfully by AI.
So those are some of the specific ways where I see near-term and medium-term opportunity that's very significant. More generally, I would say that our biggest challenges are really wicked problems right now. You know, complex systemic issues that are incredibly difficult for humans to understand let alone influence. This is a place where I think AI really can make a huge contribution.
We need to radically reconfigure our energy, food, and transport systems, among other things. And again, just understanding how all the parts work together is daunting for any human or even groups of humans. And so I can really imagine that AI, as a companion to policymakers, academics, and businesses trying to address complex systemic problems, could have a hugely important role to play there.
David Stearns:
That's quite fascinating. And I think it's great to hear about some of the potential positive applications of AI because we do sort of hear mostly the Black Mirror-type stories about AI. And being able to talk a little bit about some of the ways that AI could generate some significant impact, positive impact, for society and the environment is quite heartening. We like to end our conversations typically with a question about what you see coming around the corner, which I suppose for the head of the Sustainable Futures Lab might be a little bit redundant.
But as someone who has been working in business transformation as an expert on sustainability, and who has been in this field for many years, we'd still like to ask: for a rapidly growing and rapidly deployed technology like AI, where it seems we're learning new things every day, what other future challenges do you think might be most pressing, and how can futures thinking help to address them? I mean, are there other things on the horizon from planning exercises that you've already conducted, or other scenarios or futures that you have been researching? Anything stand out?
Jacob Park:
Yeah. As we've been talking about, AI is developing extremely quickly and in sometimes unpredictable ways, and it's going to have broad impacts on how we work, and live, and communicate, and even think. We've automated certain tasks in ways that have just on balance felt like they've been helpful to us, right? So you could think about the calculator or Google Maps, right? It's okay if many people today don't really know how to read a map. But what if we start to rely heavily on AI to help us think and to help us communicate in a basic way?
I do worry about the dependency and the de-skilling that might result. How are young people going to learn to do these basic things, like critical thinking, if AI is starting to encroach on those sorts of activities? And really, no single actor has responsibility for thinking about questions like that. Meanwhile, the financial pressures and the geostrategic pressures mean that AI development is not going to pause, and it's probably not going to slow down. Regulation is almost certainly going to lag behind. And so we really need to use foresight at this moment to consider those sorts of questions.
And foresight undertaken by individual companies, yes, but also by civil society groups, and by broader industry coalitions to try to get at these potential impacts that really aren't owned by any single actor but that as a society we have a collective interest in understanding and shaping for the better. David, I think you know I met my wife seven years ago at a picnic in Prospect Park. And funnily enough, our very first conversation was about AI safety and long-term risk, which tells you a little something about what kind of people we are.
At the time it was a truly fringe topic and felt pretty esoteric. Today this is headline news every day. And what does give me hope is that so many people are talking about this and working on this right now that I think people have really grasped what's at stake and now we are moving away from this moment of just having to sort of make clear that, "Oh, this is going to be really big and important," to rolling up our sleeves and trying to understand, "Okay. What can we do here?"
I think there's a lot that we can do, and I'm encouraged and hopeful, given these sorts of conversations, that people are starting to come together and really figure out how to wrap our heads around this and make sure that AI can be used to create a better, more just, and safer sustainable future.
David Stearns:
Well, I think it's safe to say that we would all feel probably a little bit more comfortable if we had minds like yours at the center of the conversation, Jacob, because the thoughtfulness that you bring to this is really welcome. So it's a pleasure again, as always, to chat with you. We really appreciate your time and look forward to connecting with you again. Thanks. Thanks for joining us.
Jacob Park:
My pleasure, David. Thank you.
David Stearns:
Thanks for listening. For more in-depth insights and guidance from BSR, please check out our website at bsr.org, and be sure to follow us on LinkedIn.