Insights+ | Thursday January 12, 2023
Amidst ESG Backlash, Companies Should (and Are) Staying the Course on Sustainability
Blog | Tuesday December 20, 2022
Neurorights: A Business Response
As technology gets closer to influencing our behavior, what role does business have in protecting and respecting users?
The emerging field of “neurorights” questions how neurotechnology could impact the human rights of people to freedom of thought, identity, privacy, and free will. Undoubtedly, technology is getting closer to influencing our behavior. Following our initial piece exploring how this technology could impact our human rights, today we focus on the responsibility and role of business—from tech companies to tech-enabled products and services—to protect and respect users, both now and in the future.
Evolving Rights Frameworks
Fundamentally, human rights principles and frameworks need to evolve to respond to the emerging implications of neurotechnology (neurotech). Already, there are efforts at both the national and international levels to enshrine neurorights in laws and treaties—indicating that a shift in expectations and regulations for business related to the influence and impact of technology is on the horizon.
For instance, in Chile, a draft constitution proposed the world’s first legislation on neurorights, including a “neuro-protection” bill to regulate technologies that are capable of “interfering” with brains, including mind control and mind reading, and prohibit the buying or selling of neural data, declaring it the equivalent of trading human organs.
While the constitutional changes were rejected in a referendum, they show the issue making headway into public policy. The EU is developing an artificial intelligence (AI) act to protect human rights and define high-risk uses, which will introduce requirements to address the impacts of AI on society, and the UN has indicated that neurotech is one of the frontier issues for human rights. The Inter-American Juridical Committee, which promotes the codification of international law across the Americas, has approved a declaration on neuroscience and neurorights. Meanwhile, new human rights legislation taking shape across the EU and at a national level (see Japan, New Zealand) could also plausibly expand to encompass neurorights, impacting future human rights due diligence efforts and assessments.
There’s also the European Commission’s Digital Services Act, which creates a common set of rules for the transparency of recommender systems across the EU, with implications for the "right to free will." How can businesses prepare to meet more regulation and scrutiny on the extent and impact of their services? And how might regulators distinguish between technologies that influence our brains and those that could one day control them? Could future neurotech users, for instance, outsource some daily decisions to a trusted service—to limit their appetite or override their impulse to smoke?
Ahead of these regulatory frameworks, what actions should business take to meet their responsibility to respect human rights, including both technology companies and those that use tech?
Active Engagement
A first step is undertaking human rights due diligence of technology that might implicate neurorights to identify potential impacts across the full range of internationally recognized human rights and establish strategies, alone and with others, to address those risks as part of product development. Business can actively engage in this conversation and work to establish guardrails before adverse impacts related to neurotech escalate.
Stephanie Herrmann, a human rights attorney and co-author of the NeuroRights Foundation report, points out that some of the concepts underlying neurorights can be difficult to define and therefore protect, such as identity. Therefore, international human rights law must evolve alongside the technology to provide more explicit protection.
Businesses can get ahead by asking how the applications and impacts of neurotech might evolve over the next decade and by identifying appropriate actions to address new risks. Additionally, business leaders should play a role in adapting and creating normative frameworks to help shape the field and positively contribute to human rights protections of their users today and in the future.
Questions for Business
- How might technologies in product road maps impact existing rights to privacy, freedom of expression, thought, and opinion? How might they impact the proposed new rights of mental privacy, personal identity, free will, fair access to mental augmentation, and protection from bias?
- Who will be most vulnerable to adverse impacts from neurotech? Which communities, stakeholders, and experts should we be engaging with? How might these impacts vary across geographies and contexts?
- How might our technology be misused, and what harms may arise from this misuse? What leverage do we and others have to avoid, prevent, and mitigate these harms?
- How should the adverse impacts of neurotech be remedied, and by whom?
- What policy, legal, and regulatory framework is most appropriate for neurotech, and how can we help shape it?
Blog | Tuesday December 20, 2022
COP15: A Historic Deal to Halt Biodiversity Loss by 2030
What are the implications of COP15’s Global Biodiversity Framework, and how can business move forward on new initiatives to protect nature?
Monday, December 19 was a historic day for nature and biodiversity, marking the adoption of a Global Biodiversity Framework (GBF) at the UN Convention on Biological Diversity in Montréal (COP15). This global agreement—which many see as an equivalent of the Paris Agreement for climate—commits the world to halting and reversing biodiversity loss by 2030.
After two weeks of immense engagement from the world’s largest business and financial institutions, NGOs, governments, and indigenous communities—and following four years of negotiations and COVID-related delays—196 countries adopted the Kunming-Montréal Global Biodiversity Framework agreement. Notably, the United States—which, alongside the Holy See, is the only state not party to the Convention—did not adopt the framework.
Notable Aspects of the Global Biodiversity Framework
- The "30 by 30" Target. Target 3 commits to conserving 30% of “terrestrial, inland water, and of coastal and marine areas” by 2030. Additionally, Target 2 calls for restoring at least 30% of degraded land and waterways. Governments are also committing to protect areas of high biodiversity importance and with critical ecosystems in the remaining 70% of land and waterways on Earth.
- Human rights-based approach. Indigenous communities are mentioned explicitly in four of the targets—including Target 3 (the "30 by 30" Target). The GBF acknowledges that indigenous-led conservation models must become a norm this decade, and mandates respect for the rights of indigenous communities to their traditional territories in achieving Target 3.
- Nature disclosure for business. The role of the corporate sector in achieving the GBF’s policy goals was outlined in several targets. Most notably, Target 15 agrees to “legal, administrative or policy measures to encourage and enable business” to “regularly monitor, assess, and transparently disclose their risks, dependencies and impacts on biodiversity […] along their operations, supply and value chains and portfolios.” This also includes compliance in relation to access and benefit sharing.
- Reform of environmentally harmful subsidies. Target 18 requires governments to “Identify by 2025, and eliminate, phase out or reform incentives, including subsidies harmful for biodiversity, […] while […] progressively reducing them by at least $500bn per year by 2030, starting with the most harmful incentives, and scale up positive incentives for the conservation and sustainable use of biodiversity.” Currently more than $1.8tn in annual subsidies go to industries connected to biodiversity loss.
- Biodiversity and nature financing. The framework estimates a biodiversity finance gap of US$700 billion. Target 19 outlines financial expectations, including “by 2030, mobilizing at least 200 billion United States Dollars per year”. This includes “leveraging private finance”, innovation in nature-related financial market schemes, and community-led, non-market-based approaches to conservation—all of which will require private sector engagement.
The first quarter of 2023 will be critical as the corporate community harnesses the momentum of the event, considers the implications of the GBF and moves forward on new and existing initiatives to protect nature and halt biodiversity loss.
In the meantime, as the BSR team considers COP15’s implications and how it will influence our work with business on nature, four key themes emerge:
- Nature disclosures as the new norm. One of the breakthrough points of the agreement calls for large and transnational businesses to disclose their impacts and dependencies on nature and biodiversity. Leading up to COP15, the initiative Business for Nature led a groundbreaking campaign calling for mandatory nature disclosure that garnered more than 380 business signatories across sectors from more than 55 countries. It’s clear that in both the voluntary and mandatory space, nature disclosure is the new norm and expectation, sending a clear signal that businesses should begin mapping, analyzing, and reporting on their nature-related impacts and dependencies. Frameworks and initiatives such as the Science Based Targets Network and the Taskforce on Nature-related Financial Disclosures (TNFD) will be key levers for business to employ in order to meet the disclosure obligations outlined in the GBF.
- Indigenous peoples and local communities (IPLCs) are an essential part of protecting and restoring nature. The rights and contributions of IPLCs are respected and codified throughout the framework. Complemented by the focus on loss and damage and just transition agreed upon at the UNFCCC 'Climate' COP27, it’s clear that business can integrate meaningful and continuous engagement with IPLCs when assessing impacts and implementing nature-based solutions. This includes directly integrating a human rights lens into strategies and interventions to protect nature throughout the value chain.
- Major business transformation is on the horizon. The GBF includes specific rules and provisions for priority sectors—particularly agriculture and finance. The agreement includes a call to phase out harmful agricultural subsidies, and for broad moves toward more sustainable modes of production and consumption. Businesses will need to re-evaluate and transform their business models—including transitioning to more circular systems and degrowth—topics BSR will explore in more depth in forthcoming blogs.
- Mobilization of financial flows to protect nature. Much discussion has surrounded how to realistically achieve the financing needed to deliver on the targets. Of the $200B annually that the GBF agrees to mobilize by 2030, developed countries are expected to contribute $20B by 2025 with an increase to $30B by 2030. To supplement this and to fill the immediate gap, recent initiatives such as Nature Action 100 have been launched with the intent of accelerating action by financial institutions to enact pressure through their investments.
As we move into 2023, BSR is looking forward to driving corporate action on nature through 1:1 and collaborative engagements across sectors and geographies. We will be focused on helping our members better understand their business’ relationship with nature, prepare for and meet evolving voluntary and mandatory expectations, and advance progress at the intersections of nature, human rights, and climate change.
People
Charlene Collison
As Director, Collaborations, Charlene develops and implements strategies to engage BSR member companies and other stakeholders in high-impact collaborations for sustainable development. She specializes in building collaborative approaches between consumer-facing companies and their value chains to embed long-term sustainability, build resilience in the face of disruption, and increase equity, inclusion, and justice.
Charlene brings over 20 years of experience in sustainability, systems change, futures, and collaboration. Prior to BSR, Charlene was associate director for sustainable value chains and livelihoods with global sustainability non-profit Forum for the Future, where she directed collaborative system change initiatives in cotton, tea and other commodities, and led strategy projects with partners across the fashion and textile, agricultural and food sectors.
Charlene has an extensive background in futures, including working with the UK government’s futures unit developing and testing strategies across government departments.
Charlene holds an MSc in Sustainability and Responsibility from Ashridge Management College (UK), a BA in International Relations from the University of Puget Sound (US), and a Diploma in Organisation Development from Henley Business School (UK).
Blog | Wednesday December 14, 2022
How Advances in Neurotech Will Impact Human Rights
As technology gets ever closer to influencing human behavior through neurotechnology, what are the emerging human rights risks?
The emerging field of “neurorights” questions how neurotechnology could impact the human rights of people to freedom of thought, identity, privacy, and free will. Undoubtedly, technology is getting closer to influencing our behavior. This piece asks how this technology could impact our human rights, and it will be followed by a second part exploring the responsibility and role of business—from tech companies to tech-enabled products and services—to protect and respect users, both now and in the future.
Technology can increasingly detect and influence what we think and how we behave. An expanding field of “neurotechnology” can record or interfere with human brain activity, from physical devices, like wearables and medical implants, to artificial intelligence (AI), designed to decode thought or spoken patterns. Then, there is technology that impacts how we experience the world and feel about ourselves, from algorithms to social media apps.
While many of these technologies offer benefits to health, wellness, and human capacities, it is important to understand how they impact human freedoms. Under international human rights law, business has a responsibility for protecting and respecting human rights, such as rights to privacy, freedom of expression, thought, and opinion. However, as technology accelerates into new and unknown territory, many believe that the current international human rights framework may not be adequate to meet emerging human rights risks.
What are Neurorights?
The NeuroRights Foundation, co-founded by Rafael Yuste of the Neuro Technology Center at Columbia University and Jared Genser of Perseus Strategies, aims to address gaps in existing human rights frameworks by outlining five distinct “neurorights” that could be forsaken by the misuse or abuse of technology.
The right to mental privacy highlights the vulnerability of neural data for sale, commercial transfer, and unauthorized use.
Today, wearable devices, like headbands, monitor and stimulate brain activity to help users increase their focus or improve their sleep patterns. Brain-to-text interfaces access brain activity in a way that allows you to write simply by thinking. Medical brain implants can help patients with severe paralysis gain a level of functional independence. Recent AI models aim to decode speech from recordings of brain activity, which could help patients with traumatic brain injuries communicate again. There is also voice biomarker technology, which analyzes snippets of a person’s speech to identify mental health issues, like depression and anxiety.
While these technologies could vastly improve quality of life, complete access to deeply personal neural data also raises privacy concerns for users beyond the scope of current human rights protections. One concern is the vast area of uncertainty in how neural data might be used in the future: what potential applications are users consenting to? While there are limits on what can be deciphered from today’s data, technology will get smarter at processing, decoding, and leveraging it. Entities collecting neurodata—whether that’s from wearable devices and implants, or monitoring systems for workforce safety or productivity—could face increased scrutiny on data storage and management.
The right to personal identity calls out the power of technology to impact how we perceive and express ourselves.
Social media has already had a profound impact on freedom of expression and identity. While research suggests moderate use of screens and devices can support social and emotional well-being in children, significant screen time has been shown to disrupt circadian rhythms and sleep, affecting hormonal cycles in ways that may be a factor in early puberty. Overuse of TV and video games may impact how we develop, disrupting motor skill development and the ability to concentrate.
The right to personal identity may be among the least protected of the neurorights under current international frameworks, where there is currently no concrete language on how identity is formed and how to protect self-perception and self-expression.
What happens to our identity in a world where technology is interacting daily with our neural activity and hormones, and responding to data from vocal and facial expressions? And how might society and regulators respond to new research documenting unintended consequences?
The right to free will recognizes that decision-making is increasingly subject to technological manipulation.
Algorithmic amplification and recommender systems—common in social media and streaming services—also have tremendous potential to impact how we access information and form opinions. There are a wide variety of studies into the impact of algorithmic amplification on news, conflict, and commerce, with an equally diverse range of conclusions—everything from increasing the prominence of high- over low-quality information to the potential impact of algorithms on elections.
For a long time, we have accepted the role of advertising in influencing our decisions, and increasingly, we accept predictive text and corrective algorithms editing in real time how we express ourselves. However, the use of technology to discern and manipulate thoughts and behavior poses a very different level of risk to human rights.
While the NeuroRights Foundation doesn't specifically address risks to freedom of thought or freedom of opinion, as these are already established human rights, it does explore the impacts of neurotechnology on these freedoms.
The right to protection from algorithmic bias points to the widespread impacts on socioeconomic outcomes of bias in algorithms and neurotechnology.
Bias is widespread in the development and application of technology. Research has shown that algorithms used by healthcare companies—to support the detection of heart disease, for instance—are often built on data that is not diverse, which leads to unequal outcomes or inaccurate results for patients, particularly patients of color. The UK’s Department of Health recently launched an investigation to explore the impact of potential bias in medical devices on patients from different ethnic groups, including data used in algorithms and AI tools.
Without proper controls, bias can influence neurotechnology, which may directly impact the quality and outcomes of user experiences. Diverse, cross-cutting teams and research methodologies need to be in place when designing, implementing, and monitoring technologies that interact with our minds to ensure challenges in access or adverse impacts from use are identified and mitigated.
This brings us to the final right, fair access to mental augmentation, which raises the question of how far a "neurotech divide" could hinder equality and inclusion. While some new offerings aim to enable inclusion, by restoring or replicating brain functionality, access to these will not be equitable, while the application of neurotech geared towards augmenting human capacity could require scrutiny in classrooms, workplaces, and competitive arenas.
Reports | Wednesday December 14, 2022
Double Materiality for Financial Institutions
Explore our survey on types of materiality approaches currently in use, periodicity and sources of assessments, and priority ESG issues now and in the future, based on responses from 13 key financial institutions.
In the face of emerging regulatory requirements related to environmental, social, and governance (ESG) disclosure and pressure from a wide range of stakeholders, the financial services industry is now looking more closely at how best to identify and address the material risks and impacts of its operations and value chains. This marks a departure from previous ESG materiality approaches, which largely focused on financial risks to the business alone.
Explore our survey on types of materiality approaches currently in use, periodicity and sources of assessments, and priority ESG issues now and in the future.
Blog | Tuesday December 13, 2022
FIFA World Cup: Combating Modern Slavery at Mega Sporting Events
Major sporting events, like the FIFA World Cup, have been linked with human rights abuses. Here’s how business can protect human rights throughout these events.
On Sunday, November 20 at the Al Bayt Stadium in Qatar, Ecuador prevailed 2-0 over the host country team at the inaugural game of the FIFA Men’s World Cup—a tournament marred by controversy over human rights impacts on migrant workers employed at construction sites or providing essential services.
While mega sporting events can be catalytic to promoting human rights, there are indeed serious implications for business not just before, but during and after such events if companies fail to conduct effective due diligence or put in place prevention and mitigation measures. Grave violations of human rights, such as forced labor and human trafficking, entail far-reaching business consequences amidst heightened public scrutiny.
The UN Guiding Principles on Business and Human Rights (UNGPs) provide guidance on the steps businesses should take to avoid infringing on the human rights of others and to address adverse impacts. Businesses that are involved in major sporting events must be aware of their duties and responsibilities.
The Context
Migrant workers who traveled to Qatar with the promise of well-paid jobs—including from Nepal, Bangladesh, India, Pakistan, Sri Lanka, the Philippines, and Kenya—reported widespread labor abuses. Exploitative and forced labor practices included significant recruitment fees, passport confiscation, debt bondage, and poor living conditions, and have been linked to the tragic deaths of at least 6,500 workers.
Reported abuses are not confined to the construction industry. Thousands of workers in multiple sectors central to the sporting event, such as cleaning services, private security, and waste disposal, have alleged serious human and labor rights violations. Hotel staff in the accommodation and hospitality sector were subjected to slavery-like practices, such as wage retention and high recruitment fees.
Over the years, the nexus between human trafficking and major sporting events, such as the US Super Bowl, has emerged not only in the lead-up to, but also during and after, such events. Sex trafficking, involuntary servitude, and labor exploitation can all increase because of the demand for services and, sadly, due to the influx of visitors to a host city or state. Even beyond an event, infrastructure such as sporting facilities and hotels will continue to require workers to maintain and run properties.
Key Business Priorities
Beyond the FIFA World Cup in Qatar, and with other major sporting events planned such as the Summer Olympic Games in France and the 2026 FIFA World Cup across Canada, Mexico, and the United States, there are several key steps business and investors can take:
- Implement a strong company anti-trafficking policy. A viable prevention strategy to address the risks of modern slavery starts with taking a leadership position in developing and adopting a forward-facing policy that addresses fair recruitment of migrant workers. Suppliers and vendors should responsibly source their products and hire staff ethically—sub-contracting practices should be limited and duly monitored.
- Train staff on how to respond. Standard Operating Procedures (SoPs), especially for hotel personnel, should be adopted. Training staff on human trafficking indicators can help identify victims and address the misuse of accommodation premises, including for sex trafficking.
- Enhance grievance mechanisms. Companies can introduce an effective reporting mechanism for workers to safely report labor abuses, exploitation, and other human rights abuses. The application of technology-based innovation can help workers report anonymously.
- Ensure remedy to those affected. As a follow-up to established abuses, companies involved can take urgent action, ensure decent and safe work for migrant workers, and provide effective remediation.
- Involve local organizations and grassroots associations. Businesses can meaningfully engage with human rights and civil society organizations on the ground before, during, and after their involvement with a sporting event to assess risks and adopt necessary mitigation strategies.
Human rights abuses related to large sporting events are sadly not infrequent. With the release of the new Global Estimates of Modern Slavery indicating that 86 percent of forced labor cases occur within the private economy, companies have not only a responsibility under the UNGPs’ framework to counter modern slavery, but also a critical role in promoting and advancing human rights at mega sporting events worldwide.
People
Jennifer Easterday
Blog | Wednesday December 7, 2022
Conflict-Sensitive Human Rights Due Diligence for Tech Companies
Explore our new Toolkit for tech companies on navigating conflict-related issues.
The last decade has seen an increase in state fragility and the number of violent conflicts around the world and a decrease in rule of law. Conflict-affected and high-risk markets are often characterized by serious human rights violations and harm to individuals—including loss of life, basic freedoms, or livelihoods.
Companies operating in these contexts face heightened risks of involvement with human rights harms and could exacerbate conflict and instability through hiring and procurement decisions, partnerships with local entities, compliance with local laws, or the use of their products and services. This exposes companies to potential reputational damage, interruptions in business operations, legal liability, and financial penalties.
The tech industry has a particularly complex connection with conflict and instability. Emerging digital technologies have become increasingly essential and ubiquitous factors in our lives, communities, and societies. At the same time, there is increasing evidence of the industry’s role in exacerbating conflict. Moreover, the malicious use or disruption of technology to undermine international peace and security is a growing concern among states and regulators.
Conflict, fragility, and human rights are closely linked: grievances over human rights violations can destabilize and drive conflict, while violent conflict creates additional fragility and heightens human rights risks. The UN Guiding Principles on Business and Human Rights (UNGPs) call on companies to conduct heightened—or more in-depth—due diligence in conflict settings due to the proportionately higher risk of adverse human rights impacts.
What is eHRDD?
Heightened human rights due diligence (“eHRDD”) is, in essence, HRDD plus conflict sensitivity. It requires identifying conflict impacts as well as human rights impacts. For tech companies, conducting eHRDD in conflict-affected and high-risk areas (CAHRA) poses unique challenges and requires a rethinking of how technology can impact conflict and pose heightened risks of human rights harms.
CAHRA are “areas in a state of armed conflict or fragile post-conflict as well as areas witnessing weak or non-existent governance and security, such as failed states, and widespread and systematic violations of international law, including human rights abuses.”
They can include situations of mass violence as well as areas with weak governance or rule of law; extensive corruption or criminality; significant social, political, or economic instability; historical conflicts linked to ethnic, religious, or other identities; closure of civic space; and a record of previous violations of international human rights and humanitarian law.
Due to the vast diversity in business models, products, services, and technologies used in the tech industry—such as social media platforms, search engines, facial recognition, AI, machine learning, cloud computing, software companies, quantum computing, telecommunications, or network infrastructure—no two due diligence processes will be the same. However, there are clear phases to eHRDD and concrete steps all tech companies can take.
BSR’s Toolkit provides analytical and operational decision-making guidance for tech companies on navigating conflict-related issues. This practice-oriented guidance was written in close consultation with both the technology industry and other diverse stakeholders, including local civil society from high-risk markets.
We lay out nine steps for eHRDD, and each step has multiple components. By adapting these nine steps, we hope that tech companies can develop robust enhanced human rights due diligence processes that can help reduce the risk that technology contributes to conflict. These steps are:
- Develop a Formal eHRDD Policy and an eHRDD Process
- Build and Strengthen Cross-Functional Capacities
- Scope eHRDD Application: Triggers and Thresholds for eHRDD
- Conduct a Conflict Assessment
- Analyze Actual and Potential Impacts
- Address Impacts
- Communicate Progress
- Cross-Cutting Issue: Stakeholder Engagement
- Cross-Cutting Issue: Leverage Industry-Led and Multi-Stakeholder Collaboration
The guidance is targeted to larger multinational technology companies, but it can also be scaled down and applied by small and medium-sized technology companies or startups. We’ve also developed a short accompanying primer that summarizes the steps above and can serve as a rapid reference framework for companies as they build out these processes.
Next Steps
Additional work remains to be done, however. This guidance is meant to be a starting point for further collaboration, research, and diligence. Specific areas of future focus should include: a robust analysis of the impact of different types of technologies on conflict; additional guidance on how to conduct conflict-sensitivity analyses for diverse types of technology, such as artificial intelligence and machine learning, social media, and telecommunications; and deep dives or pilots of this methodology in diverse parts of the world.
We look forward to building on this guidance and invite tech companies to get in touch to find out how to get involved.
Reports | Wednesday December 7, 2022
Conflict-Sensitive Human Rights Due Diligence for ICT Companies
This toolkit provides analytical and operational decision-making guidance for tech companies on navigating conflict-related issues.
The last decade has seen increases in state fragility and the number of violent conflicts around the world and a decrease in the rule of law. Conflict-affected and high-risk markets are often characterized by serious human rights violations and severe harm to individuals—including loss of life, basic freedoms, or livelihoods.
Companies operating in these contexts face heightened risks of being involved with those human rights harms, and the tech industry has a particularly complex nexus to conflict and instability.
Our toolkit provides analytical and operational decision-making guidance for tech companies on navigating conflict-related issues. It is intended to help these companies determine:
- What key systems and processes they need to have to detect and address human rights risks during conflict;
- What situations and contexts should trigger heightened due diligence practices; and
- What enhanced or heightened due diligence should entail.
We’ve also developed a short Accompanying Primer that summarizes this guidance and can serve as a rapid reference framework for companies as they build out these processes.