
Key Advance 4: Evidence-Based Conservation

CbD 2.0 emphasizes the generation, collection, synthesis, sharing and leveraging of evidence. This emphasis is now so central that evidence is called out explicitly in three of the five phases (i.e., identify challenges and goals; map strategies and places; adapt). Making robust decisions about investing limited conservation funds requires understanding – and bolstering where needed – the strength of the evidence underpinning a given theory of change.

Note that an evidence base on its own will not disseminate new knowledge about how to accomplish these strategies; we must also commit to proactively sharing what we learn. Conversely, sharing knowledge without a commitment to strengthening the evidence base is a lost opportunity, and is equally insufficient on its own. Here we advocate for evidence coupled with knowledge sharing, as it is this combination of skills and commitments that is needed to truly advance conservation. For this reason, we distinguish the term “knowledge-sharing” from “evidence base”, defined above.

Key Concepts for Building the Evidence Base

Evidence relevant to conservation comes from a wide variety of disciplines and sources

Conservation scientists are typically most familiar with evidence from the fields of conservation biology, ecology, evolution, and spatial planning, and may assume ‘good’ evidence is found only in peer-reviewed publications. However, conservation must now draw on evidence produced by a much more diverse set of fields, including health, poverty alleviation, education, demography, psychology, economics, anthropology, and sociology, among others. In addition, much conservation knowledge exists as traditional knowledge held by local communities, stakeholder groups (e.g., farming coalitions, business roundtables, extension networks), or Indigenous peoples. Teams should place equal effort on identifying relevant evidence across disciplines, sectors and knowledge sources, as appropriate.

Evidence base assessments should not perpetuate confirmation bias

The conservation community is prone to elevate successes and shares information about failures far less often. Teams conducting evidence base assessments should aim to identify all relevant evidence relating to an assumption, including evidence that supports it and evidence that refutes it. Given the potential for hidden bias toward affirmative results, we encourage conservation teams to intentionally seek out evidence that may counter leading assumptions about how strategies will create positive impacts for nature and people. Such evidence may otherwise be overlooked, leading to false and misleading interpretations of existing knowledge.

An assessment of the evidence base creates transparency

It is critical to differentiate between proposed actions that are strongly supported by evidence, those where the knowledge base shows conflicting information (some showing the action’s effectiveness and some showing failure), and those where no evidence exists. This transparency of status will allow conservation teams and managers to make informed investments, define monitoring needs, and make risk-related management decisions.

The emphasis on evidence should not limit innovation and creativity

Innovation is a key element of conservation success, and requires generation and testing of novel ideas. By definition, such new ideas will not have a full body of evidence supporting their effectiveness for conservation. Those novel strategies may nonetheless be worth investing in if the potential reward is great enough. A commitment to evidence must not stifle innovation. However, for projects with a limited evidence base, it is especially important to invest in building the evidence base through well designed and sufficiently funded research and monitoring.

Evidence must meet minimum standards to be considered evidence

To be considered evidence, the assumptions made by the conservation team must have been measured or observed. Opinions that something should work, even when contained in a conceptual peer-reviewed paper, do not count. For example, if sustainability standards are expected to change corporate behavior, conservation teams would look for evidence that sustainability standards have been shown to create change in corporate behavior in real-world cases. Papers identifying a conceptual pathway through which such change could happen do not provide such evidence. Reports or agreements in which corporate leaders have pledged to change behavior do not constitute evidence. Papers or reports that show adoption of different practices as a result of sustainability standards do constitute evidence for this assumption.

Not all evidence is created equal

Evidence is strong when we have confidence that additional data will not reverse our conclusions. This is generally the case where there are consistent findings across multiple studies (e.g., meta-analysis) or where effects are far too large to be attributed to chance alone. Studies that use rigorous experimental designs (including before-after comparisons as well as an appropriate control group) also generate confidence. Although there is not yet consensus among conservation practitioners on a standardized approach to evidence grading, expert judgment should consider these factors when assessing strength of evidence. This version of the Guidance presents a basic method for characterizing evidence quantity using this minimum standard (e.g., strength of evidence for results chains). Future Guidance versions will include methods for assessing evidence quality, an equally important element of evidence assessment.

The required strength of evidence will vary from case to case

The strength of evidence needed to provide confidence varies from decision to decision. For example, decisions that present high financial or reputational risks, or risks to vulnerable stakeholders, should be held to a higher evidence standard than those with relatively low risk. The way in which evidence is intended to be used in a strategy will also determine the strength and type of evidence required. Thus, a key consideration for conservation teams is whether the available evidence is “sufficient” for the assumption, decision or strategy it supports. Factors that determine sufficiency include organizational risk tolerance and the information requirements and risk tolerance of stakeholders.

A well-developed evidence base can reduce and focus monitoring needs and minimize costs

Given limited conservation resources, teams should focus research and monitoring efforts on high-priority information gaps, with priority determined by factors such as the stakes of being wrong and the information needed to influence key actors (see Monitoring section). As the evidence base for conservation builds, we will be able to continually shift investments toward filling key evidence gaps and away from measuring well-documented outcomes.

Tips for Introducing Evidence-Based Conservation

Design a project to generate evidence. Teams can accelerate the development of a Conservation Evidence Base by thinking about their conservation engagement as a “hypothesis”, and building into it elements of good experimental design, such as: a clear understanding of the assumptions being made in the theory of change; a hypothesis of the change we […]

External Resources
The Bridge Collaborative

The Bridge Collaborative is driving a fundamental shift in how we think, plan, fund and work across sectors to make bigger change faster. We unite people and organizations in health, development and the environment with the evidence and tools to tackle the world’s most pressing challenges – from climate change and biodiversity loss, to poverty and malnutrition, to air and water pollution. Because there’s only one way to solve the most critical problems we face: Together.

Case Studies
Comparing the cost effectiveness of nature-based and coastal adaptation: A case study from the Gulf Coast of the United States

Published in PLOS ONE in April 2018 by a team at UC Santa Cruz, ETH Zürich, and The Nature Conservancy, the study quantified the flood risks to people and property for the entire U.S. coast of the Gulf of Mexico under current and future climate scenarios and economic growth projections, and compared the cost effectiveness of nature-based and artificial solutions for flood reduction across the Gulf of Mexico.

How do pesticide taxes and habitat subsidies compare for health risks, income and the environment?

This case study, found in the Bridge Collaborative Practitioners Guide on page 14, demonstrates how using evidence from multiple sectors can provide a common basis from which to compare approaches across multiple types of impact.
