Lessons from Adaptive Management: Obstacles and Outcomes
Fig. 3.1 Conceptual model of adaptive environmental assessment and management, indicating the integration of processes that assess, propose, test and evaluate hypotheses of ecosystem dynamics and policy implementation (Holling 1981)
Fig. 3.2 U.S. Department of the Interior diagram showing steps in the adaptive management process (Williams et al. 2009)
Interestingly, neither model includes a distinct, separate step for the extensive traditional scientific inquiry that precedes implementation of an action. That is, neither conceptual model depicts science as a process unto itself (as an identifiable item or bubble). Elements of scientific process are ever present, i.e., hypothesis formulation, simulation experiments, monitoring, and modeling, but they are cast into the process of adaptive management policy experimentation. This is why many authors state that adaptive management blurs the distinction between science and management (Holling 1978, Walters 1986, Lee 1993, Gunderson and Pritchard 2002, Williams 2009): in an adaptive management framework there is no clear separation of science and management activities, and indeed both are part of a more holistic model of management. Both diagrams also emphasize the role of monitoring as a critical step in adaptive management. In the adaptive assessment phase of the process, one of the key outcomes is the identification of critical ecosystem variables to monitor. Monitoring should evaluate the outcomes of management interventions, and as such is a critical part of adaptive management (Walters 1997). While monitoring is done for many reasons, it is in the context of adaptive management that monitoring builds understanding and provides the basis for learning.
Adaptive Assessment and Creative Syntheses
A critical but often overlooked part of adaptive management is the environmental assessment process. This is indicated in Fig. 3.1 as the oval in which hypotheses are generated, and in Fig. 3.2 as the problem assessment and design modules. One of the important differences between adaptive assessments and other assessment approaches is how ecosystem understanding is integrated (or not). Scientific or ecosystem-based assessments are often based on piecemeal or disciplinary analyses of resource dynamics. Holling (1998) describes this problem as two different modes of science. He argues that one mode focuses on parts of the system and deals with analyses and experiments that narrow uncertainty to the point of acceptance by peers; it is conservative and unambiguous, but achieves this by being incomplete and fragmentary. The other mode is integrative and holistic, searching for simple structures and relationships that explain much of ecological complexity (Holling 1994). This second view provides the underpinnings for the adaptive approach, because surprises are inevitable and knowledge will always be incomplete.
One of the novel innovations in adaptive management was the use of computer models to structure the discourse in a series of workshops (Holling and Chambers 1973). Hundreds of environmental assessments in settings around the world (Walters 1997) have used computer models to articulate what is known and not known, highlight competing claims about ecosystem dynamics, and evaluate these alternatives. The construction of a computer model in a series of workshops has been a hallmark of adaptive management. First described by Holling and Chambers (1973), the workshops were structured to create an atmosphere in which interdisciplinary gaps (among various ‘ologies’ or sciences) could be bridged. One design element of the workshops was a decidedly “open” style, in which the participants and the rules were allowed to co-evolve. Another was an acknowledgment of failure: since the territory was so new, the likelihood of failure was high, and the approach had to be robust, or safe to fail. Part of this safe-to-fail design was that the activities of the workshops were called a game, with three components: people, rules, and tools. The use of computers as a communication device remains a staple of adaptive management today, four decades later. The computer models range from the simple to the sophisticated, but the key precept is that the models are developed as translators among various perspectives and disciplines, less as predictive, deductive engines for forecasting the impacts of proposed management actions. The computer displays information visually, which allows people to react to and absorb large amounts of information quickly. The computers are used as “gaming devices”, providing a safe environment in which the complexity of resource issues can be explored, and ideas tried, with no consequences other than learning (Holling and Chambers 1973).
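The flavor of such a gaming device can be suggested with a short sketch. The Python fragment below is purely illustrative: the logistic stock model, the harvest-rate lever, and every parameter value are invented for this example and are not drawn from any actual workshop. What it shows is the interaction style, in which a participant tries a policy and immediately sees the simulated consequence.

```python
# A minimal sketch of a workshop "gaming device": a deliberately simple
# stock model that participants can probe with a policy lever. All
# parameter values are hypothetical placeholders.

def simulate_stock(harvest_rate, years=20, stock=1000.0,
                   growth=0.3, capacity=5000.0):
    """Logistic growth with a fixed proportional harvest each year."""
    for _ in range(years):
        stock += growth * stock * (1.0 - stock / capacity)  # renewal
        stock -= harvest_rate * stock                       # harvest
        stock = max(stock, 0.0)
    return stock

# A participant "plays" by trying policy levers and reacting to the output.
for rate in (0.05, 0.15, 0.35):
    final = simulate_stock(rate)
    print(f"harvest rate {rate:.2f} -> stock after 20 years: {final:,.0f}")
```

Nothing about such a toy is predictive; its value, as in the original workshops, is that every assumption is visible and arguable by everyone at the table.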
The search for simplification is manifest in both the theory and practice of adaptive management. As mentioned above, computer models are built to help integrate and organize collective understanding of complex issues. The approach in constructing these models is to be parsimonious in the selection of variables and interactions; that is, to include only enough complexity in the model to capture the essential dynamics of the ecosystem. Otherwise, the model becomes as complicated as the ‘real’ world that is being assessed or managed, and as intractable (Clark et al. 1979, Walters 1986).
A key step in the assessment process is to determine the credibility of models. The computer models are viewed as hypotheses, and as such cannot be validated, only invalidated, in the Popperian view of science. The models are caricatures of reality, including only what is essential. What matters, therefore, is model credibility, not validity: it is only after resisting attempts at invalidation that a model becomes credible. One way of attempting invalidation is to compare model output with historical data (verified, not interpreted, data), bearing in mind that agreement between the model and historical data does not imply causation. Other means of invalidation include trial-and-error approaches that compare model predictions with what happens in the real world, natural trials in which model output can be compared to natural experiments, and comparison of the behavior of alternative models. Once the models (or sets of models) have resisted invalidation, they can be used to evaluate alternative policies.
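As a concrete, if highly simplified, illustration of this confrontation with historical data, the sketch below pits two candidate models against a verified time series and rejects any model whose one-step predictions stray beyond a stated tolerance. The series, the two candidate models, and the 5% tolerance are all hypothetical, chosen only to show the logic of invalidation.

```python
# Hypothetical verified historical record of an ecosystem variable.
historical = [1000, 1240, 1520, 1840, 2190, 2560, 2930]

def model_a(x):
    """Candidate 1: density-dependent (logistic) growth."""
    return x + 0.3 * x * (1.0 - x / 5000.0)

def model_b(x):
    """Candidate 2: unbounded exponential growth."""
    return 1.24 * x

def invalidated(step_model, observed, tolerance=0.05):
    """Reject a model if any one-step prediction misses the record
    by more than the stated relative tolerance."""
    for prev, actual in zip(observed, observed[1:]):
        predicted = step_model(prev)
        if abs(predicted - actual) / actual > tolerance:
            return True
    return False

for name, model in (("A", model_a), ("B", model_b)):
    verdict = "invalidated" if invalidated(model, historical) else "survives"
    print(f"model {name}: {verdict}")
```

Both candidates fit the early record; only as density dependence begins to bite does the exponential model fail. This is the sense in which a model earns credibility by resisting invalidation rather than by being declared valid.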
Many cases around the world have undergone the assessment phase of adaptive management, but only a subset has moved beyond it to the management phase (Walters 1997, Gunderson et al. 2008). Among the reasons is the inability to discern among competing hypotheses: rather than suggesting policies or management tests based on a single hypothesis, multiple hypotheses can point toward dramatically different actions. One such case arose in the assessment of the Florida Everglades.
A major unresolved environmental issue of the Everglades has been the decline of wading bird nesting (Davis and Ogden 1994). Explanations include a loss of early season habitat due to land use changes; a loss of food production due to wetland conversion to agriculture and development; changes in seasonal hydrology that affect food supply during nesting; increased predation; decreased water flow to estuarine habitats; changes in behavior due to heavy metals and other pollution; and an increase in feeding opportunities outside the Everglades ecosystem (a ‘distant magnet’), among others (Davis and Ogden 1994). In the assessment process, it was clear that many of these hypotheses centered on hydrologic modifications of the ecosystem (Walters et al. 1992). Indeed, the ongoing Everglades restoration plan is based on the assumption that hydrologic changes are at the heart of the decline in wading bird nesting. If the alternative hypotheses of behavioral change due to pollution or distant magnets are valid, then the multi-billion dollar recovery plan that calls for hydrologic manipulation will not meet wading bird recovery goals.
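The stakes of this hypothesis choice can be made concrete with a stylized comparison. In the toy sketch below, the functional forms and every number are invented for illustration and are not taken from any Everglades model; it shows only how the same hydrologic intervention yields sharply different predicted nesting outcomes depending on which hypothesis actually governs the birds.

```python
# Toy comparison of competing hypotheses. All functional forms and
# numbers are hypothetical, chosen only to illustrate divergence.

def nests_if_hydrology(flow_restored):
    """Hypothesis 1: nesting tracks hydrologic restoration."""
    return 10_000 + 40_000 * flow_restored

def nests_if_distant_magnet(flow_restored):
    """Hypothesis 2: birds feed elsewhere; flow barely matters."""
    return 10_000 + 2_000 * flow_restored

for flow in (0.0, 0.5, 1.0):  # fraction of historical flow restored
    print(f"flow restored {flow:.0%}: "
          f"hydrology -> {nests_if_hydrology(flow):,.0f} nests; "
          f"distant magnet -> {nests_if_distant_magnet(flow):,.0f} nests")
```

If the distant-magnet hypothesis holds, even full hydrologic restoration buys almost nothing for nesting, which is precisely why the inability to discern among hypotheses matters for a multi-billion dollar plan.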
While the Everglades adaptive assessment process was key to developing current restoration plans, the system has yet to move into active adaptive management (Gunderson and Light 2006). Indeed, it seems stuck in ongoing modeling and analysis that attempt to determine policy outcomes prior to any management action. The assessment process is key to designing policy actions that can be tested over time; models are useful in the policy design phase, but should not be used to predict outcomes (Walters 1986). It is only through testing actions in an adaptive framework, not through extended modeling and monitoring, that system understanding can be gained (Williams 2009).
Adaptive Management: Learning Through Doing
The essence of adaptive management is the development of actions that are designed as much for learning as for meeting other social objectives. The design of adaptive experiments or treatments is one of the outcomes of adaptive assessments (Walters 1986). The implementation of those experimental or treatment designs has been problematic, and can be stymied for a number of reasons, including the inability to control key variables at appropriate scales, unwillingness to risk uncertain outcomes, the costs of experiments, the inability to monitor key resource responses, and a lack of leadership (Walters 1997, Gunderson 1999).