
Chapter 2
Risk, Trust and Mass Media



I The concept of risk


2.1 We are all risk managers. Whether it is the risk of losing a driver’s licence because of a car accident or the risk entailed in setting up an air carrier, individuals and businesses face risks every day.


2.2 Defining risk, then, is a critical first step. Historically, the word risk related to chance outcomes, regardless of whether these outcomes were positive or negative. Over time, however, the emphasis on fortuitousness was lost, and risk became virtually synonymous with undesirable outcomes and the chance of their occurrence. Black’s Law Dictionary emphasises the negative aspects of the term by reference to “chance of injury, damage or loss”.1 Similarly, most laypersons define risk in terms of adverse consequences, a definition which also carries a negative connotation and refers to loss, damage, or injury.2


2.3 On the one hand, scientific approaches to risk emerging from fields such as statistics, mathematics, economics, insurance, and engineering understand risk as measurable in terms of probability; it is an event that can be controlled and managed.3 Risk is conceived of as the product of the probability and consequences (magnitude and severity) of an adverse event, which can be separated from subjective values and exists independently of our perceptions.4 Risks are preexisting in nature and, in principle, identifiable through scientific measurement and controllable using this knowledge.5
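The technical conception in 2.3 — risk as the product of the probability and the consequences of an adverse event — can be sketched in a few lines of code. This is an illustrative sketch only; the hazard names and figures below are invented for the example and are not drawn from the text.

```python
# Sketch of the scientific definition of risk: expected loss = probability x consequence.
# The hazards and figures below are purely illustrative assumptions.

def expected_loss(probability: float, consequence: float) -> float:
    """Risk as the product of the probability and magnitude of an adverse event."""
    return probability * consequence

# Hypothetical annual figures for two hazards (not real data).
hazards = {
    "engine failure": (0.001, 5_000_000),  # rare but severe
    "baggage damage": (0.05, 2_000),       # frequent but minor
}

# Rank hazards by expected loss, the "objective" ordering of the technical view.
ranked = sorted(hazards.items(), key=lambda kv: expected_loss(*kv[1]), reverse=True)
for name, (p, c) in ranked:
    print(f"{name}: expected loss = {expected_loss(p, c):,.0f}")
```

On this view the rare-but-severe hazard dominates purely by arithmetic — the value-free ordering that the cultural approaches in 2.4 call into question.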


2.4 On the other hand, cultural approaches to risk, current in fields as diverse as cultural anthropology, philosophy, sociology, social history, and cultural geography, understand risk as “a central cultural and political concept by which individuals, social groups and institutions are organised, monitored, and regulated”.6 For this reason, there are no standard definitions of risk in the cultural field. Risk is primarily what people define as such; thus in order to understand it, one should examine the societies and cultures in which people live.7



II Risk and the Predictability of Accidents


2.5 In pre-industrial societies, people “were never allowed to forget that they were helpless before the fates or the gods or whatever exogenous power struck their fancy”.8 They practised divination, spiritual guidance and prayer to cope with the unknown and with uncertainty. Catastrophes, accidents and hazards were considered the products of external forces, and humans could do little to avert them. That changed during the Renaissance. Belief in divine predetermination, superstitions and fate declined, giving birth to more recognisable concepts of individual responsibility, choice and risk.


2.6 The world was no longer taken for granted, and people realised that previously unforeseeable dangers need not be accepted as given; they could be managed through calculative methods of assessment. With the evolution of statistics and insurance in industrial societies, risk was made measurable, which distinguished it from uncertainty. Risks could be controlled through prevention and mitigated by institutions, and they were associated with blame, choice and responsibility.9


2.7 Intrinsically linked to industrialisation and the predictability of risk was the concept of accident. As Paul Virilio writes, “Every technology produces, provokes, programs a specific accident…. The invention of the boat was the invention of shipwrecks…. The invention of the airplane was the invention of the plane crash”.10 By using actuarial models of assessment, the likelihood of future accidents could be estimated. This in turn enabled the application of scientific rationality in risk management techniques to prevent or minimise the side effects of industrialisation, be they aircraft crashes, train collisions, or workplace accidents.


2.8 Furthermore, the ability to calculate risk statistically brought about a transformation in the theory of accident causation and investigation, one that reflected a shift in emphasis from human error to unsafe organisational conditions: statistical analysis was able to demonstrate that, quite apart from the human tendency to disregard personal safety through occasional careless, negligent and ignorant acts, accidents inevitably occur due to underlying systemic causes.11 In this way, risk was no longer tethered solely to individuals, opening up opportunities to relate accidents to organisations and the safety of their systems.12


2.9 During this period of “making the incalculable calculable”,13 a growth in actuarial data and refined risk-classification techniques enabled insurers to evaluate risks accurately and to set the corresponding premiums for liability insurance. This was a reasonable task in the industrial era, given the relatively transparent nature of risks and accidents: they affected a limited group of people in a certain place at a certain time.14 This localisation of risks and their consequences meant that the rate and distribution of losses, too, were measurable, fixed in time and place, and definite.


2.10 As a result, insurers could underwrite risks for reasonable premiums, and organisations could predict their operating costs by comparing the costs and benefits of investment in risk-control measures with the risk of liability for failing to prevent accidents. In that respect, insurance strengthened the position of the insured organisation by “encourag[ing] new ventures and new adventurers”,15 making risky investments with high returns feasible, and increasing overall productivity. Risk was not regarded solely as a danger that should be avoided but as an opportunity to be embraced.16 In this framework, insurance was deemed to be the handmaiden of industry,17 prompting Peter Drucker’s emphatic claim that “without insurance an industrial economy could not function at all”.18
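The comparison described in 2.10 — weighing the cost of investment in risk-control measures against the risk of liability for failing to prevent accidents — amounts to a simple inequality: invest when the precaution costs less than the expected liability it averts. The function and all figures below are hypothetical illustrations, not taken from the text.

```python
# Sketch of the cost-benefit comparison in 2.10. Invest in a precaution when its
# cost is below the expected liability it would avert. All figures are invented.

def worth_investing(precaution_cost: float, p_accident: float,
                    liability: float, risk_reduction: float) -> bool:
    """Invest if cost < expected liability avoided (probability x loss x reduction)."""
    return precaution_cost < p_accident * liability * risk_reduction

# A hypothetical 50,000 safety upgrade that halves a 1-in-1,000 risk
# of a 200m liability: expected saving is 100,000, so it is worthwhile.
print(worth_investing(50_000, 0.001, 200_000_000, 0.5))
```

Predictable operating costs of this kind are precisely what made premiums calculable and, on the account above, made insurance the "handmaiden of industry".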



III The Emergence of the Risk Society


2.11 At the beginning of the twenty-first century, risk is primarily regarded as a synonym for danger. The sociologist Ulrich Beck argues that Western societies are currently witnessing the transition to a risk society that is characterised by “a peculiar, intermediate state between security and destruction, where the perception of threatening risks determines thought and action”.19 Unprecedented technological progress and ease of transportation have made possible the use of sophisticated quantitative tools for evaluating risks and enhancing safety and security. At the same time, they have created a diverse new array of risks that cannot properly be contained, such as nuclear power and weapons, global warming, international financial crises, and terrorism. The defining characteristic of this era’s risks is that they threaten catastrophic events which “transcend traditional boundaries of time and space” and challenge even the most innovative risk management models and liability principles.20


2.12 Such risks defy the law of large numbers because the scarcity of historical data hinders prediction of the expected average loss, while their potential to cause loss to many exposures simultaneously threatens to exhaust the capacity of insurance markets. Furthermore, they raise complex issues of causation and discovery: multiple defendants and causal factors intertwine;21 experts allow room for ambiguous and sometimes conflicting interpretations of the scientific evidence;22 after-effects can remain dormant for several years;23 and consequences are not necessarily confined to any particular country.24
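Why such risks defy the law of large numbers can be illustrated with a small simulation — an assumption-laden sketch, not part of the original text. Over many independent exposures the average loss settles near the expected value, which is what makes industrial-era risks insurable; a single common shock that strikes every exposure at once leaves the outcome all-or-nothing, however large the portfolio.

```python
# Illustrative simulation: independent exposures obey the law of large numbers;
# a correlated, systemic loss does not. All parameters are invented assumptions.
import random

random.seed(42)

P_LOSS, SEVERITY = 0.01, 100.0
EXPECTED = P_LOSS * SEVERITY  # expected loss per exposure = 1.0

def independent_avg(n: int) -> float:
    """Average loss over n independent exposures (insurable, industrial-era risk)."""
    return sum(SEVERITY if random.random() < P_LOSS else 0.0 for _ in range(n)) / n

def correlated_avg(n: int) -> float:
    """One common shock hits every exposure at once: the average loss per
    exposure is all-or-nothing, no matter how many exposures are pooled."""
    return SEVERITY if random.random() < P_LOSS else 0.0

for n in (100, 10_000, 200_000):
    print(n, round(independent_avg(n), 3), correlated_avg(n))
```

As the pool grows, the independent average converges towards 1.0 while the correlated outcome remains either 0 or 100 — the capacity-exhausting pattern the paragraph describes.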


2.13 The notion of a risk society does not necessarily suggest a society that has become more hazardous, but rather one that “is increasingly preoccupied with the future … which generates the notion of risk”.25 Risk and the quest for safety constitute the focal point of the modern risk society, taking over the role played by wealth in industrial societies: “The dream of class society is that everyone wants and ought to have a share of the pie. The utopia of the risk society is that everyone should be spared from poisoning.”26



IV Risk and Objectivity: How Safe is Safe Enough?


2.14 The dichotomy between the objective-scientific interpretation of risk and the socio-cultural construction of risk lies at the heart of the transition from the industrial society to the risk society.


2.15 In a period in which a broadening set of unintentional and intentional man-made risks complemented natural hazards, a number of sophisticated methods for rational decision-making in situations of uncertainty, such as quantitative risk assessment and cost-benefit analysis, were thought to be the only ways to compare risks. As such, they were initially adopted with the aim of determining causation by analysis of past accidents, and later with a focus on prevention.27 As trust in scientific knowledge and technological innovation increased, the superstition-ridden past’s hold on the public waned. Organisations recognised dangers and sought to identify their causes objectively, an undertaking bolstered by a growing body of scientific experiment and theory.28 Uncertainty was transformed into a calculable quantity, which ushered in “an absolute reign of calculative reason”.29


2.16 Not surprisingly, corporations and regulators were attracted by the value-free character of technical risk assessment and were quick to embrace it as a primary decision-making tool.30 The sophistication of quantitative risk analyses, coupled with the newly found esteem of scientific research, offered a feeling of social control and technical efficiency.31 Inherent in the technocratic approach to risk was the belief that accidents can be eliminated, provided that public and private organisations are directed towards addressing issues of mechanical and engineering reliability. Taking aviation as an example, air carriers were vulnerable to the mechanical failure of aircraft, and efforts were concentrated upon minimising the effects that competing designs and materials had on the safety of operations.32 Human factors were recognised as an essential element of the risk management process, yet the techniques employed to determine the effect of human errors on the overall system’s safety either failed to assess the available data or focused on individual errors in isolation.33


2.17 Rational risk assessment was rooted in the idea of a knowable world. Understanding and preventing risk, like understanding plants and animals, was a matter of science not “intuition and fears”.34 The evolution of technical knowledge gave the impression that social problems and dilemmas could be resolved by reference to ostensibly objective ways of knowing, an assumption that went hand-in-hand with the ever-increasing influence of scientists and technocrats on society’s political life.35 In this new societal paradigm, the aim was to reply to the ubiquitous question of “how safe is safe enough?” before it was polluted by what Mary Douglas described as the dirty side of the subject, namely interests, ideology, politics, and morals.36



V Risk and Subjectivity: How Fair is Safe Enough?



Risk perceptions


2.18 The attempt to quantify risk analysis proceeded as though judgments and expert opinions were subjectivity-free and unbiased, despite a number of studies which demonstrated that experts are influenced not only by theoretical and methodological constraints in their discipline, but also by their social, institutional, political, and cultural interests.37


2.19 This is to argue that scientific models and utility principles cannot be the sole basis for decision making in a societal risk management process. Whereas experts, when making entrepreneurial decisions under uncertainty, concentrate on the acceptability of risks and assign probabilities to them, societal risk managers go a step further and put emphasis on subjective elements: they seek collective consent to technological risks, examine lay perceptions of risks, allocate liabilities to corporations, and invest trust in institutions.38 As such, societal risk managers should provide a bridge between science and the characteristics of society at the boundaries where the two interact. In this union, the underlying question of “how safe is safe enough” becomes “how fair is safe enough”, which opens the door to considerations beyond statistical probabilities and engineering models.



Rationality questioned: the risk society explained through the prism of the post-industrial world


2.20 This change of focus towards “qualitativism” in the private and public spheres did not occur in isolation, but was part of the wider movement from an industrial society to a risk society. In contrast to the industrial society, where the quantity of goods marked the standard of living, the contemporary risk society measures quality of life “by the services and amenities … which are now deemed desirable and possible for everyone”.39 A post-industrial society, as Daniel Bell noted persuasively in 1976, is not a world in which persons are equated to machines; it is primarily a game between persons in which knowledge and information play the predominant roles.40 The human element gradually regained its position as the most essential cog in the social machine, and the concern for quality of life, as distinct from mere material advancement, became a crucial pursuit of modern society.41


2.21 Such a shift was not without consequence. Following the unprecedented economic and technological growth that was achieved at all costs during the industrial era, Western societies enjoyed more education, more political freedom, and greater economic security. These improvements allowed citizens to become more critical of entrepreneurial and governmental interventions in both nature and society through technology and economic activities, and to seek greater participation in the decision-making process. The overall aim this time was to ameliorate living standards through the pluralisation of knowledge and the social control of technology.


2.22 This trend became particularly apparent after a series of high-profile accidents in the 1970s and 1980s. Though scientifically ruled out as either impossible or extremely improbable, these events generated damage that was unpredicted and difficult to control.42 Examples of such accidents included the nuclear accidents at Three Mile Island and Chernobyl, the Bhopal disaster, the collision of two B747s at Tenerife airport, the loss of Pan Am flight 103 over Lockerbie, and the Challenger disaster. These accidents came as immense surprises to the public, whose faith in scientific risk management was irreversibly shaken. The limits of probabilistic risk calculations were laid bare; and, in an ironic twist, science and technology – the very instruments of protection against disaster – became risks that needed to be controlled.43


2.23 The continuous effort to improve quality of life put science and technology under the social microscope, and their side-effects were no longer tolerated as part of the learning process. Instead they were considered impediments to the “good life” promised by society.44 The failure of experts to live up to social expectations and provide a safe pathway to prosperity called into serious question the quantitative paradigm, which presupposed the individual as a passive receiver of risk information.45 In a society where the dissemination of information and knowledge became a central element, the public demanded an active role in a two-way system. Expert opinion on technological risks was treated like any other kind of information, to be filtered by various social stakeholders. Devising a risk assessment model is no longer the end of the story, but instead constitutes the beginning of a process that ideally would establish what Anthony Giddens characterised as a “dialogic or engaged relationship” among experts, technology, and social stakeholders.46


2.24 In this triangular situation, individual and social constructions of risk play a predominant role. Ulrich Beck attributes their supremacy to the special qualities that risks in the risk society attain. They have the potential to “induce systematic and often irreversible harm, generally remain invisible, are based on causal interpretations, and thus initially exist only in terms of the (scientific or anti-scientific) knowledge about them. They can thus be changed, magnified, dramatized or minimized within knowledge, and to that extent they are particularly open to social definition and construction”.47


2.25 The boundary between real and perceived risk is blurred, and arguably it is becoming increasingly irrelevant “whether it is the risks that have intensified, or our view of them…. Because risks are risks in knowledge, perceptions of risks and risks are not different things, but one and the same”.48 This, then, suggests that risks are subject to multiple and often conflicting interpretations depending on the way that risk information is experienced and communicated. Thus, to manage risks in the risk society, one has to look beyond the probability × magnitude equation into the ways that people think about risk and experience risk events. These perceptual and experiential factors have the potential to amplify or attenuate the impact of risk events and in turn to generate social, legal, and economic consequences which require risk management actions from organisations, courts, governments, and the public, sometimes contrary to the findings of probabilistic risk analysis.49



Individual perceptions of risk


2.26 Our ability to make rational decisions under conditions of uncertainty is subject to a number of limitations, be they physical, educational, or otherwise.50 To deal with the inherently limited rationality of human nature, people rely on a number of mental shortcuts and rules of thumb – heuristics – which simplify the thinking process and enable them to make decisions quickly.51 Particularly important is the availability heuristic, which suggests that people do not review a large amount of data and scenarios when making probability assessments. Instead they take shortcuts and assess the frequency of an event by how quickly instances and images of similar events come to mind.52 In this case, high-profile risk events such as aircraft crashes are easily available, and their frequency is regularly overestimated, especially if the person has direct experience as a victim or indirect exposure through the media.
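The availability heuristic described in 2.26 can be caricatured in code: if judged likelihood tracks how readily instances come to mind — here crudely proxied by media coverage — rather than actual frequency, then heavily reported but rare events dominate the estimate. All figures and event names below are invented for illustration; this is a toy model, not a psychological finding from the text.

```python
# Toy model of the availability heuristic: judged likelihood follows the share
# of easily recalled instances (media stories), not true frequency.
# Every number here is a hypothetical assumption for illustration.

# event -> (assumed true annual frequency, assumed media stories recalled)
events = {
    "aircraft crash": (0.000002, 500),  # very rare, heavily covered
    "car crash":      (0.01,      50),  # far more common, lightly covered
}

def availability_estimate(recalled: int, total_recalled: int) -> float:
    """Judged likelihood as the share of instances that come easily to mind."""
    return recalled / total_recalled

total = sum(stories for _, stories in events.values())
for name, (true_freq, stories) in events.items():
    judged = availability_estimate(stories, total)
    print(f"{name}: true frequency={true_freq}, judged by availability={judged:.2f}")
```

The heavily covered aircraft crash dominates the judged likelihood despite being orders of magnitude rarer — the overestimation of memorable, high-profile events that the paragraph describes.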


2.27 Closely associated with this general heuristic is the way laypeople differentiate between risks and prioritise them. Psychological studies have identified a number of qualitative characteristics that can influence a person’s assessment of risks and his or her willingness to accept them.53 Among these factors is the tendency to place more emphasis on the controllability and consequences of a risk event: laypeople often overestimate the risk of low-probability, high-consequence events involving simultaneous loss of life, such as aircraft accidents, mainly because of the greater memorability and imaginability of such events54 and the combination of dread and lack of control.55 In that respect, they perceive driving as safer than flying, mainly because car passengers can exert influence on decision-making in ways that no air passenger can, and also because they are more accustomed to the technology in use.56


2.28 These factors can explain how individuals perceive risk events and their fears concerning high-risk technologies, but further elements must be incorporated into the analysis to demonstrate the interactions between individual considerations and socio-cultural values which will eventually shape “the public experience of risk”.57 These factors include organisational affiliations; community dynamics; mass media; and social interactions with family, friends, fellow workers, and neighbours.58 In other words, the individual must be observed in a milieu in order to examine factors that have the potential to influence fears, perceptions, and values. In that respect, the social-amplification-of-risk framework developed by Roger Kasperson et al. can provide useful insights.



Social perceptions of risk


2.29 The social-amplification-of-risk framework is based on the premise that risk events, which could range from accidents, disease outbreaks, and terrorist attacks to the publication of an investigation report, send out signals which “flow through first various sources and then channels, triggering social stations of amplification, initiating individual stations of amplification, precipitating behavioural reactions. These engender ripple effects, resulting in secondary impacts”.59 The secondary and tertiary effects that emerge extend far beyond the direct harm that victims experience, and may result in substantial indirect impacts, such as litigation against the responsible organisation, loss of reputation and sales, increased regulation for an industry, and so forth.60

