



© Springer Science+Business Media Dordrecht 2015
Deborah Mascalzoni (ed.), Ethics, Law and Governance of Biobanking, The International Library of Ethics, Law and Technology 14, DOI 10.1007/978-94-017-9573-9_18


Making Researchers Moral


Why Trustworthiness Requires More Than Ethics Guidelines and Review


Linus Johnsson, Stefan Eriksson, Gert Helgesson and Mats G. Hansson


(1)
Uppsala University, Uppsala, Sweden

(2)
Karolinska Institutet, Solna, Sweden

 



 

Linus Johnsson (Corresponding author)
This chapter has already been published as: Johnsson L., Eriksson S., Helgesson G. and Hansson M.G. 2014. Making researchers moral: Why trustworthiness requires more than ethics guidelines and review. Research Ethics 10: 29–46. We kindly thank the publisher for allowing the reprint.



1 Prescript


In this paper we discuss how the individual researcher’s moral responsibility for her work relates to research ethics as an extra-legal regulatory framework. Though we address biomedical research in general rather than biobank research specifically, much of what is said here is equally relevant in both contexts. First, informed consent, here as elsewhere, is taken to be morally required, and many authors hold high expectations regarding its leveraging power. In contrast, public awareness of biobank research is rather low, and people tend to be more concerned about matters that informed consent procedures rarely address, such as the actual goals of research and how benefits are to be shared. Second, much of what is going on behind the scenes in biobank research obviously falls outside the scope of ethics review, perhaps more so than in traditional biomedical research. Ethical reflection on one’s research must therefore be an ongoing process rather than a one-shot affair. Lastly, legal and ethical documents governing biobank research continue to proliferate at an alarming rate, highlighting the need for a discussion on how researchers are supposed to orient themselves in an ever-changing and confusing ethico-legal landscape. The link between this paper and biobank research is elaborated in greater detail in Linus Johnsson’s thesis, available on the Uppsala University website.


2 Introduction


Research ethics, unlike the natural sciences, produces normative output—in essence, statements on what ought to be done. Though still an academic discipline, it has thus quite naturally come to double as the framework for extra-legal regulatory systems, much as jurisprudence is the foundation of legal regulation. It is tempting to assume that to be effective in guiding action, ethics must be formalised in the same manner, through steering documents, overseeing bodies, and formal procedures.

Today, the number of ethical guidelines and professional ethical codes intended to guide research is increasing at a tremendous pace (Eriksson et al. 2008). We also expect more of them: The Declaration of Helsinki, for instance, has gone from modestly declaring itself “only a guide” (World Medical Association 1964) to forcefully asserting that “No national or international ethical, legal or regulatory requirement should reduce or eliminate any of the protections for research subjects set forth in this Declaration.” (World Medical Association 2008) General principles have partly given way to enumerations of concrete rules, for instance with regard to what pieces of information should be disclosed to research participants. In some contexts, ethics review has increasingly become a matter of scrutinising informed consent forms (Edwards et al. 2011; Coleman and Bouesseau 2008; Hoeyer et al. 2005a).

In this paper we argue that ethics review and guidelines are insufficient to ensure morally responsible research. In some circumstances, regulatory research ethics can be more of a hindrance than a help. We begin by describing the paradigm of institutionalised distrust that currently informs it. Next, we argue that past atrocities cannot be drawn upon to back claims that research must be more strictly regulated unless what is proposed is a necessary or efficient means to prevent future ones. We thereafter consider the main limitations of ethics review and guidelines. With regard to ethics review, requirements of consistency invite rigidity; lack of reliable indicators of a project’s moral soundness may lead to idiosyncratic decisions; and the fact that committees depend on the moral agency of investigators is often overlooked. Strict adherence to guidelines is also no guarantee that moral responsibilities have been discharged. In fact, if guidelines are used as standards against which performance is measured, responsible conduct will occasionally be punished and blind rule-following praised.

In the next-to-last section, we identify some particular risks with the current system. First, ethics review that focuses strongly on some ethical aspects of research risks diverting attention from other morally significant issues. Second, guidelines with a low level of abstraction—that is, those orienting towards rules rather than principles—encourage a checklist-like approach to ethics that makes individual moral deliberation appear redundant, eventually leading to heteronomy of action. Third, when rules contradict one another (which they often do), they fail to provide guidance to researchers, and may even alienate them. The irresponsible conduct that follows tends to precipitate tighter regulation, thus perpetuating the vicious circle. Consequently, though substandard behaviour in the short term is indeed worrying, the moral competence of researchers in the long term should be cause for even greater concern.


3 Institutionalised Distrust


Social scientists have described the drive toward tighter regulation and systems of oversight as an expression of the ambivalence and insecurity that pervades post-modern society (Miller and Boulton 2007). People, it is argued, can no longer rely on social norms to govern the actions of others; to dare to cooperate, they must look for other guarantees. Where developing a personal relationship with the other is not feasible, one must then either find a trusted person to vouch for the other, or fall back on formal structures such as laws, rules and contracts—backed, of course, by appropriate mechanisms of sanction.

To the degree that this picture accurately describes the societies we live in, biomedical research is in trouble. If trust depends on social norms, the researcher will—to most people at least—count as an unknown other who should not be trusted. In some contexts, health care personnel with whom potential research subjects are more familiar can act as “proxies” or guarantors (Johnsson et al. 2012), but this is not always a viable option. It could be argued that if researchers are either insufficiently trusted or insufficiently trustworthy, we ought to at least make their actions more predictable so that public support of biomedical research may continue. This normative position forms the essence of the paradigm known as institutionalised distrust (Sztompka 1998; Hall 2005). This paper focuses on two of its mechanisms: oversight and formal rules. When an overseeing body—in our case, research ethics committees (RECs)—is given the task of distrusting researchers, the public will not have to; they can go on cooperating, confident that the necessary control systems are in place. But to ensure effective oversight and maintain the legitimacy of the overseeing body, we also need clear rules or performance standards against which deviations can be spotted. Guidelines, once intended to provide guidance, are today designed with this regulatory need in mind.

Institutionalised distrust resembles distrust between people in that it implies taking precautions, doing checkups, and developing contingency plans in order to minimise risk. But it rests on instrumental rather than empirical standards of justification: Whereas distrust between people is warranted by evidence of untrustworthiness, institutionalised distrust is rational insofar as it is likely to make the research enterprise more trusted and—perhaps—more trustworthy. This must be borne in mind whenever past experiences are used to back future policies.


4 The Problem to the Solution


If the Nuremberg Code is the foundation of bioethics, the Nazi atrocities that preceded it serve as the cautionary tale. But what moral does it teach? It is commonly claimed that it teaches us the necessity of informed consent (Goldworth 1999). Because we already know that informed consent is important, we may fail to notice the tenuousness of this claim. Granted, involuntary participation is impossible insofar as the ideal of informed consent is in fact realised. But it does not follow that merely requiring that informed consent be obtained would have been effective. A legal requirement of voluntariness was in place already in 1931, but made little difference to the victims (Hoeyer 2008). Arguably, no amount of research regulation will protect minorities in a totalitarian state, let alone one embracing Nazi ideology.

Now consider a more recent large-scale transgression of human rights, the Tuskegee syphilis study. The subjects—exclusively African-Americans—were led to believe that they were receiving treatment. This was a lie: Although the risks of untreated syphilis were repeatedly demonstrated throughout the study and penicillin was readily available, they never received any treatment. Through carefully placed letters to other physicians in the vicinity, the investigators even prevented the subjects from being treated elsewhere. Tragically, the Department of Health, Education and Welfare concluded in its Final Report that where the investigators had failed was in obtaining informed consent from their research subjects (Brandt 1978). Ignored or overlooked was the fact that even before the age of informed consent, what transpired would have counted not merely as morally problematic, but as obviously racist and evil.

Another lesson ostensibly taught by these examples is that researchers are unreliable unless watched. But we must not forget that the Nazi atrocities, though invented by individuals, were perfectly in line with contemporary public policy. Would an REC, had there been one, have condemned these experiments, or applauded them? As for the Tuskegee case, there was oversight. A committee at the Communicable Disease Center (now the Centers for Disease Control and Prevention) decided in 1969 that the study was to be continued—casting some doubt on the “mad scientist” account. Only when details of the study were leaked in 1972 was the project forced to a halt (Brandt 1978). In other words, it took a whistleblower—an individual—to end what the authorities let pass.

By virtue of their sheer brutality, the Nazi and Tuskegee cases remain persuasive even when badly told. But this is also what makes them miss the mark with regard to research regulation and oversight. The simple fact that some people are capable of murder does not make it reasonable to view every passer-by as a potential murderer. Similarly, atrocities committed in the name of research provide us with no good reason to distrust researchers across the board. What they do point out is what happens when abuse and exploitation are condoned or even encouraged by society. As with other major crimes, state-sanctioned or not, the solution is hardly to be found in better monitoring.

A better-chosen example to illustrate the need for research regulation would be one that points to genuine and justified uncertainty regarding researchers’ behaviour. It has been observed, for instance, that researchers occasionally impose more than reasonable risks on research subjects (Savulescu 2002). The question is: Should this count as a reason to monitor them even more closely, or to question the efficacy of such measures in cultivating trustworthiness?


5 Limitations of Ethics Review


Independent review by RECs has been argued to serve a key role in maintaining public trust in biomedical research (Hansson 2005). Its success in this regard may depend on how it is presented. It has been noted in other contexts that abundant use of corrective measures breeds further distrust, presumably by implying that there is much to correct (Koski 2007). For similar reasons, other authors have argued that institutionalised distrust should remain “in the shadows, as a distant protective framework for spontaneous trustful actions.” (Sztompka 1998) What ethics review does for the trustworthiness of research is a different, and for our purposes more important, issue. Ideally, it will help prevent badly designed or otherwise morally problematic research from being carried out. But here, too, there are some important limitations to consider.


5.1 Rigidity


The legitimacy of RECs as extra-legal regulatory bodies hinges on their ability to reach rationally justifiable verdicts. This implies, first, a degree of consistency over time and, second, that inconsistencies that do arise can be reasonably attributed to moral progress. Guidelines rarely provide answers clear-cut enough to stave off the threat of indeterminacy. For this reason, RECs have been found to rely more on local precedents than on theoretical frameworks (Stark 2012, 165). Through their “institutional memory”, RECs are able to embody norms and carry them over to future generations of researchers. But institutional memory can also become a burden that impedes progress. Demands of consistency make it impossible to improve one’s standards without calling past decisions into question. RECs also become less likely to critique societal norms, which undermines their position as moral authorities (if not as regulatory bodies). For instance, in a society infused with racist ideology, one could hardly trust an REC to reject a Tuskegee-like project. More generally, we cannot trust RECs to react to wrongs that common morality does not conceive of as such, or to abandon principles that no longer protect important values.


5.2 Idiosyncrasy


A main task of RECs is to weigh benefits and risks of proposed projects. The metaphor of weighing lends a flavour of objectivity to the procedure, as if it actually involved a set of scales. In reality, reaching consensus is very much an organic process. No matter how competent its members, an REC is not always ideally positioned to evaluate the scientific merits of research projects, especially when they deviate from the paradigm (Fistein and Quilligan 2011). It is tempting therefore to distinguish between “ethical” and “technical” issues, where the former but not the latter would be the responsibility of RECs (McGuinness 2008). But since badly designed research is by definition unethical, this position is difficult to justify.

Worse, arguments advanced during REC meetings may not always draw on observations that are rationally related to what they are supposed to assess. In an American study of IRBs (institutional review boards), references to embodied, firsthand knowledge—sometimes even personal life experiences—often turned out to be more persuasive than scientific facts, perhaps because they were harder to challenge directly (Stark 2012, 37). With the independence from research institutions that has become the norm in many countries, RECs usually lack personal knowledge of the applicants and so are unable to keep an extra eye on potential troublemakers (Kerrison and Pollock 2005). Though this was arguably never their responsibility, the fact remains that at least some RECs regard judging the character of the researcher as a crucial task. Some resort to surrogate measures such as her spelling abilities (Stark 2012, 15–18). It is reasonable to suspect that the diversity in how RECs judge projects—which poses a great problem for researchers—reflects such idiosyncrasies rather than, as is often claimed, local community values (Klitzman and Appelbaum 2012).


5.3 Dependency


A final limitation of RECs consists in the fact that their trustworthiness depends on that of researchers. This is so for several reasons. First, researchers are not merely the objects of evaluation; especially when new areas of research are broached, their suggestions are sometimes elevated to local precedents (Stark 2012, 49–50). Second, RECs commonly draw at least some of their members from the research community. Third, as RECs are usually not required to ensure that the research protocol is actually followed—which would in any case be prohibitively time-consuming—they will not be able to prevent harmful research unless researchers can be trusted to do what they have proposed to do and nothing else. Fourth, even the most diligent of RECs will sometimes fail to identify risks associated with a proposed project. When both the researcher and the REC fall short in this respect, people might be harmed (Savulescu 2002). In addition, the time and effort that some RECs put into “wordsmithing” informed consent documents (Klitzman and Appelbaum 2012) may leave them little time for such double-checking. The responsibility always resides with the researchers.

It has been observed in other contexts that in hierarchies of overseers and subjects, distrust tends to propagate upwards (O’Neill 2002, 130–133). The present case seems to be no different: Already voices are heard asking how RECs are to be monitored (Coleman and Bouesseau 2008). If one assumes the moral integrity of researchers to be compromised, such anxiety is understandable. Nevertheless, in the face of the problems we have pointed out, second-order monitoring would be largely unhelpful.


6 More Guidelines Needed?


Just as ethics review formalises ethical deliberation, guidelines formalise its principles. They are crucial to, but do not imply, institutionalised distrust. On the contrary, there are at least three conceivable normative positions on what they are supposed to achieve. The first two, it turns out, are untenable, while the third requires us to rethink how guidelines are to be written.


6.1 Steering


The first normative position is based on a perceived need for accountability, and thus for steering documents. To preclude corruption, it conceives of a division of labour between legislators, arbitrators (RECs) and subjects (researchers). Just like an engineer fine-tunes the workings of intricate machinery, the rule-maker works with constraints and springs, trying to devise rules that cover any contingency and incentives persuasive enough to ensure compliance. To the degree that the rules require interpretation, RECs have the final say. But the optimal document will be one containing nothing but propositions the truth value of which different evaluators will consistently agree on, regardless of their domain knowledge; in other words, a checklist. Guidelines have moved some way toward this ideal. Several items in recent revisions of the Declaration of Helsinki—for instance, those listing the required contents of research protocols and informed consent forms—lend themselves to box-ticking (World Medical Association 2008).
