J Philos Sci Law. Author manuscript; available in PMC 2009 Jun 23.

PMCID: PMC2700754

NIHMSID: NIHMS44451

Abstract

This article examines conflicts of interest in the context of scientific research related to regulation or litigation. The article defines conflicts of interest, considers how conflicts of interest can impact research, and discusses different strategies for dealing with conflicts of interest. While it is not realistic to expect that scientific research related to regulation or litigation will ever be free from conflicts of interest, society should consider taking some practical steps to minimize the impact of these conflicts, such as requiring full disclosure of information required for independent evaluation of research, prohibiting financial relationships between regulatory agencies and the companies they regulate, and banning payments to expert witnesses for specific research results, testimony or legal outcomes.

Keywords: conflicts of interest, research, science, bias, trust, regulation, litigation, pharmaceutical industry

1. Introduction

Conflicts of Interest (COIs) have always occurred in scientific research, but they did not catch the public’s attention until the 1980s, when the relationships between private companies and academic institutions intensified, due, in part, to legislative changes that encouraged technology transfer from the public to the private sector, as well as innovation and growth in biotechnology, pharmaceuticals, computers, and other key technologies. Concerns about relationships between private companies and academic institutions reached a crescendo in 1999, when Jesse Gelsinger died during a Phase I gene therapy experiment at the University of Pennsylvania. Food and Drug Administration (FDA) officials investigating the case found problems with reporting adverse events (AEs) in gene therapy studies. They also found that Gelsinger had not been adequately informed about the potential risks of the treatment as well as the researcher’s and University’s financial interests. Pennsylvania researcher James Wilson, the principal investigator in the experiment, held 30% of the stock in Genovo, a company that contributed $4 million a year to a University of Pennsylvania non-profit foundation, which sponsored the research. The University also held stock in Genovo. Additionally, Wilson held 20 patents on various gene therapy methods. Gelsinger’s death had a direct and immediate impact on the research community: the FDA reexamined the system for reporting AEs in research, and the National Institutes of Health (NIH) and several professional associations developed new guidelines for dealing with COIs in research.[1]

Although most commentaries and guidelines concerning COIs in science focus on research sponsored by private companies, COIs can also arise in other contexts, including research sponsored by government agencies or political organizations, and research related to legal proceedings, such as litigation or regulation. For example, a scientist who has been hired by a plaintiff to perform experiments designed to demonstrate that a company’s product can cause nerve damage would have a COI.[2] Likewise, a scientist conducting research for a pesticide company to submit to the Environmental Protection Agency (EPA) in support of the company’s application for pesticide registration would also have a COI. A forensic scientist working in concert with law enforcement officials on criminal cases would also have a COI.[3] Additionally, although most discussions of COIs focus on financial interests, other interests, such as professional, personal, political, or legal interests, can also create COIs.

This article will examine COIs in the context of scientific research related to regulation or litigation. The article will define COIs and related terms, consider how COIs can impact research, and discuss different strategies for dealing with COIs. While it is not realistic to expect that scientific research related to regulation or litigation will ever be free from COIs, society should consider taking some practical steps to minimize the impact of COIs, such as requiring full disclosure of financial relationships as well as information required for independent evaluation of research.

2. What is a COI in Research?

To have a better understanding of our subject matter, it will be useful to set forth some key definitions. To frame the discussion, let’s consider some cases that most people agree would constitute a conflict of interest:

  1. A father is asked to referee his daughter’s soccer game;

  2. A judge presides over a trial involving a close friend;

  3. A researcher owns $50,000 worth of stock in a medical device company that makes a product that he is testing;

  4. A pharmaceutical company invites a physician to an all-expense-paid seminar in Aspen, Colorado to learn about its products;

  5. A county commissioner votes on a zoning law that will increase the value of his land by $10,000;

  6. A woman, who is adamantly opposed to capital punishment, sits on a jury in a murder case in which the defendant could receive the death penalty;

  7. A scientist is asked to review an article that reports results that could refute a hypothesis that she has defended for over twenty years.

In each of these cases, a person has interests that may interfere with the performance of his or her ethical or legal duties. In case 1, the father has a duty to referee impartially, but he may have a difficult time doing this because his daughter is on one of the teams. In case 4, the physician has a duty to prescribe medications to promote the health of his patients, but he may be tempted to prescribe medications to promote the interests of the drug company. The interests in several of the cases (cases 3–5) are financial interests, but the interests also include personal interests (cases 1 and 2), political interests (case 6), and professional interests (case 7).

COIs can interfere with the performance of duties in a couple of ways. First, interests may tempt people to deliberately violate their duties. For example, the researcher who owns $50,000 worth of stock in a company might decide to not report data that tends to show that the device is unsafe in order to boost the value of his stock. COIs in research can lead people to deliberately act against scientific norms, such as honesty, openness, and social responsibility. Second, interests may also affect people at a subconscious level by undermining reasoning, perception, and judgment.[4] For example, the father in the soccer game may try his best to act impartially, but he may still be unable to do this, because his interests have clouded his thinking. We might say that the father’s judgment is biased, untrustworthy, or suspect. Putting all this together, we can define a COI for a scientific researcher as follows:

A researcher has a conflict of interest if and only if he/she has personal, financial, professional, political, or legal interests that have a significant chance of interfering with the performance of his/her ethical or legal duties.

A few comments about this definition are in order. First, although most discussions of COIs focus on financial interests, it is important to realize that people have many other interests, such as personal, professional, or political interests. In some situations, these other interests can have a large impact on a person’s judgment and decision-making. While it is important to recognize that COIs may not always involve financial interests, it may still be the case that financial interests tend to create more problems for scientific research than other interests. The corrupting effect of money may be greater than the influence of the quest for prestige, power, or fame. This is an empirical issue that will not be explored further here.

Second, a person does not need to actually violate his or her ethical or legal duties in order to have a COI. What matters is that there is a significant chance that the person will fail to perform his/her duties as a result of his or her interests. But what counts as a significant chance? This is not an easy question to answer, since it involves a line-drawing problem. On the one hand, a 0.5% or less chance would appear to set the threshold too low, since people would have COIs even when there is only a remote possibility that their interests will interfere with their legal or ethical responsibilities. Almost everyone would have COIs all the time, which is absurd. On the other hand, a 50% or greater chance would seem to set the threshold too high, since people would not have a COI unless they are likely to violate their ethical or legal obligations as a result of their interests. Few people would have COIs, which also doesn’t make much sense. So the threshold should be set somewhere between a 0.5% chance and a 50% chance. The problem of setting the appropriate threshold for COIs is similar, in some respects, to determining an acceptable error rate for a scientific test or method. The acceptability of an error rate depends, in large part, on the consequences of being wrong when relying on a test or method.[5] The more severe the consequences of being wrong, the lower the acceptable error rate should be. Thus, in some situations, the COI threshold should be as low as 1%, whereas in others it may be as high as 10%. For most scientific research, a 5% threshold would be appropriate.[6]
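The threshold reasoning above can be summarized in a small sketch. This is an illustration of the line-drawing logic, not a method proposed in the article; the function names and the mapping from consequence severity to the 1%/5%/10% thresholds mentioned in the text are assumptions introduced here.

```python
# Illustrative sketch of the COI threshold discussion: an interest counts as a
# COI when its estimated probability of interfering with a duty meets a
# context-dependent threshold. Thresholds follow the figures in the text
# (1% for severe consequences, 5% for most research, 10% for mild stakes).

def coi_threshold(consequence_severity: str) -> float:
    """Return a hypothetical probability threshold for declaring a COI."""
    thresholds = {
        "high": 0.01,     # severe consequences of being wrong -> low threshold
        "typical": 0.05,  # most scientific research
        "low": 0.10,      # mild consequences -> higher threshold
    }
    return thresholds[consequence_severity]

def is_coi(p_interference: float, consequence_severity: str = "typical") -> bool:
    """Classify an interest as a COI by comparing probability to threshold."""
    return p_interference >= coi_threshold(consequence_severity)

print(is_coi(0.005))        # a remote (0.5%) possibility: not a COI
print(is_coi(0.07))         # exceeds the 5% threshold for typical research
print(is_coi(0.07, "low"))  # but below the 10% threshold when stakes are mild
```

The same estimated probability can thus fall on either side of the line depending on how severe the consequences of being wrong would be, which mirrors the error-rate analogy in the text.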

Third, this definition does not include situations in which it only appears that a person will not perform his or her ethical or legal duties. The mere appearance of impropriety is not sufficient for having a COI. It is important, therefore, to distinguish between a COI and the appearance (or perception) of a COI. Some scholars and organizations do not make this distinction.[7] In some situations, such as public service and government work, how things appear matters a great deal, and an apparent COI may be as serious as a real COI. In scientific research, however, apparent COIs are not as significant a problem as real COIs. In science, what matters most is not how a situation appears to the public but how a situation may affect the integrity of the research record.

Institutions and organizations, such as universities, medical centers, journals, professional associations, and government agencies, can also have COIs.[8] Institutions and organizations, like individuals, have legal and ethical duties. Even though institutions do not make conscious choices, they make collective decisions that can be influenced by financial or political interests. Some examples of situations that could constitute institutional COIs include:

  • A university conducts a clinical trial sponsored by a private company. The university owns $100,000 worth of stock in the company, and has licensed several inventions to the company.

  • The Director of one of the National Institutes of Health’s institutes submits one of his human research protocols to the institute’s institutional review board.

  • A government agency responsible for regulating a company’s products receives financial support from that company.

  • A professional organization that provides financial support for a peer-reviewed journal attempts to influence editorial decisions.

  • A hospital receives substantial donations from a company that is sponsoring clinical trials conducted at the hospital.

With these examples in mind, we can define institutional COIs as follows:

An institution has a conflict of interest if and only if the institution has financial, political, or legal interests that have a significant chance of interfering with its performance of its ethical or legal duties.

It is also important to distinguish between COIs and conflicts of duty and conflicts of commitment. A conflict of duty involves a conflict between two different legal or ethical duties. Suppose, for example, that an epidemiologist conducts a study of mental illness in a small community and that publication of some of the results may stigmatize the community or cause embarrassment. The epidemiologist would have to resolve the ethical dilemma of whether to follow her ethical obligation to publish all of her results or her obligation to protect the community from harm. A conflict of commitment occurs when a researcher has different commitments that place various demands on his or her time and effort. Suppose, for example, that an engineering professor spends 20% of his time consulting for different companies. To deal with this conflict, the professor would need to manage his time to ensure that his consulting work does not interfere with his obligations to the university, such as teaching or research. Even though conflicts of duty and conflicts of commitment are not the same as COIs, a conflict of duty or commitment could also involve a COI, if there are financial (or other interests) at stake that may compromise the researcher’s judgment. Even so, it is important to recognize that COIs are different from conflicts of duty or obligation.

3. Why Worry about COIs?

There are at least two reasons to be concerned about COIs in research: 1) COIs can compromise the integrity of the research; and 2) COIs can undermine the public’s trust in science. There is a growing body of research demonstrating how financial interests can compromise the integrity of research. Studies have shown that there is a strong positive correlation between the source of funding and the results of research: most published studies sponsored by pharmaceutical companies tend to favor the companies’ products.[9] The fact that companies publish studies that favor their products is not at all surprising. What is troubling, however, is that there is also evidence that companies sometimes bend or break the rules of science to achieve favorable results. There are several different ways that companies (or others) can manipulate the research process to suit their purposes.

Funding

The first way that companies can manipulate the research process is by funding research to support their products. In 2004, U.S. pharmaceutical and biotechnology companies spent nearly $50 billion on research and development (R&D), which is $20 billion more than the budget of the National Institutes of Health (NIH), the main U.S. government agency that sponsors biomedical research. Private industry in the U.S. spent over $200 billion on R&D, or about twice as much as the U.S. government’s R&D budget, including defense-related R&D.[10] The huge amount of private money flowing into R&D often creates a serious imbalance of funding. A pharmaceutical company may be able to afford to conduct several different clinical trials to support its new product, which might lead to a dozen or more publications. If the government sponsors only one clinical trial comparing the company’s product to a competitor’s, which leads to 4 published articles, then 75% of the articles concerning the product would be funded by the company. While there is nothing inherently wrong with a company sponsoring research on its products, the company can skew the research record if its funding is not counter-balanced by independent sources of funding, such as the government. To help offset the effects of private funding of clinical research, government agencies should invest more money in studies that evaluate the safety and efficacy of health products that have been approved and are on the market.[11]
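The funding-imbalance arithmetic in the example above is simple but worth making explicit; the helper function below is an illustration introduced here, using the figures from the text (a dozen company-funded publications against four government-funded ones).

```python
# Illustrative arithmetic for the funding-imbalance example in the text:
# 12 company-funded publications vs. 4 government-funded publications.

def company_share(company_pubs: int, independent_pubs: int) -> float:
    """Fraction of the published record on a product funded by the company."""
    return company_pubs / (company_pubs + independent_pubs)

print(company_share(12, 4))  # 0.75 -> 75% of the articles are company-funded
```

Even a modest amount of independent funding lowers this share, which is the point of the counter-balancing proposal in the text.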

Research Design

A second way that a company may try to bias its research is to distort the design of a study toward yielding a particular result. For example, one can avoid demonstrating an adverse effect of a drug by failing to give a study enough statistical power to measure the effect or by not measuring the effect in the first place. Suppose that a company is trying to prove that its analgesic medication is safe and effective, and that it measures many different clinical and biological outcomes but does not measure the drug’s effects on potassium levels in the bloodstream. If the drug can cause dangerously low potassium levels, the adverse effect will not be reported to the FDA. Or, if low potassium levels are accidentally observed in some research subjects that have other problems, such as muscle weakness or cardiac problems, then these accidental observations will not be useful, because there would be no way of determining whether the drug caused the low potassium levels, since there would be no controls (i.e., other subjects in whom potassium levels are measured).[12]

Data Falsification or Fabrication

A third way that a company can bias its research is to falsify or fabricate data. Studies of government-sponsored research show that the incidence of research misconduct is probably very low, but still significant. Various studies place the incidence of misconduct at anywhere from less than 1% to 9%.[13] There are no studies on the incidence of fraud in privately funded research, although circumstantial evidence suggests that falsification and fabrication probably also occur in the private sector, since the financial and organizational pressures that can encourage misconduct in the public sector, i.e. the pressure to produce results, also occur in the private sector.

Data Analysis and Interpretation

A fourth way that a company may bias its results is through its analysis and interpretation of the data. In one well-known case, Boots Pharmaceuticals challenged Betty Dong’s analysis and interpretation of the data from a study she conducted on its hypothyroidism drug and tried to prevent her from publishing her research. Boots had funded Dong’s research in an effort to show that its drug was superior to generic alternatives. However, Dong demonstrated the opposite result. Furthermore, she also showed that the U.S. government could save millions of dollars each year by requiring that Medicaid patients take a generic hypothyroidism drug instead of Boots’ drug.[14]

Suppression of Results

A fifth way that a company may try to bias research is to suppress unfavorable results. Private companies often do not publish their research, and they often require researchers to sign agreements giving the company the right to review, delay, or stop the publication of research.[15] In the Dong case, mentioned earlier, Boots tried to prevent Dong from publishing her research. In another well-known case, Merck Pharmaceuticals had sponsored research that demonstrated that its anti-inflammatory drug, Vioxx, could cause cardiovascular problems, but the company did not publish these results for several years. In 1999, Merck sponsored a post-marketing study of Vioxx. In 2000, the company reported the results of the study to the FDA, which showed that Vioxx users had an increased risk of heart attacks. Merck published a study of Vioxx that reported the drug’s benefits but did not report its cardiovascular risks. In 2002, the FDA required that Merck revise the warning label on Vioxx to mention its cardiovascular risks. In 2004, another study of Vioxx showed that the drug doubles the risk of a stroke or heart attack when taken for 18 months or more. Merck soon took the drug off the market and began defending itself against thousands of lawsuits.[16]

Public Trust

The other main reason to be concerned about COIs in research is that they can undermine the public’s trust in science.[17] Scientists can ill-afford to lose the public’s trust, since public trust in science is essential for securing government funding of research and education, for avoiding burdensome regulations of research, for having access to resources and facilities, and for recruiting participants in clinical trials and other scientific studies. The public’s trust rests on the expectation that scientists will adhere to ethical, professional, and legal standards when conducting research. COIs can weaken the public’s trust in science by giving laypeople and politicians reasons to believe that researchers are unethical, unaccountable, or incompetent. Since apparent COIs can create the impression that researchers are behaving inappropriately, they can also have a detrimental impact on public trust. Hence, it is important for researchers to deal with apparent COIs.[18]

4. Strategies for Dealing with COIs

There are three basic strategies for dealing with individual and institutional COIs: disclosure, management, and prohibition.[19] Disclosure involves informing relevant parties about the interest that creates the COI or may cause the appearance of a COI. Relevant parties would include people who would want to know about the interest, such as reviewers and readers of a journal article, supervisors, and research participants. The virtues of disclosure are openness and transparency: disclosure allows other parties to learn about the interest and to make decisions based upon their evaluation of its significance. A reader who learns about an author’s financial interests related to an article may subject the author’s data, analysis, and conclusions to additional scrutiny. A potential research participant who learns about an investigator’s financial interests related to a clinical trial may have second thoughts about participating in the trial or may ask for more information than he otherwise would have asked for. Additionally, disclosure can help promote trust by avoiding the inadvertent discovery of undisclosed COIs. A person who discovers that a researcher has not disclosed a COI may become very distrusting and suspicious. He or she may wonder whether the researcher has something to hide, and may regard the researcher as biased or corrupt.

Sometimes a situation requires more than disclosure but is not so serious as to require prohibition. In some situations, the best strategy may be to disclose the conflict and use procedures and processes to manage it. For example, suppose that a researcher at a university medical center has started up a company to develop a clinically useful product—a blood substitute that can be used in medical emergencies. The researcher owns 10% of the company’s stock and serves on its research advisory board. The researcher invented the device and was issued a patent on it. He has transferred patent ownership to his university, which is licensing the invention to the company. The university also owns 10% of the company’s stock. The company is planning to conduct clinical trials involving its product but has not yet named possible trial sites. Obviously, this situation could create numerous COIs and apparent COIs, but it is not obvious that the best course of action would be to prohibit the researcher from having a close relationship with this company, since his expertise may be required to develop and market the product, which may greatly benefit the public. At this point at least, this is a situation that should be watched closely but not prohibited. The university could deal with this situation by forming a committee to oversee the researcher’s relationships with the company and his research. The committee could review licensing agreements, confidentiality agreements, contracts, and other documents that might raise concerns.

Suppose that the researcher tells the committee that the company would like him to serve as vice president for research and would like him to oversee a clinical trial site at the university medical center. One might argue that these proposed relationships and activities are so problematic that they cannot be adequately managed and prohibition may be required. If a researcher conducts a clinical trial at an institution in which he and the institution have financial interests related to the product being developed, the chances are good that ethical rules related to research will be bent or broken. For example, IRB members may face some pressure to approve the researcher’s protocols, even when they have reservations about consent and safety issues. The company may delay or stop the publication of negative results. Even if none of these unfortunate circumstances occur, the relationships between the company, the university, and the researcher would create the appearance of bias. One way of dealing with this situation would be to allow the researcher to serve as vice president for research for the company, but not allow the company to use the university medical center as a clinical trial site. Another solution would be to allow the research to take place at the university medical center but to require the researcher not to take the position as vice president for research for the company and to sell his stock in the company. The university medical center would also need to sell its stock in the company. There is a catch: both of these solutions would probably cut into the university medical center’s revenue, which is one reason why many universities in similar situations would not prohibit the relationships that create these COIs.

Prohibition is the most extreme way of dealing with COIs. How can one decide when prohibition (as opposed to some other strategy) is appropriate? I suggest that researchers consider the following factors when deciding how to deal with COIs:[20]

  1. The significance of the COI. What is the probability that the interest(s) will interfere with the researcher’s performance of his/her duties? As noted above, if the probability is very low, the situation may not create a COI at all but only the appearance of one. As the probability increases, the situation becomes more problematic.

  2. The ability to manage the COI. Some COIs can be managed through some form of independent oversight or review, but others cannot. The peer review system operates as a form of control over the biases that may affect scientific publications. If a researcher has a financial interest related to an article that he or she is submitting for publication, reviewers can evaluate his or her methods, analysis, and results. Once the article is published, others can replicate the results. There are very few independent checks on the judgments of peer reviewers, however. The opinion of one reviewer can significantly affect the fate of a grant proposal or an article. Hence, it is very important to prohibit COIs in peer review. There are other forms of independent oversight besides peer review, such as the judicial system, government agencies, independent auditors, and so on.

  3. The consequences of prohibiting the COI. Sometimes prohibiting a COI does more harm than good, because it denies science or society important benefits. For example, prohibiting academic scientists from serving on industry advisory boards might enhance trust in academic research, but it might also deny industry important advice and reduce collaboration between industry and academia. Prohibiting universities from investing in start-up companies might make it very difficult for companies to acquire enough capital to get off the ground.

To determine the best course of action, one should consider these three factors in light of the relevant facts. Decisions should be based on a thorough analysis of the financial, scientific, social, and legal circumstances concerning the case. There will be a strong case for prohibiting (or preventing/avoiding) a COI when the significance of the COI is high, the ability to manage or control the COI is low, and the consequences of prohibiting the COI are not very bad.

So far this article has focused on COIs in scientific research in general. The remainder of the article will discuss COIs in the context of research related to regulation or litigation. Research in the context of regulation or litigation is no different, in principle, from research conducted in other contexts, such as academia, because the norms and methods of science are the same, whether one is studying the atom or the safety of a pesticide. However, the financial interests at stake in regulation or litigation may be more powerful than those in other research contexts, because companies can stand to gain or lose hundreds of millions of dollars depending on the outcome of the legal decision. Hence, financial COIs can be a major concern for research related to regulation or litigation. The U.S. has a variety of federal regulatory agencies that encourage or require companies to conduct scientific research, including the Environmental Protection Agency (EPA), the Occupational Safety and Health Administration (OSHA), the Patent and Trademark Office (PTO), the U.S. Department of Agriculture (USDA) and the Federal Trade Commission (FTC). Most U.S. states and other countries also have laws and regulations designed to protect the environment or public health.

We have already alluded to research related to regulation in the discussion of research sponsored by pharmaceutical or biotechnology firms, since these companies are required by FDA regulations to submit data to the FDA in order to gain approval of their new products. So, the companies conduct research in order to comply with regulations. Although many companies may view FDA regulations as a hindrance to business, the regulations serve the important purpose of protecting public health and safety. Before the U.S. adopted laws and regulations pertaining to the safety of medications, companies marketed their products without taking the time to prove that they were safe or effective. FDA regulations protect the public from snake oil salesmen and quack cures. The potential for bias also exists in research conducted to comply with or influence other regulatory decisions.[21] For example, private companies that want to market pesticides in the U.S. must comply with EPA regulations. The EPA establishes an acceptable level of human exposure to a pesticide based on data from animal dosing studies. The EPA calculates the acceptable human exposure by dividing the no observed adverse effect level (NOAEL) in animals by 1000.[22] Like pharmaceutical companies, pesticide companies have financial incentives to generate data favorable to their products, and they could employ any of the strategies for biasing research discussed in Section 3 of this article. To comply with occupational health standards set by OSHA, a company would have an incentive to present data that supports the hypothesis that its workplace does not require any major changes to protect the health of its workers, since major changes in the workplace could cost the company a great deal of money.
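The NOAEL calculation described above can be sketched as a one-line formula; the example NOAEL value and function name below are hypothetical, introduced only to illustrate the division by the 1000-fold safety factor cited in the text.

```python
# Sketch of the EPA calculation described in the text: the acceptable level of
# human exposure is derived by dividing the animal NOAEL by a factor of 1000.
# The example NOAEL value is hypothetical.

SAFETY_FACTOR = 1000

def acceptable_human_exposure(noael_mg_per_kg_day: float) -> float:
    """Acceptable human exposure (mg/kg/day) from an animal NOAEL."""
    return noael_mg_per_kg_day / SAFETY_FACTOR

print(acceptable_human_exposure(50.0))  # hypothetical NOAEL of 50 -> 0.05
```

Because the acceptable exposure scales directly with the NOAEL, any upward bias in the animal dosing data translates proportionally into a more permissive exposure limit, which is why the data's integrity matters so much here.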

How should society deal with COIs related to regulation? Disclosure is the first option for dealing with COIs. Researchers who receive money from companies should disclose their funding sources and any financial interests (such as stocks) related to the research. They should make these disclosures to regulatory agencies, scientific journals (if they submit their work for publication), and other interested parties. Researchers should also disclose everything that an independent expert would need to evaluate their work, including their research design, protocol, research methods, data management procedures, and assumptions made for the purposes of data analysis and interpretation. Full disclosure is important so that independent experts can determine whether the research meets appropriate scientific and ethical standards. One cannot know whether a study is flawed if one does not have access to all the information needed to produce the results of the study. Even if the research is not published in a peer reviewed journal, it should still be subjected to rigorous standards of proof. To encourage full disclosure, many biomedical journals have begun to require sponsors of clinical trials to submit information about their trials to a publicly accessible database, or clinical trial registry.[23]

Full disclosure will help to address biases related to research design, data collection, analysis, and interpretation, but what about the funding imbalance? If all of the research related to a particular product is funded by the company that makes the product, biases may still occur. One way to deal with this problem is to use government funding to help level the playing field. Government agencies, such as the NIH, NSF, EPA, and USDA, should fund research on products or activities that are regulated by the government, such as pharmaceuticals, medical devices, foods, and pesticides. Currently, the NIH spends a small portion of its budget on clinical trials designed to compare the safety and efficacy of different treatments that are on the market. The NIH and the EPA sponsor research on the effects of pesticides and other chemicals on human health. Society should ensure that these and other agencies have enough funding to provide independent assessments of products. Since private industry currently funds more than 60% of all research and development (R&D), it is unlikely that the government will be able to match industry's contribution. However, it is still important to generate some data to offset industry data. Having 20% of studies on a particular drug funded by a source other than the manufacturer of the drug is better than having 0% of the studies sponsored by a source other than the manufacturer.[24]

Independent review and funding balancing are two strategies that society can use to manage COIs related to scientific research for regulatory purposes, but are there some COIs so egregious that they should be prohibited? Among the relationships society should consider prohibiting are financial ties between private companies and the agencies that regulate those companies. These ties include individual COIs and institutional COIs. Although many different federal agencies have had financial connections to the companies they regulate, this article will focus on the FDA's relationships with industry. The FDA uses review panels to provide advice on whether the agency should approve a new drug, biologic, or medical device. These panels include scientists from within the FDA and from outside the FDA. In the fall of 2000, an investigation into the FDA's approval process found that at least one committee member had a financial interest directly related to the topic being reviewed in 146 of 159 meetings, and that at least half of the committee members had financial interests directly related to the topic of discussion at 88 meetings. Institutional COIs are also a problem at the FDA. The Prescription Drug User Fee Act (PDUFA) allows the FDA to charge companies fees for reviewing new products. In 2002, fees generated from companies accounted for 14% of the FDA's $1 billion budget. Prior to the adoption of the PDUFA in 1992, all of the FDA's funds came from the federal government.[25]

Although the FDA has rules that require the disclosure of individual COIs, and its institutional COIs are a matter of public record, one might argue that these rules are not enough to deal with this problem. All financial ties between the FDA and the companies it regulates should be prohibited, because any money that the agency receives from these companies could have a significant impact on its decision-making, and these COIs are very difficult to manage. The FDA need not have an anti-industry bias, but neither should it have a pro-industry bias. To protect public health, the FDA should be fair and impartial. Society should also consider banning financial ties between other regulatory agencies, such as the EPA, USDA, OSHA, and DOE, and the companies that they regulate.[26]

One might object that banning these relationships would have a negative impact on the agency by depriving it of valuable resources and funding. Many experts in various biomedical fields have financial ties to industry, and these experts probably would not participate on FDA panels if they were required to sever their relationships with private companies. Moreover, the FDA could lose a large percentage of its budget if it stopped collecting fees from companies. If the federal government does not replace this money, the loss would have an adverse impact on the agency's ability to protect the public's health. While I recognize the importance of taking these practical considerations into account in formulating social policy, it should be possible to recruit experts with minimal ties to industry, especially minimal ties to the very company they are asked to evaluate. And while it would be harsh to take away 14% of the FDA's budget in one year, it should be possible to gradually wean the FDA from its dependence on fees from companies.

Having taken a brief look at COIs in scientific research related to regulation, we can now consider COIs in scientific research related to litigation. To understand these COIs, it is important to distinguish between two types of research: 1) research conducted while litigation is underway, and 2) research conducted in anticipation of future litigation.[27] For example, suppose that a company manufactures an artificial heart valve and several hundred customers file a class action lawsuit against the company, claiming that they have had strokes as a result of using the valve. When the litigation begins, the defense and the plaintiffs could both hire scientists to conduct research on the safety of the valve. Or the company could hire experts to conduct research on the safety of its product before any litigation begins, in anticipation of potential future litigation. Although some commentators, such as Judge Kozinski in Daubert v. Merrell-Dow Pharmaceuticals, have maintained that research prepared while litigation is underway is less trustworthy than research prepared prior to litigation, COIs can arise in both contexts if scientists are receiving money that has a significant impact on how they conduct research or provide testimony.[28]

COIs related to research conducted in response to or in anticipation of litigation can bias scientific judgment and decision-making in at least two ways. First, since researchers will receive their money from one side of the litigation, they may feel that they have an obligation to help that side win its case, or they may feel more sympathetic to that side.[29] Second, researchers may be interested in receiving funding from other litigants in the future, which might depend, in part, on how they perform in the current litigation. For example, if a researcher develops a reputation for providing evidence that a particular type of product is safe, he or she may be paid in the future by manufacturers of that product to help defend them against lawsuits. Conversely, if a researcher develops a reputation for providing evidence that a type of product is unsafe, he or she may receive money from people suing the manufacturers of the product.

How should society deal with COIs related to litigation? Once again, disclosure is the first and most important option. Financial relationships between parties and experts should be disclosed, including funding for research projects and fees for consultation and testimony. Additionally, experts should disclose all the information that an independent scientist would need to evaluate their research and testimony, including their research design, methods, data, and analyses. The U.S. legal system, like many others, already allows parties to discover this information through cross-examination of witnesses.[30] Attorneys should take advantage of their right to cross-examine witnesses and uncover all that needs to be disclosed, so that they can critique the arguments and assumptions of opposing experts. Ultimately, the trier of fact (the jury or a judge) will be able to weigh these different opinions in deciding the outcome of the case.

Are there any financial relationships so problematic that they should be prohibited? The most egregious arrangements are those in which compensation is based on generating specific research results or providing a specific type of testimony, or is contingent on the outcome of the case. Such arrangements would have very significant effects on scientists and would be difficult to manage. Since it is virtually impossible to know how an experiment will turn out before it happens, quid-pro-quo arrangements would almost compel scientists to manipulate research design, data, or analysis to satisfy the demands of clients. Fortunately, these quid-pro-quo forms of compensation would violate the American Bar Association's model rules and might be considered bribery in some circumstances.[31] All countries and jurisdictions should prohibit quid-pro-quo compensation for experts. Other arrangements, such as payments based on time and effort, are not as problematic as quid-pro-quo arrangements, since they do not link compensation to any specific research result, testimony, or legal outcome.

6. Conclusion

Conflicts of interest are a common ethical concern in scientific research, especially research related to regulation or litigation. Since millions of dollars may hinge on the outcome of a regulatory decision or a lawsuit, financial pressures can have a significant impact on scientific decision-making and judgment. To minimize the impact of COIs in regulation or litigation, all aspects of research, including experimental design, data, methods, and financial interests, should be disclosed to the relevant parties. Disclosure can help overcome potential biases by providing independent experts with the information they require to evaluate the research. When disclosure does not sufficiently address the ethical concerns, stricter measures, such as conflict management or prohibition, may be appropriate. Government agencies and the courts should take appropriate steps to protect regulatory and legal processes from the adverse impacts of COIs related to scientific research. To maximize fairness and impartiality in regulatory decision-making, there should be no financial ties between regulatory agencies (or their employees) and the companies they regulate. In litigation, there should be no quid-pro-quo financial arrangements between researchers and the litigating parties.

Acknowledgments

This research was supported by the intramural program of the National Institute of Environmental Health Sciences, National Institutes of Health. It does not represent the views of the National Institute of Environmental Health Sciences or the National Institutes of Health. An earlier version of the manuscript was presented at the Coronado Conference III: Truth and Advocacy: The Quality and Nature of Litigation and Regulatory Science, San Diego, CA, March 10, 2006.

Notes

1. Resnik D. The Price of Truth: How Money Affects the Norms of Science. New York: Oxford University Press; 2007.

2. Boden L, Ozonoff D. Litigation-Generated Science: Why Should We Care? Paper presented at the Coronado Conference III: Truth and Advocacy: The Quality and Nature of Litigation and Regulatory Science; San Diego, CA. March 9, 2006.

3. Risinger D, Saks M. Criminal science: litigation-driven research in the criminal justice system. Paper presented at the Coronado Conference III: Truth and Advocacy: The Quality and Nature of Litigation and Regulatory Science; San Diego, CA. March 10, 2006.

4. Shamoo A, Resnik D. Responsible Conduct of Research. New York: Oxford University Press; 2003.

5. Rudner R. The scientist qua scientist makes value judgments. Philosophy of Science. 1953;20:1–6.

6. One might object that it is impossible to estimate the probability that a COI will cause a researcher to violate his or her legal or ethical duties because there are no relevant data pertaining to this estimation. Granted, it may not be possible to calculate probabilities based on the observed frequencies of researchers violating their duties as a result of COIs, but one can still use Bayesian methods to calculate probabilities. On the Bayesian (or subjective) approach to probability, a probability is an educated guess, in light of the best available evidence. See Howson C, Urbach P. Scientific Reasoning: The Bayesian Approach. 2nd ed. New York: Open Court; 1993.

7. Association of American Medical Colleges (AAMC). Guidelines for dealing with conflicts of commitment and conflicts of interest in research. Academic Medicine. 1990;65:491.

8. Resnik D, Shamoo A. Conflicts of interest and the university. Accountability in Research. 2002;9:45–64.

9. Krimsky S. Science in the Private Interest. Lanham, MD: Rowman and Littlefield; 2003.

10. Resnik, Note 1.

11. Goozner M. The $800 Million Pill. Berkeley, CA: University of California Press; 2004.

12. Resnik, Note 1.

13. Shamoo and Resnik, Note 4.

14. Resnik, Note 1.

15. Dickersin K, Rennie D. Registering clinical trials. Journal of the American Medical Association. 2003;290:516–23.

16. Resnik, Note 1.

17. De Angelis C. Conflict of interest and the public trust. Journal of the American Medical Association. 2000;284:2237–38.

18. Shamoo and Resnik, Note 4.

19. Morin K, Rakatansky H, Riddick F, Morse L, O'Bannon J, Goldrich M, Ray P, Weiss M, Sade R, Spillman M. Managing conflicts of interest in the conduct of clinical trials. Journal of the American Medical Association. 2002;287:78–84.

20. Resnik, Note 1.

21. Henry C, Conrad J. Scientific and legal perspectives on the use of science in regulatory activities. Paper presented at the Coronado Conference III: Truth and Advocacy: The Quality and Nature of Litigation and Regulatory Science; San Diego, CA. March 10, 2006.

22. National Research Council. Intentional Human Dosing Studies for EPA Regulatory Purposes: Scientific and Ethical Issues. Washington, DC: National Academy Press; 2004.

23. De Angelis C, Drazen J, Frizelle F, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke J, Schroeder T, Sox H, Van Der Weyden M. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. New England Journal of Medicine. 2004;351:1250–51.

24. Resnik, Note 1.

25. Krimsky, Note 9.

26. It is worth noting that the International Agency for Research on Cancer (IARC), which evaluates the carcinogenicity of industrial chemicals for the World Health Organization, adopted strict COI guidelines in 2004 in response to growing concerns about industry influence over its research and deliberations. See Cogliano V, Baan R, Straif K, Grosse Y, Secretan M, Ghissassi F, Kleihues P. The science and practice of carcinogen evaluation. Environmental Health Perspectives. 2004;112:1269–74.

27. One might also distinguish between research developed for civil litigation and research developed for criminal litigation. This article will examine COIs in civil litigation, not criminal litigation, but it will note that forensic scientists can have COIs resulting from their close financial and personal connections to the law enforcement system. For further discussion, see Boden and Ozonoff, Note 2; Risinger and Saks, Note 3.

28. Daubert v. Merrell-Dow Pharmaceuticals. 43 F. 3d 1311 at 1317 (9th Cir. 1994).

29. Kipnis K. Confessions of an expert ethics witness. The Journal of Medicine and Philosophy. 1997;22:325–343.

30. Resnik D. Punishing medical experts for unethical testimony: a step in the right direction or a step too far? Journal of Philosophy, Science, and Law. 2004. Available at: www.psljournal.com/archives/all/punishing.cfm [Accessed: March 30, 2007].

31. Resnik, Note 30.
