If you have been following our posts on the engineering reports in the Hurricane Sandy claims, you know that homeowners had coverage wrongfully denied, and that both FEMA and the courts are calling for transparency and an evaluation of draft engineering reports.
Check out our prior posts.
We also previously posted Prevent Insurance Defense Counsel From Presenting Junk Science To The Jury. I was discussing with a group of attorneys how often we have to challenge the expert opinions presented by insurance companies, both in deposition and before the courts.
In my research (thanks, Fred Cunningham), I found a great resource, available for free, called the Reference Manual on Scientific Evidence, published by the National Academies Press in Washington, D.C., by the Committee on Science, Technology, and Law of the Policy and Global Affairs division.
You can read the entire manual here.
In a chapter by David Goodstein, How Science Works, the myth-and-fact excerpt below can lend some perspective on “scientific reports” and offer guidance for preventing the presentation of junk science. Understanding these facts will help policyholders understand how a scientist or engineer hired by the insurance company can provide such off-base and contrary opinions. Maybe your claim doesn’t have the same problems that are arising in the Sandy flood losses, but consider these facts nonetheless.
Myth: Scientists must have open minds, being ready to discard old ideas in favor of new ones.
Fact: Because science is an adversarial process through which each idea deserves the most vigorous possible defense, it is useful for the successful progress of science that scientists tenaciously cling to their own ideas, even in the face of contrary evidence.
Myth: The institution of peer review assures that all published papers are sound and dependable.
Fact: Peer review generally will catch something that is completely out of step with majority thinking at the time, but it is practically useless for catching outright fraud, and it is not very good at dealing with truly novel ideas. Peer review mostly assures that all papers follow the current paradigm (see comments on Kuhn, above). It certainly does not ensure that the work has been fully vetted in terms of the data analysis and the proper application of research methods.
Myth: Science must be an open book. For example, every new experiment must be described so completely that any other scientist can reproduce it.
Fact: There is a very large component of skill in making cutting-edge experiments work. Often, the only way to import a new technique into a laboratory is to hire someone (usually a postdoctoral fellow) who has already made it work elsewhere. Nonetheless, scientists have a solemn responsibility to describe the methods they use as fully and accurately as possible. And, eventually, the skill will be acquired by enough people to make the new technique commonplace.
Myth: When a new theory comes along, the scientist’s duty is to falsify it.
Fact: When a new theory comes along, the scientist’s instinct is to verify it. When a theory is new, the effect of a decisive experiment that shows it to be wrong is that both the theory and the experiment are in most cases quickly forgotten. This result leads to no progress for anybody in the reward system. Only when a theory is well established and widely accepted does it pay off to prove that it is wrong.
Myth: University-based research is pure and free of conflicts of interest.
Fact: The Bayh-Dole Act of the early 1980s permits universities to patent the
results of research supported by the federal government. Many universities have become adept at obtaining such patents. In many cases this raises conflict-of-interest problems when a university’s interest in pursuing knowledge comes into conflict with its need for revenue. This is an area that has generated considerable scrutiny. For instance, the recent Institute of Medicine report Conflict of Interest in Medical Research, Education, and Practice sheds light on the changing dimensions of conflicts of interest associated with growing interdisciplinary collaborations between individuals, universities, and industry, especially in life sciences and biomedical research.
Myth: Real science is easily distinguished from pseudoscience.
Fact: This is what philosophers call the problem of demarcation: One of Popper’s principal motives in proposing his standard of falsifiability was precisely to provide a means of demarcation between real science and impostors. For example, Einstein’s general theory of relativity (with which Popper was deeply impressed) made clear predictions that could certainly be falsified if they were not correct. In contrast, Freud’s theories of psychoanalysis (with which Popper was far less impressed) could never be proven wrong. Thus, to Popper, relativity was science but psychoanalysis was not.
Real scientists do not behave as Popper says they should, and there is another problem with Popper’s criterion (or indeed any other criterion) for demarcation: Would-be scientists read books too. If it becomes widely accepted (and to some extent it has) that falsifiable predictions are the signature of real science, then pretenders to the throne of science will make falsifiable predictions too. There is no simple, mechanical criterion for distinguishing real science from something that is not real science. That certainly does not mean, however, that the job cannot be done. As I discuss below, the Supreme Court, in the Daubert decision, has made a respectable stab at showing how to do it.