Last Week Tonight with John Oliver: Scientific Studies


antihelten

Golden Member
Feb 2, 2012
1,764
274
126
You're missing my point. You're describing the classic "scientific method" model, where basically everything is done by forming a hypothesis and then using prospective experimentation and controls such as double-blind testing to test the hypothesis, verify causality, and confirm that it's repeatable. If instead we pretty much exclusively use retrospective tools like data mining and predictive analytics, doesn't that flip the entire foundation of the scientific method, since there's no inherent need to establish a hypothesis or causality to get useful results? Indeed, the entire concept of "science" may bifurcate, like what happened with neurology and psychology, where one primarily concerns itself with the physical brain and nervous system and the other with the more intangible and philosophical "mind" and mental health.

The scientific method does not mandate that your experiments should be prospective or double blinded, nor does it mandate that you should be able to verify causality (in fact, causality most often cannot be verified, only made probable). Replication is of course mandatory.

All of the above things are of course desirable (where applicable), since they add to the power of your study, but they are not mandatory to the scientific method.
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,606
166
111
www.slatebrookfarm.com
I would add to this that published studies are often behind journals' pay-to-read firewalls, and it's impossible for the lay public to get access for a reasonable fee.

What I would like to see is the major media collaborating on funding a site that provided more comprehensive analyses of important studies, addressed to laymen. That way, a newspaper could still provide its own short write-up along with a link to much more in-depth articles.
Worse yet are some of the excessive fees to read these journals. If I recall correctly, a lot of these journals have been gobbled up by larger companies. They're making fistfuls of dollars in profits - no one gets paid for the journal articles, and no one gets paid for the peer review. That's a system that needs to be fixed - free market should not apply; they're really just taking advantage of academia.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,425
8,388
126
The sad part is that nothing will change in light of this, though.

The scientific method has changed. Instead of making an assertion, hypothesizing, testing, and analyzing your data, we just take a statistical study that is DEFINITELY not biased toward interested parties - then draw some random conclusion that doesn't even correlate with any REAL scientific testing.

It's sad to say, but I don't know when this will stop. It's much like the media - people will do things that get attention rather than do important things.

there's a lot of people doing studies who are bad at statistics. at least, that's what a clinical statistician tells me.
 

WHAMPOM

Diamond Member
Feb 28, 2006
7,628
183
106
A century ago we had a hypothesis that predicted gravitational waves; this year we detected them. That is a damned long-term experiment using the scientific method.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Worse yet are some of the excessive fees to read these journals. If I recall correctly, a lot of these journals have been gobbled up by larger companies. They're making fistfuls of dollars in profits - no one gets paid for the journal articles, and no one gets paid for the peer review. That's a system that needs to be fixed - free market should not apply; they're really just taking advantage of academia.

If I remember correctly, over 50% of all papers are currently published by just five companies (Elsevier, Wiley, and Springer being the biggest three of those five).

Also it's not entirely correct to say that no one gets paid for the journal articles, since the journal itself takes a fee for publishing (usually several thousand dollars), so the publishing companies get paid.

For what it's worth though, the fix for this is already out there in the form of open access journals like PLOS ONE.

there's a lot of people doing studies who are bad at statistics. at least, that's what a clinical statistician tells me.

I would go so far as to say that it is actually the majority of people doing scientific studies who are bad at statistics (or at the very least only have a rudimentary understanding of it).

This is why most larger research groups usually have access to dedicated statisticians.
 
Last edited:

Paratus

Lifer
Jun 4, 2004
16,848
13,784
146
Without wanting to seem flip, should the scientific method change, though? In the era of big data, massive computing power, and current scientific theories that defy not only sound logic but also our means to test them (see most of the field of quantum physics, concepts like "dark energy," or even actual observable things like this new space drive), does it still make sense to go through the entire course of the scientific method? It seems kind of pointless to develop a hypothesis and test it when, for example, one can just consult the entire CDC database, determine which treatment worked best from the millions of case studies on file, and just use that.

Well feel free to test your hypothesis experimentally, draw conclusions, refine your hypothesis and publish your theory. If it's repeatable and shows improvement over the current scientific method then the scientific community will gladly embrace it.
 

glenn1

Lifer
Sep 6, 2000
25,383
1,013
126
Well feel free to test your hypothesis experimentally, draw conclusions, refine your hypothesis and publish your theory. If it's repeatable and shows improvement over the current scientific method then the scientific community will gladly embrace it.

Again, why do you even need a hypothesis? If you had access to millions of (sanitized for privacy) patients' medical records and genomic records, you could provide the proper medical treatment for someone without knowing how or why it worked, by essentially "brute forcing" the correct course of action out of the countless permutations already tried in previous patients. Hell, needing to develop a hypothesis for the causal mechanism behind a particular outcome might actually slow down or impede scientific progress in the future, since understanding the "why" of something might become the least important part of science.
 

zinfamous

No Lifer
Jul 12, 2006
110,819
29,571
146
The sad part is that nothing will change in light of this, though.

The scientific method has changed. Instead of making an assertion, hypothesizing, testing, and analyzing your data, we just take a statistical study that is DEFINITELY not biased toward interested parties - then draw some random conclusion that doesn't even correlate with any REAL scientific testing.

It's sad to say, but I don't know when this will stop. It's much like the media - people will do things that get attention rather than do important things.

no.

The case here involves hijacking studies to advertise them for things they never claimed, and making rash public statements, based on nothing, that might even be dangerous.

In some cases, you do have fields of "semi-science" masquerading as real science, often relayed to the public with a much higher degree of validity than they deserve. Behavioral science, for one, which is probably where you will find the greatest rate of p-hacking. Not to discount the entire field, but studies directed by salespeople interested in selling product, based on easily-manipulated survey data, deserve a salt-mine's worth of skepticism.
 

Paratus

Lifer
Jun 4, 2004
16,848
13,784
146
Again, why do you even need a hypothesis? If you had access to millions of (sanitized for privacy) patients' medical records and genomic records, you could provide the proper medical treatment for someone without knowing how or why it worked, by essentially "brute forcing" the correct course of action out of the countless permutations already tried in previous patients. Hell, needing to develop a hypothesis for the causal mechanism behind a particular outcome might actually slow down or impede scientific progress in the future, since understanding the "why" of something might become the least important part of science.

A couple of great hypotheses for testing!

  • H1: In the presence of large, detailed medical records, brute-forced statistical analysis may yield novel treatments more quickly than standard approaches
  • H2: In the presence of large numbers of detailed records, statistical analysis may find causal linkages faster than hypothesizing and testing for them

In case it's not obvious: it is very difficult to propose an idea about how something in the world may work without it being a hypothesis. The hard part is devising an appropriate experiment to adequately test that hypothesis.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Again, why do you even need a hypothesis? If you had access to millions of (sanitized for privacy) patients' medical records and genomic records, you could provide the proper medical treatment for someone without knowing how or why it worked, by essentially "brute forcing" the correct course of action out of the countless permutations already tried in previous patients. Hell, needing to develop a hypothesis for the causal mechanism behind a particular outcome might actually slow down or impede scientific progress in the future, since understanding the "why" of something might become the least important part of science.

If you develop a method for "brute forcing" a treatment out of the available data, then using said method wouldn't be science as such; it would simply be part of a given treatment regime. Determining that this method is even possible in the first place is the science part, and this is obviously where the hypothesis testing comes into play.

And again the scientific method does not mandate finding or validating the causative mechanism, and as such it isn't necessary to develop a hypothesis for it (although it is quite often very informative to at least try).
 

zinfamous

No Lifer
Jul 12, 2006
110,819
29,571
146
Again, why do you even need a hypothesis? If you had access to millions of (sanitized for privacy) patients' medical records and genomic records, you could provide the proper medical treatment for someone without knowing how or why it worked, by essentially "brute forcing" the correct course of action out of the countless permutations already tried in previous patients. Hell, needing to develop a hypothesis for the causal mechanism behind a particular outcome might actually slow down or impede scientific progress in the future, since understanding the "why" of something might become the least important part of science.

And why do you assume this is actually the case? More than one person has already informed you that studies proceed in various ways--and many times essentially as you suggested. A hypothesis isn't always an explicit question. It is sometimes a broad target that points toward a region (a local area of the genome, for example) that might be of particular interest to that specific research group.

NCBI has open resources of thousands and thousands of genomes where anyone can search and find hits for their particular region of interest. If their investigations lead them to certain hits, they might begin targeting those hits with various tools that have been designed and used for the last couple of decades--say knockouts or knockdowns--shutting off the function of that gene to see what happens.

No one that I know wastes their time trying to come up with a "why" before they find a significant effect. That is often the far more complex question, and once you begin down that path, it can lead toward many branches and perhaps several dead ends that end up being irrelevant to that particular project. Nevertheless, data tends to get published in that process, and if some research group somewhere else finds tangential relevance that never applied to the first group, you might end up with a different class of "whys" that are now more useful in a different field.

You assume this is a process that isn't already in play, and that if it were, we could somehow develop medicine for humans by cutting out some perceived bullshit in the process. First, that is less an issue with the progress of science than with the standards and ethics that are necessarily in place to prevent open experimentation on human subjects. In some cases, I do wish we could take more risks in this way (it's not like Pasteur ever gave a fuck about that), but it's a very different world of understanding now. Compared to the relatively simple mechanics of how vaccines work, genomic tampering is more and more of a minefield of unknown consequences the more we learn--medicine simply can't work that way.

I think the public is most often confused about how slow the realities of advancement are because they are led to believe, through popular entertainment, that this is a quick process that takes one singular brilliant mind a weeks-long montage in a poorly-lit-yet-highly-stylish gun-metal-colored lab to unlock all the mysteries of an impossible disease and perfect the perfect pill.

That simply does not happen. The most effective cures that we are aiming for these days--direct genetic targeting, either as a replacement/blocking or as a specific target for medicine delivery--are never going to be a key + slot = open door approach. It's more like key+slot = dozen open doors, 3 of which might be promising, but also some scattered hidden booby traps like spike and flame traps in that same room.

Removing those booby traps and eliminating the false doors within the genome (which is more accurately an issue with the transcriptome, the epigenome, and hosts of siRNAs, piRNAs, blah blah blah) takes a very long time.

It's frustrating, too. Biology simply does not happen on paper. Living systems behave quite differently from non-living force-force interactions that are almost completely driven by pure math and, well, can and do happen on paper.
 

interchange

Diamond Member
Oct 10, 1999
8,022
2,872
136
A couple of great hypotheses for testing!

  • H1: In the presence of large, detailed medical records, brute-forced statistical analysis may yield novel treatments more quickly than standard approaches
  • H2: In the presence of large numbers of detailed records, statistical analysis may find causal linkages faster than hypothesizing and testing for them

In case it's not obvious: it is very difficult to propose an idea about how something in the world may work without it being a hypothesis. The hard part is devising an appropriate experiment to adequately test that hypothesis.

Study design is hard, but perhaps not the barrier you think. The better your study design, the more $$$ it costs. Huge barrier.

RE: your approach, there are countries with socialized medicine that have been collecting data for years, and they generate hypotheses and test them with cross-sectional analyses and cohort studies (usually retrospective). These are useful, but they are near the bottom of the evidence pyramid for validity, because they are subject to all sorts of biases that you can't control for.

If you were to "brute force it" by having a computer generate random hypotheses and test them, you would have very little validity. What we do is fit our data to a model (e.g. a normal distribution of the outcome) and test the likelihood that we achieved this result by chance. To decide whether to reject the idea that the data arose through random chance, we generate a p-value and compare it to a significance threshold (alpha). Usually we use a p-value of .05 for the primary outcome of a study. This means that if we find a difference, we consider it statistically significant when there is less than a 5% chance of seeing such a result if the outcome really were random according to our model.

So... loosely speaking, a 95% chance of the result being real, in layman's terms.

So basically, if you do what you suggest, what happens when you test 1000 random hypotheses? About 50 of them will show a result that you believe to be true merely by chance.
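That arithmetic is easy to check empirically. Here's a minimal Python sketch (an illustration of the statistics, not anyone's actual study pipeline): each simulated "experiment" compares two groups drawn from the same distribution, so every significant result is a false positive by construction.

```python
import math
import random

random.seed(42)

def null_experiment_p(n=50):
    """Run one 'experiment': compare two groups drawn from the SAME
    distribution, so any 'significant' difference is pure chance.
    Returns a two-sided p-value from a simple z-test (the true
    variance of 1 is treated as known)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

p_values = [null_experiment_p() for _ in range(1000)]
false_positives = sum(p < 0.05 for p in p_values)
print(false_positives)  # hovers around 50, i.e. ~5% of 1000
```

Run it a few times with different seeds and the count stays near 50: the 5% threshold guarantees that rate of flukes when nothing real is going on.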

Not saying that this approach is useless. There are various ways to adjust your p-value when running multiple comparisons. Unfortunately, this means that such a difference would likely have to be fairly large for you to believe the result, and you'd get few hits from doing that.

But that is not the value we see in your approach. The value of your approach is in generating hypotheses, for which we can then design a study, preferably a randomized controlled trial, to test. And then we should repeat that study. And then we should repeat it again. And then we should perform a systematic review and meta-analysis. And then, maybe, we can be pretty sure we know what we're talking about.
 
Last edited:

Paratus

Lifer
Jun 4, 2004
16,848
13,784
146
Study design is hard, but perhaps not the barrier you think. The better your study design, the more $$$ it costs. Huge barrier.

RE: your approach, there are countries with socialized medicine that have been collecting data for years, and they generate hypotheses and test them with cross-sectional analyses and cohort studies (usually retrospective). These are useful, but they are near the bottom of the evidence pyramid for validity, because they are subject to all sorts of biases that you can't control for.

If you were to "brute force it" by having a computer generate random hypotheses and test them, you would have very little validity. What we do is fit our data to a model (e.g. a normal distribution of the outcome) and test the likelihood that we achieved this result by chance. To decide whether to reject the idea that the data arose through random chance, we generate a p-value and compare it to a significance threshold (alpha). Usually we use a p-value of .05 for the primary outcome of a study. This means that if we find a difference, we consider it statistically significant when there is less than a 5% chance of seeing such a result if the outcome really were random according to our model.

So... loosely speaking, a 95% chance of the result being real, in layman's terms.

So basically, if you do what you suggest, what happens when you test 1000 random hypotheses? About 50 of them will show a result that you believe to be true merely by chance.

Not saying that this approach is useless. There are various ways to adjust your p-value when running multiple comparisons. Unfortunately, this means that such a difference would likely have to be fairly large for you to believe the result, and you'd get few hits from doing that.

But that is not the value we see in your approach. The value of your approach is in generating hypotheses, for which we can then design a study, preferably a randomized controlled trial, to test. And then we should repeat that study. And then we should repeat it again. And then we should perform a systematic review and meta-analysis. And then, maybe, we can be pretty sure we know what we're talking about.

Well, it's not really my hypothesis. In a roundabout way I was trying to show Glenn that what he was proposing doesn't actually remove hypothesizing, and that he was basically hypothesizing that removing hypothesizing would speed up science.
 

interchange

Diamond Member
Oct 10, 1999
8,022
2,872
136
Well, it's not really my hypothesis. In a roundabout way I was trying to show Glenn that what he was proposing doesn't actually remove hypothesizing, and that he was basically hypothesizing that removing hypothesizing would speed up science.



Yeah. Anyway, you should be exceedingly skeptical of the quality of studies that report on multiple outcomes. If you keep pulling cards out of a deck at random, you'll pull the Ace of Spades once in a while.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
So basically if you do what you suggest, then what happens if you test 1000 random hypotheses? 50 of them will find a result that you believe to be true merely by chance.

No, 50 of them will merely appear to be true (significant) by chance, since anyone doing studies like this would know how to correct for multiple comparisons.

I don't know why you would assume that people would use faulty statistical methods for studies like this (I know I said earlier that the majority of scientists only have a rudimentary understanding of statistics, but correcting for multiple comparisons falls within the realm of rudimentary imho).
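To make the correction concrete, here's a toy Python sketch (an illustration, not a prescription for real study design): under the null hypothesis, p-values are uniform on [0, 1], so a naive 0.05 cutoff flags about 5% of tests, while the classic Bonferroni correction divides alpha by the number of comparisons and all but eliminates the flukes.

```python
import random

random.seed(1)

m = 1000      # number of hypotheses tested simultaneously
alpha = 0.05

# Under the null hypothesis, p-values are uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(m)]

naive_hits = sum(p < alpha for p in p_values)           # ~5% by chance alone
bonferroni_hits = sum(p < alpha / m for p in p_values)  # corrected threshold

print(naive_hits, bonferroni_hits)
```

The trade-off interchange mentions shows up directly: the corrected threshold here is 0.00005, so only a very strong (or very lucky) signal survives it.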
 

interchange

Diamond Member
Oct 10, 1999
8,022
2,872
136
No, 50 of them will appear to be true (significant) by chance, since anyone doing studies like this would now how to correct for multiple comparisons.

I don't know why you would assume that people would use faulty statistical methods for studies like this (I know that I said earlier, that the majority of scientist only have a rudimentary understanding of statistics, but correcting for multiple comparisons falls within the realm of rudimentary imho).

If you read my post, I'm explaining to a non-scientist why you need to correct for multiple comparisons.

And while what you say is true of good epidemiologic studies and things like GWAS, etc., which often have large collections of data and a very large number of comparisons, it does not necessarily hold true for other types of studies. For example, most clinical trials publish data on a handful of secondary outcomes, and only some of them correct for multiple comparisons. While such a problem is obvious to any researcher reading the literature, and such data should only be interpreted as a hypothesis for future study, this thread is not about that. This thread is about the media.

And the media will report on anything they find to be of interest to the public. So they will pull from crappy journals, even ones that aren't peer reviewed and are essentially pay-for-publish, and they will not know of the need to correct for multiple comparisons. I've even seen some fishy headlines where they reported that a hypothesis was confirmed (for example, by reporting that an intervention had success in 87% of patients, while leaving out the 74% placebo response, and certainly not understanding a p of .37) when in fact it very much was not.
 

Subyman

Moderator, VC&G Forum
Mar 18, 2005
7,876
32
86
Well, it's not really my hypothesis. In a roundabout way I was trying to show Glenn that what he was proposing doesn't actually remove hypothesizing, and that he was basically hypothesizing that removing hypothesizing would speed up science.

It's kind of funny what he is proposing. It's like he wants people to randomly do things until something happens. This actually happens all the time; accidental discovery is a cornerstone of science. But guess what: once they accidentally discover something, they then... set up a hypothesis, test it, and draw conclusions to definitively prove what they stumbled into...

If you are looking for anything then you've got a hypothesis :thumbsup:

Its like saying "I'm going to turn the problem solving world upside down! By not identifying a problem!"
 