
according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.
p of 0.047654 means the probability that the observed effect is due to chance is 4.8%
p of 0.1234 means the probability that the observed effect is due to chance is 12.3%
jmh:
freitasm:
Damn, even publication sometimes means nothing. The vaccine-autism study was published in The Lancet and then retracted when they found out the author was trying to hawk his own "non-vaccine vaccine".
His patent was for a measles vaccine - at that time he wasn't antivaccine, although he might be now.
From https://en.wikipedia.org/wiki/MMR_vaccine_controversy
The MMR vaccine controversy started with the 1998 publication of a fraudulent research paper in the medical journal The Lancet that lent support to the later discredited claim that colitis and autism spectrum disorders are linked to the combined measles, mumps, and rubella (MMR) vaccine.^{[1]} Aspects of the media coverage were criticized for naïve reporting and lending undue credibility to the architect of the fraud, Andrew Wakefield.
Investigations by Sunday Times journalist Brian Deer reported that Andrew Wakefield, the author of the original research paper, had multiple undeclared conflicts of interest,^{[2]}^{[3]} had manipulated evidence,^{[4]} and had broken other ethical codes. The Lancet paper was partially retracted in 2004, and fully retracted in 2010, when The Lancet's editor-in-chief Richard Horton described it as "utterly false" and said that the journal had been "deceived."^{[5]} Wakefield was found guilty by the General Medical Council of serious professional misconduct in May 2010 and was struck off the Medical Register, meaning he could no longer practice as a doctor in the UK.^{[6]} In 2011, Deer provided further information on Wakefield's improper research practices to the British medical journal, BMJ, which in a signed editorial described the original paper as fraudulent.^{[7]}^{[8]} The scientific consensus is that the MMR vaccine has no link to the development of autism, and that this vaccine's benefits greatly outweigh its risks.
Following the initial claims in 1998, multiple large epidemiological studies were undertaken. Reviews of the evidence by the Centers for Disease Control and Prevention,^{[9]} the American Academy of Pediatrics, the Institute of Medicine of the US National Academy of Sciences,^{[10]} the UK National Health Service,^{[11]} and the Cochrane Library^{[12]} all found no link between the MMR vaccine and autism. While the Cochrane review expressed a need for improved design and reporting of safety outcomes in MMR vaccine studies, it concluded that the evidence of the safety and effectiveness of MMR in the prevention of diseases that still carry a heavy burden of morbidity and mortality justified its global use, and that the lack of confidence in the vaccine had damaged public health.^{[12]} A special court convened in the United States to review claims under the National Vaccine Injury Compensation Program rejected compensation claims from parents of autistic children.^{[13]}^{[14]}
The claims in Wakefield's 1998 The Lancet article were widely reported;^{[15]} vaccination rates in the UK and Ireland dropped sharply,^{[16]} which was followed by significantly increased incidence of measles and mumps, resulting in deaths and severe and permanent injuries.^{[17]} Physicians, medical journals, and editors^{[18]}^{[19]}^{[20]}^{[21]}^{[22]} have described Wakefield's actions as fraudulent and tied them to epidemics and deaths,^{[23]}^{[24]} and a 2011 journal article described the vaccine–autism connection as "perhaps the most damaging medical hoax of the last 100 years".^{[25]}
The first paragraph is important. As I said, he had a conflict of interest because he was trying to prove the existing vaccine had problems, while at the same time trying to create a commercial environment for his own version of a vaccine without those problems.
Involuntary autocorrect in operation on mobile device. Apologies in advance.
Rikkitic:
At the risk of hijacking this thread, I would be very interested to know what it is that makes ordinary people so suspicious of science yet so quick to embrace magic. I happen to have several friends who unfortunately fall into this category. They are kind, intelligent people, which is why they are friends, but they seem convinced that scientific research, including the research that has led to most of the home comforts they take for granted, is somehow tainted and part of a vast conspiracy. Yet they have no problem accepting homoeopathy, astrology, Tarot, and in one case, at least, even the existence of angels! Or to be correct, this friend, regrettably a Kiwi, seems to believe in someone who claims to talk to angels, who tell him things like global warming is just a myth so don't worry about it. Apparently he is very popular in the States and has made a ton of money writing books. I just don't get it.
Without wanting to derail the thread: some people think of science as some weird black magic, perhaps. You can also have good science, bad science, and black magic - as in theoretical physics. E.g. there is a physics field built around "our universe is part of a computer simulation - probably run by some superior alien beings". I'm sure there are "scientific studies" on homeopathy - you just need to find them.
The trick is to separate good science from bad science, and how it applies to the real world.
For example, you could have a perfect study showing that cellphone radiation causes cancer in humans - but only if, and only after, 80 years of usage at 20 hours a day.
zenourn:
according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.
p of 0.047654 means the probability that the observed effect is due to chance is 4.8%
p of 0.1234 means the probability that the observed effect is due to chance is 12.3%
No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.
Same thing.
Edit: in fact you must be wrong. You only have one null hypothesis in a study, but the study statistics can have hundreds of p-values.
zenourn:
according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.
p of 0.047654 means the probability that the observed effect is due to chance is 4.8%
p of 0.1234 means the probability that the observed effect is due to chance is 12.3%
No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.
I've read over the paper and the analysis is very flawed. If they stick with a frequentist analysis they at least need some form of multiple comparison correction, which makes everything nonsignificant. Ideally they need to fit a single Bayesian regression model with sensible priors, in which case all effects would also disappear (i.e., you would get a high probability that the effect size is close to zero).
Results here are indistinguishable from noise. However, there are plenty of journals now that will publish this type of flawed research for a fee.
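To make the "multiple comparison correction" point concrete, here's a minimal sketch of Holm's step-down procedure (the p-values below are made up for illustration, not taken from the paper):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down correction: compare the k-th smallest p-value
    (0-indexed) against alpha / (m - k); once one test fails, all
    larger p-values fail too. Returns a flag per original position."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            significant[i] = True
        else:
            break  # remaining (larger) p-values cannot pass either
    return significant

# Hypothetical p-values: a raw p of 0.03 that looks "significant" on
# its own no longer survives once it is one test among ten.
print(holm_bonferroni([0.001, 0.03, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]))
```

With 80 comparisons, as in the paper, the per-test threshold shrinks further still, which is why applying any such correction wipes out borderline results.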
I'm glad that's all cleared up then.
Only one of the peer reviewers had that criticism (results indistinguishable from noise).
Speaking of noise, appendix D contains historical control data. It's for male rats only - why not the female rat historical control data as well? (The historical % incidence is used, but no breakdown is shown.)
The male rat control data shows two outliers across both data sets. For glioma etc. (2% across 10 studies / 550 rats) there's an 8% in one study (4/50). For schwannoma (1.3% across 13 studies / 699 rats) there's a 6% (3/50). That in itself isn't perhaps so interesting - except the outlier results are from the same trial, commencing Feb 17 2011. I don't think anybody involved would suggest that environment, or genetics, or some unknown "something else" is not going on with the controls - and that's perhaps why there's no discussion, as it's just a generally accepted fact that there's a wide range of variance between trials. Take out the historical trial averages, and you've got a small trial with very significant-looking results (control incidence is zero). Sure - it's a mess, but to not release the paper would have been a mistake IMO.
There are comments in the peer reviews about reliance on historical controls, suggesting larger studies are needed.
joker97:
zenourn:
according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.
p of 0.047654 means the probability that the observed effect is due to chance is 4.8%
p of 0.1234 means the probability that the observed effect is due to chance is 12.3%
No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.
Same thing.
Edit: in fact you must be wrong. You only have one null hypothesis in a study, but the study statistics can have hundreds of p-values.
Definitely not the same thing! If stated as "p of 0.047654 means there is a 4.8% probability of observing by chance an effect equal to or greater than that seen in this particular sample" it is getting better (although several technicalities still need to be considered). The probability that the observed effect is due to chance is actually very close to 100%, as if you repeat the procedure you're very unlikely to get the same observed effect.
Every single p-value has an associated null hypothesis (in this case one example is "The difference in the rate of malignant glioma in male rats exposed to 1.5 W/kg GSM-modulated RFR compared to male rats not exposed to radiation is zero").
I'm a mathematician and work with frequentist and Bayesian statistical models in a medical area on a daily basis ;)
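The distinction can be made concrete with a quick simulation (my own illustrative sketch using a crude two-sample z-test, nothing from the paper): when the null hypothesis is true by construction, p-values below 0.05 still turn up in roughly 5% of experiments. That 5% describes the test's behaviour under the null, not the probability that any one observed effect "is due to chance".

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(42)

def p_value(a, b):
    # Two-sided two-sample z-test; crude, but fine for illustration.
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Both groups are drawn from the SAME distribution, so the null
# hypothesis is true by construction and every "effect" is chance.
sims, hits = 2000, 0
for _ in range(sims):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if p_value(a, b) < 0.05:
        hits += 1

# Roughly 5% of runs come out "significant" despite no real effect.
print(hits / sims)
</antml>```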
zenourn:
joker97:
zenourn:
according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.
p of 0.047654 means the probability that the observed effect is due to chance is 4.8%
p of 0.1234 means the probability that the observed effect is due to chance is 12.3%
No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.
Same thing.
Edit: in fact you must be wrong. You only have one null hypothesis in a study, but the study statistics can have hundreds of p-values.
Definitely not the same thing! If stated as "p of 0.047654 means there is a 4.8% probability of observing by chance an effect equal to or greater than that seen in this particular sample" it is getting better (although several technicalities still need to be considered). The probability that the observed effect is due to chance is actually very close to 100%, as if you repeat the procedure you're very unlikely to get the same observed effect.
Every single p-value has an associated null hypothesis (in this case one example is "The difference in the rate of malignant glioma in male rats exposed to 1.5 W/kg GSM-modulated RFR compared to male rats not exposed to radiation is zero").
I'm a mathematician and work with frequentist and Bayesian statistical models in a medical area on a daily basis ;)
I guess each mathematician makes their own definition. In some cases, they follow the "due to chance" explanation, like:
https://practice.sph.umich.edu/micphp/epicentral/p_value.php
Fred99:
I'm glad that's all cleared up then.
Only one of the peer reviewers had that criticism (results indistinguishable from noise).
Speaking of noise, appendix D contains historical control data. It's for male rats only - why not the female rat historical control data as well? (The historical % incidence is used, but no breakdown is shown.)
The male rat control data shows two outliers across both data sets. For glioma etc. (2% across 10 studies / 550 rats) there's an 8% in one study (4/50). For schwannoma (1.3% across 13 studies / 699 rats) there's a 6% (3/50). That in itself isn't perhaps so interesting - except the outlier results are from the same trial, commencing Feb 17 2011. I don't think anybody involved would suggest that environment, or genetics, or some unknown "something else" is not going on with the controls - and that's perhaps why there's no discussion, as it's just a generally accepted fact that there's a wide range of variance between trials. Take out the historical trial averages, and you've got a small trial with very significant-looking results (control incidence is zero). Sure - it's a mess, but to not release the paper would have been a mistake IMO.
There are comments in the peer reviews about reliance on historical controls, suggesting larger studies are needed.
When it comes to statistics and peer review, it can often be the blind leading the blind. I review so many papers where I recommend acceptance only after major revisions due to statistical issues, while the other reviewers make no comments about the stats at all.
There are 80 statistical comparisons in this paper, and due to sampling variability, experimenter degrees of freedom, and forking-paths issues you can expect numerous significant results (> 10) that are just noise. They haven't applied any correction for multiple comparisons, as it would make everything nonsignificant - and a positive result is much better for future grant success.
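The arithmetic behind that expectation is easy to check (my own back-of-envelope numbers, assuming 80 independent tests, which the paper's tests are not exactly; forking paths would inflate the count further):

```python
# With 80 tests at alpha = 0.05 and NO real effects anywhere:
alpha, m = 0.05, 80

expected_false_positives = m * alpha       # about 4 spurious hits on average
p_at_least_one = 1 - (1 - alpha) ** m      # chance of at least one spurious hit

print(expected_false_positives)            # roughly 4
print(round(p_at_least_one, 3))            # roughly 0.98
```

So even before any forking-paths effects, a near-certainty of "significant" findings is built into running this many uncorrected comparisons.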
My view is that there is very likely an effect on cancers due to cellphone use (one possible cause is the very minor heating effect of the RF, which will make certain cellular processes go slightly faster). However, this effect is likely to be very tiny, and to accurately estimate its size you'd likely need a sample size in the hundreds of thousands. I can almost guarantee that the cancer risk from sunlight exposure is several orders of magnitude greater.
zenourn:
Fred99:
I'm glad that's all cleared up then.
Only one of the peer reviewers had that criticism (results indistinguishable from noise).
Speaking of noise, appendix D contains historical control data. It's for male rats only - why not the female rat historical control data as well? (The historical % incidence is used, but no breakdown is shown.)
The male rat control data shows two outliers across both data sets. For glioma etc. (2% across 10 studies / 550 rats) there's an 8% in one study (4/50). For schwannoma (1.3% across 13 studies / 699 rats) there's a 6% (3/50). That in itself isn't perhaps so interesting - except the outlier results are from the same trial, commencing Feb 17 2011. I don't think anybody involved would suggest that environment, or genetics, or some unknown "something else" is not going on with the controls - and that's perhaps why there's no discussion, as it's just a generally accepted fact that there's a wide range of variance between trials. Take out the historical trial averages, and you've got a small trial with very significant-looking results (control incidence is zero). Sure - it's a mess, but to not release the paper would have been a mistake IMO.
There are comments in the peer reviews about reliance on historical controls, suggesting larger studies are needed.
When it comes to statistics and peer review, it can often be the blind leading the blind. I review so many papers where I recommend acceptance only after major revisions due to statistical issues, while the other reviewers make no comments about the stats at all.
There are 80 statistical comparisons in this paper, and due to sampling variability, experimenter degrees of freedom, and forking-paths issues you can expect numerous significant results (> 10) that are just noise. They haven't applied any correction for multiple comparisons, as it would make everything nonsignificant - and a positive result is much better for future grant success.
My view is that there is very likely an effect on cancers due to cellphone use (one possible cause is the very minor heating effect of the RF, which will make certain cellular processes go slightly faster). However, this effect is likely to be very tiny, and to accurately estimate its size you'd likely need a sample size in the hundreds of thousands. I can almost guarantee that the cancer risk from sunlight exposure is several orders of magnitude greater.
Future grant access is a moot point - the research was conducted by the NTP, US Dept of Health and Human Services. Rather than competing for funding, it's their job. The NTP associate director was involved; he makes some pertinent comments on the final page. I'm just guessing that, rather than being part of the tinfoil hat brigade, they had a rather high level of initial scepticism - that they'd been tasked to carry out research prompted by the incessant bleating of the tinfoil hat brigade, probably including a few politicians (it is America, after all).
I can't comment at all on the relative risk level. It was a rat study, no doubt expected to show no risk. It appears to have shown some risk. That's the news story - someone will repeat the trials, and we await the mouse trial results. Perhaps also look a bit harder at the epidemiological data - some of it may not be as good as has been assumed. Most importantly, if there is a mechanism, find out WTF it is.
Stuff seems to be convinced - a nice headline thrown in with some of the serious issues they're covering today:
I have to say, if you think about it, anything can cause anything, because we don't really know.
It's like saying Donald Trump could be the next [insert whatever you wish].
I think it is worth reading the response by the Science Media Centre:
