Geekzone: technology news, blogs, forums
zenourn
164 posts

Master Geek

Trusted
DR

  #1561547 29-May-2016 10:02
Send private message



according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.


p of 0.047654 means the probability that the observed effect is due to chance is 4.8%


p of 0.1234 means that probability of the observed effect is due to chance is 12.3%






No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.

I've read over the paper and the analysis is very flawed. If they stick with frequentist analysis they at least need some form of multiple comparison correction, which makes everything non-significant. Ideally they need to fit a single Bayesian regression model with sensible priors, in which case all effects would also disappear (i.e., you would get a high probability that the effect size is close to zero).
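As a rough illustration of the kind of multiple-comparison correction being suggested, here is a minimal sketch using Holm's method on made-up p-values (they are not values from the NTP paper; a full Bayesian regression with sensible priors would need a probabilistic-programming library and is beyond a quick sketch):

# Minimal sketch of a multiple-comparison correction (Holm's method).
# The raw p-values below are hypothetical, not taken from the paper.
from statsmodels.stats.multitest import multipletests

raw_p = [0.0477, 0.1234, 0.031, 0.20, 0.81, 0.012, 0.55, 0.064]  # made up

reject, p_adjusted, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for p, p_adj, sig in zip(raw_p, p_adjusted, reject):
    print(f"raw p = {p:.4f}   Holm-adjusted p = {p_adj:.4f}   significant: {sig}")

With these illustrative values, nothing survives the correction, which is the point being made above.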

Results here are indistinguishable from noise. However, there are plenty of journals now that will publish this type of flawed research for a fee.



freitasm
BDFL - Memuneh
68531 posts

Uber Geek

Administrator
Trusted
Geekzone
Lifetime subscriber

  #1561554 29-May-2016 10:28
Send private message

jmh:

 

freitasm:

 

Damn, even publication sometimes means nothing. The vaccine-autism study was published in The Lancet and then retracted when they found out the author was trying to hawk his own "non-vaccine vaccine".

 

 

His patent was for a measles vaccine - at that time he wasn't anti-vaccine, although he might be now.  

 

 

From https://en.wikipedia.org/wiki/MMR_vaccine_controversy 

 

 

 

The MMR vaccine controversy started with the 1998 publication of a fraudulent research paper in the medical journal The Lancet that lent support to the later discredited claim that colitis and autism spectrum disorders are linked to the combined measles, mumps, and rubella (MMR) vaccine.[1] Aspects of the media coverage were criticized for naïve reporting and lending undue credibility to the architect of the fraud, Andrew Wakefield.

 

Investigations by Sunday Times journalist Brian Deer reported that Andrew Wakefield, the author of the original research paper, had multiple undeclared conflicts of interest,[2][3] had manipulated evidence,[4] and had broken other ethical codes. The Lancet paper was partially retracted in 2004, and fully retracted in 2010, when The Lancet's editor-in-chief Richard Horton described it as "utterly false" and said that the journal had been "deceived."[5] Wakefield was found guilty by the General Medical Council of serious professional misconduct in May 2010 and was struck off the Medical Register, meaning he could no longer practice as a doctor in the UK.[6] In 2011, Deer provided further information on Wakefield's improper research practices to the British medical journal, BMJ, which in a signed editorial described the original paper as fraudulent.[7][8] The scientific consensus is that the MMR vaccine has no link to the development of autism, and that this vaccine's benefits greatly outweigh its risks.

 

Following the initial claims in 1998, multiple large epidemiological studies were undertaken. Reviews of the evidence by the Centers for Disease Control and Prevention,[9] the American Academy of Pediatrics, the Institute of Medicine of the US National Academy of Sciences,[10] the UK National Health Service,[11] and the Cochrane Library[12] all found no link between the MMR vaccine and autism. While the Cochrane review expressed a need for improved design and reporting of safety outcomes in MMR vaccine studies, it concluded that the evidence of the safety and effectiveness of MMR in the prevention of diseases that still carry a heavy burden of morbidity and mortality justified its global use, and that the lack of confidence in the vaccine had damaged public health.[12] A special court convened in the United States to review claims under the National Vaccine Injury Compensation Program rejected compensation claims from parents of autistic children.[13][14]

 

The claims in Wakefield's 1998 The Lancet article were widely reported;[15] vaccination rates in the UK and Ireland dropped sharply,[16] which was followed by significantly increased incidence of measles and mumps, resulting in deaths and severe and permanent injuries.[17] Physicians, medical journals, and editors[18][19][20][21][22] have described Wakefield's actions as fraudulent and tied them to epidemics and deaths,[23][24] and a 2011 journal article described the vaccine–autism connection as "perhaps the most damaging medical hoax of the last 100 years".[25]

 

The first paragraph is important. As I said, he had a conflict of interest: he was trying to prove the existing vaccine had problems while, at the same time, trying to create a commercial environment for his own version of a vaccine without those problems.

 

 





 

 



 
 
 
 


Batman
Mad Scientist
22939 posts

Uber Geek

Trusted
Lifetime subscriber

  #1561623 29-May-2016 12:18
Send private message

Research fraud is widespread, with greed and ego being the main drivers.




Involuntary autocorrect in operation on mobile device. Apologies in advance.


gzt

gzt

11645 posts

Uber Geek

Lifetime subscriber

  #1561630 29-May-2016 12:28
Send private message

Wow. I did not know the MMR issue went back to a 1998 Lancet article and was so massively influential. I recalled it as 2004/2010 when the controversy and debunking started.

Batman
Mad Scientist
22939 posts

Uber Geek

Trusted
Lifetime subscriber

  #1561679 29-May-2016 14:42
Send private message

Rikkitic:

 

At the risk of hijacking this thread, I would be very interested to know what it is that makes ordinary people so suspicious of science yet so quick to embrace magic. I happen to have several friends who unfortunately fall into this category. They are kind, intelligent people, which is why they are friends, but they seem convinced that scientific research, including the research that has led to most of the home comforts they take for granted, is somehow tainted and part of a vast conspiracy. Yet they have no problem accepting homoeopathy, astrology, Tarot, and in one case, at least, even the existence of angels! Or to be correct, this friend, regrettably a Kiwi, seems to believe in someone who claims to talk to angels, who tell him things like global warming is just a myth so don't worry about it. Apparently he is very popular in the States and has made a ton of money writing books. I just don't get it.    

 

 

 

 

Without wanting to derail the thread: some people perhaps think of science as some weird black magic. You can also have good science, bad science, and black magic - as in theoretical physics. E.g. there is a field of physics speculation that "our universe is part of a computer simulation - probably run by some superior alien beings". I'm sure there are "scientific studies" on homeopathy - you just need to find them.

 

The trick is to separate good science from bad science, and how it applies to the real world.

 

For example, you can have a perfect study showing that cellphone radiation causes cancer in humans - but if and only if there are 80 years of usage at 20 hrs a day.





Involuntary autocorrect in operation on mobile device. Apologies in advance.


Batman
Mad Scientist
22939 posts

Uber Geek

Trusted
Lifetime subscriber

  #1561709 29-May-2016 15:00
Send private message

zenourn:

 

according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.

 

 

 

p of 0.047654 means the probability that the observed effect is due to chance is 4.8%

 

 

 

p of 0.1234 means that probability of the observed effect is due to chance is 12.3%

 






No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.



 

Same thing.

 

Edit: in fact you must be wrong. You only have one null hypothesis in a study, but the study statistics can have hundreds of p-values.





Involuntary autocorrect in operation on mobile device. Apologies in advance.


Fred99
11005 posts

Uber Geek


  #1561755 29-May-2016 15:54
Send private message

zenourn:

 

according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.

 

 

 

p of 0.047654 means the probability that the observed effect is due to chance is 4.8%

 

 

 

p of 0.1234 means that probability of the observed effect is due to chance is 12.3%

 






No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.

I've read over the paper and the analysis is very flawed. If they stick with frequentist analysis they at least need some form of multiple comparison correction, which makes everything non-significant. Ideally they need to fit a single Bayesian regression model with sensible priors, in which case all effects would also disappear (i.e., you would get a high probability that the effect size is close to zero).

Results here are indistinguishable from noise. However, there are plenty of journals now that will publish this type of flawed research for a fee.


 

 

 

I'm glad that's all cleared up then.  

 

Only one of the peer reviewers had that criticism (results indistinguishable from noise).

 

Speaking of noise, appendix D contains the historical control data. It's for male rats only - why not the female rat historical control data as well? (The historical % incidence is used - but no breakdown is shown.)

The male rat control data shows two outliers across both data sets: for glioma etc. (2% across 10 studies / 550 rats) there's an 8% in one study (4/50), and for schwannoma (1.3% across 13 studies / 699 rats) there's a 6% (3/50). That in itself isn't perhaps so interesting - except the outlier results are from the same trial, commencing Feb 17 2011. I don't think anybody involved would suggest that environment, or genetics, or some unknown "something else" is not going on with the controls - and that's perhaps why there's no discussion, as it's just a generally accepted fact that there's a wide range of variance between trials. Take out the historical trial averages, and you've got a small trial with very significant-looking results (control incidence is zero). Sure - it's a mess, but to not release the paper would have been a mistake IMO.
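A quick back-of-envelope check of how unusual those two outlier control groups would be if the pooled historical rates were the true rates, assuming simple binomial sampling (the incidence figures are as quoted above, not re-checked against the paper):

# How likely are the outlier control results if the pooled historical
# incidence were the true rate? Simple binomial tail probabilities; this
# ignores the between-study variability the post is pointing at.
from scipy.stats import binom

p_glioma = binom.sf(3, 50, 0.02)       # P(4 or more gliomas in 50 rats at a 2% rate)
p_schwannoma = binom.sf(2, 50, 0.013)  # P(3 or more schwannomas in 50 rats at a 1.3% rate)

print(f"P(4+ gliomas | 2% historical rate):       {p_glioma:.3f}")
print(f"P(3+ schwannomas | 1.3% historical rate): {p_schwannoma:.3f}")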

 

There are comments in the peer reviews about reliance on historical controls, suggesting larger studies are needed.  

 

 


 
 
 
 


zenourn
164 posts

Master Geek

Trusted
DR

  #1561789 29-May-2016 16:39
Send private message

joker97:

 

zenourn:

 

according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.

 

p of 0.047654 means the probability that the observed effect is due to chance is 4.8%

 

p of 0.1234 means that probability of the observed effect is due to chance is 12.3%


No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.

 

Same thing.

 

Edit: in fact you must be wrong. You only have one null hypothesis in a study, but the study statistics can have hundreds of p-values.

 

 

 

 

Definitely not the same thing! Stating it as "p of 0.047654 means there is a 4.8% probability of observing by chance an effect that is equal to or greater than that seen in this particular sample" is getting better (although there are still several technicalities to consider). The probability that the observed effect is due to chance is actually very close to 100%, as if you repeat the procedure you're very unlikely to get the same observed effect.

 

Every single p-value has an associated null hypothesis (in this case one example is "The difference in the rate of malignant glioma in male rats exposed to 1.5 W/kg GSM-modulated RFR compared to male rats not exposed to radiation is zero").
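For what it's worth, a small simulation sketch of the distinction being drawn here; the group sizes, effect size and 50/50 mix of true and false nulls are made up for illustration and have nothing to do with the NTP study:

# Simulated experiments: in half of them the null is true (no real effect),
# in the other half there is a modest real effect. All numbers hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_experiments, n_per_group, effect = 10_000, 30, 0.5

null_true = rng.random(n_experiments) < 0.5
p_values = np.empty(n_experiments)
for i in range(n_experiments):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0 if null_true[i] else effect, 1.0, n_per_group)
    p_values[i] = ttest_ind(a, b).pvalue

# What the p-value does mean: when the null is true, p <= 0.047654
# is observed about 4.8% of the time.
print(f"P(p <= 0.047654 | null true): {(p_values[null_true] <= 0.047654).mean():.3f}")

# What it does not mean: among results significant at p < 0.05, the share
# where the null was actually true depends on power and on how often the
# null is true in the first place - it is not given by the p-value.
significant = p_values < 0.05
print(f"Share of significant results with the null true: {null_true[significant].mean():.3f}")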

 

I'm a mathematician and work with frequentist and Bayesian statistical models in a medical area on a daily basis ;-)

 

 


Batman
Mad Scientist
22939 posts

Uber Geek

Trusted
Lifetime subscriber

  #1561800 29-May-2016 16:58
Send private message

zenourn:

 

joker97:

 

zenourn:

 

according to the statistical test that is used, the probability that the observed effect is due to chance/coincidence.

 

p of 0.047654 means the probability that the observed effect is due to chance is 4.8%

 

p of 0.1234 means that probability of the observed effect is due to chance is 12.3%


No. It is the probability of observing data as extreme as this given that the null hypothesis is true. Something that scientists get wrong all the time.

 

Same thing.

 

Edit: in fact you must be wrong. You only have one null hypothesis in a study, but the study statistics can have hundreds of p-values.

 

 

 

 

Definitely not the same thing! Stating it as "p of 0.047654 means there is a 4.8% probability of observing by chance an effect that is equal to or greater than that seen in this particular sample" is getting better (although there are still several technicalities to consider). The probability that the observed effect is due to chance is actually very close to 100%, as if you repeat the procedure you're very unlikely to get the same observed effect.

 

Every single p-value has an associated null hypothesis (in this case one example is "The difference in the rate of malignant glioma in male rats exposed to 1.5 W/kg GSM-modulated RFR compared to male rats not exposed to radiation is zero").

 

I'm a mathematician and work with frequentist and Bayesian statistical models in a medical area on a daily basis ;-)

 

 

 

 

I guess each mathematician makes their own definition. In some cases, they follow the chance explanation like

 

https://practice.sph.umich.edu/micphp/epicentral/p_value.php





Involuntary autocorrect in operation on mobile device. Apologies in advance.


zenourn
164 posts

Master Geek

Trusted
DR

  #1561801 29-May-2016 16:59
Send private message

Fred99:

 

I'm glad that's all cleared up then.  

 

Only one of the peer reviewers had that criticism (results indistinguishable from noise).

 

Speaking of noise, appendix D contains the historical control data. It's for male rats only - why not the female rat historical control data as well? (The historical % incidence is used - but no breakdown is shown.)

The male rat control data shows two outliers across both data sets: for glioma etc. (2% across 10 studies / 550 rats) there's an 8% in one study (4/50), and for schwannoma (1.3% across 13 studies / 699 rats) there's a 6% (3/50). That in itself isn't perhaps so interesting - except the outlier results are from the same trial, commencing Feb 17 2011. I don't think anybody involved would suggest that environment, or genetics, or some unknown "something else" is not going on with the controls - and that's perhaps why there's no discussion, as it's just a generally accepted fact that there's a wide range of variance between trials. Take out the historical trial averages, and you've got a small trial with very significant-looking results (control incidence is zero). Sure - it's a mess, but to not release the paper would have been a mistake IMO.

 

There are comments in the peer reviews about reliance on historical controls, suggesting larger studies are needed.  

 

 

 

 

 

 

When it comes to statistics and peer review, it can often be the blind leading the blind. I review so many papers where I recommend acceptance only after major revisions due to statistical issues, while the other reviewers make no comments about the stats.

There are 80 statistical comparisons in this paper, and due to sampling variability, experimenter degrees of freedom, and issues with forking paths you can expect numerous significant results (>10) that are just noise. They haven't applied any correction for multiple comparisons, as it would make everything non-significant, and a positive result is much better for future grant success.
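To put a rough number on the multiple-comparisons point, here is a small simulation of how many "significant" results turn up by chance across 80 comparisons when every null is true. The count of 80 comes from the post; the simulation only captures the raw multiplicity (roughly comparisons x 0.05), so the experimenter degrees of freedom and forking paths also mentioned above would push the real count higher and are not modelled here:

# Simulated studies in which all 80 null hypotheses are true: how many
# p-values fall under 0.05 purely by chance? Assumes independent tests,
# so p-values are uniform under the null.
import numpy as np

rng = np.random.default_rng(0)
n_comparisons, n_sims, alpha = 80, 10_000, 0.05

p = rng.uniform(size=(n_sims, n_comparisons))   # null-true p-values
hits_per_study = (p < alpha).sum(axis=1)

print(f"mean 'significant' results per study:       {hits_per_study.mean():.2f}")
print(f"studies with at least one chance 'finding': {(hits_per_study > 0).mean():.1%}")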

My view is that there is very likely an effect on cancers due to cellphone use (one possible cause is the very minor heating effect of the RF, which will cause certain cellular processes to go slightly faster). However, this effect is likely to be very tiny, and to accurately estimate its size you would likely need a sample size in the hundreds of thousands. I can almost guarantee that the cancer risk from sunlight exposure is several orders of magnitude greater.
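And a back-of-envelope sketch of the "hundreds of thousands" scale: the sample size needed per group to detect a tiny difference in incidence with a standard two-proportion test. The baseline rate and the size of the increase below are hypothetical, chosen only to illustrate the order of magnitude:

# Sample size per group for a two-proportion comparison, using the usual
# normal-approximation formula with Cohen's h. All rates are hypothetical.
from math import asin, sqrt
from scipy.stats import norm

p_control = 0.005    # hypothetical incidence without exposure
p_exposed = 0.0055   # hypothetical 10% relative increase with exposure
alpha, power = 0.05, 0.80

# Cohen's h: arcsine-transformed difference in proportions
h = 2 * (asin(sqrt(p_exposed)) - asin(sqrt(p_control)))

n_per_group = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / h ** 2
print(f"approx. participants needed per group: {n_per_group:,.0f}")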


Fred99
11005 posts

Uber Geek


  #1561821 29-May-2016 17:51
Send private message

zenourn:

 

Fred99:

 

I'm glad that's all cleared up then.  

 

Only one of the peer reviewers had that criticism (results indistinguishable from noise).

 

Speaking of noise, appendix D contains the historical control data. It's for male rats only - why not the female rat historical control data as well? (The historical % incidence is used - but no breakdown is shown.)

The male rat control data shows two outliers across both data sets: for glioma etc. (2% across 10 studies / 550 rats) there's an 8% in one study (4/50), and for schwannoma (1.3% across 13 studies / 699 rats) there's a 6% (3/50). That in itself isn't perhaps so interesting - except the outlier results are from the same trial, commencing Feb 17 2011. I don't think anybody involved would suggest that environment, or genetics, or some unknown "something else" is not going on with the controls - and that's perhaps why there's no discussion, as it's just a generally accepted fact that there's a wide range of variance between trials. Take out the historical trial averages, and you've got a small trial with very significant-looking results (control incidence is zero). Sure - it's a mess, but to not release the paper would have been a mistake IMO.

 

There are comments in the peer reviews about reliance on historical controls, suggesting larger studies are needed.  

 

 

 

 

 

 

When it comes to statistics and peer review, it can often be the blind leading the blind. I review so many papers where I recommend acceptance only after major revisions due to statistical issues, while the other reviewers make no comments about the stats.

There are 80 statistical comparisons in this paper, and due to sampling variability, experimenter degrees of freedom, and issues with forking paths you can expect numerous significant results (>10) that are just noise. They haven't applied any correction for multiple comparisons, as it would make everything non-significant, and a positive result is much better for future grant success.

My view is that there is very likely an effect on cancers due to cellphone use (one possible cause is the very minor heating effect of the RF, which will cause certain cellular processes to go slightly faster). However, this effect is likely to be very tiny, and to accurately estimate its size you would likely need a sample size in the hundreds of thousands. I can almost guarantee that the cancer risk from sunlight exposure is several orders of magnitude greater.

 

 

 

 

Future grant access is a moot point - the research was conducted by the NTP, US Dept of Health and Human Services. Rather than competing for funding, it's their job. The NTP associate director was involved, and he makes some pertinent comments on the final page. I'm just guessing that, rather than being part of the tinfoil hat brigade, there would have been rather a high level of initial scepticism - that they'd been tasked to carry out research prompted by the incessant bleating of the tinfoil hat brigade, probably including a few politicians (it is America, after all).

 

I can't comment at all about relative risk level.  It was a rat study no doubt expected to show no risk. It appears to have shown some risk.  That's the news story - someone will repeat the trials, and we await the mouse trial results.  Perhaps also look a bit harder at epidemiological data - some of it may not be as good as has been assumed.  Most importantly, if there is a mechanism, find out WTF it is.  


gzt

gzt

11645 posts

Uber Geek

Lifetime subscriber

  #1561882 29-May-2016 20:28
Send private message

Scientific American takes a look.

The dose-response claims are repeated there. It appears the data released so far are part of a larger study programme; I have not seen anything definitive on this yet, but it seems implied in several articles.

This will take some time.

Fred99
11005 posts

Uber Geek


  #1561966 30-May-2016 06:19
Send private message

Stuff seems to be convinced - a nice headline thrown in with some of the serious issues they're covering today:

 


Batman
Mad Scientist
22939 posts

Uber Geek

Trusted
Lifetime subscriber

  #1561968 30-May-2016 06:53
Send private message

I have to say, if you think about it, anything can cause anything, because we don't really know.

 

It's like saying, Donald Trump can be the next [insert whatever you wish].





Involuntary autocorrect in operation on mobile device. Apologies in advance.

