
No, the problem is a lot deeper than just "capturing the protocol in the methods section is hard". (There are very interesting articles about how shifts in word frequency over time might be a confounding factor, one that can cause old papers - from the 70s and 80s - to fail replication. But then there's no follow-up with a new corpus and a new replication attempt.)

It's not just bad (vacuous) science; at its core it's people acting in bad (selfish, non-scientific) ways: https://statmodeling.stat.columbia.edu/2016/09/21/what-has-h...

> I guess the most important thing we could learn from this is that it's important to replicate any current studies right now, and not wait forty years to do so.

Yes, that too, but what's even more important is shifting into a mindset that starts with good models and good data-generating processes (i.e. experiments), so that we can then check and compare the models' predictive power. Otherwise we get these statistically flawless abominations that prove ESP:

https://statmodeling.stat.columbia.edu/2011/01/11/one_more_t...
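A minimal toy sketch of what I mean (my own Python example, not from the linked post; all names and numbers are made up): write down an explicit data-generating process, then compare models by how well they predict held-out data instead of by in-sample significance:

    # Assumed data-generating process: y = 2x + noise.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=200)
    y = 2 * x + rng.normal(scale=1.0, size=200)

    # Hold out half the data for prediction, not just for fitting.
    x_train, x_test = x[:100], x[100:]
    y_train, y_test = y[:100], y[100:]

    # Model A: mean-only ("no effect"). Model B: least-squares line.
    pred_a = np.full_like(y_test, y_train.mean())
    slope, intercept = np.polyfit(x_train, y_train, deg=1)
    pred_b = slope * x_test + intercept

    for name, pred in [("mean-only", pred_a), ("linear", pred_b)]:
        mse = np.mean((y_test - pred) ** 2)
        print(f"{name}: held-out MSE = {mse:.3f}")

The point isn't the toy models, it's the order of operations: the generative assumptions come first, and the comparison is about predictive power.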

And, sure, yeah, it's hard to do this. But otherwise we'll have nothing more than just-so stories supported by random data that happened to break through some significance threshold.
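To make "random data breaking through a threshold" concrete, here's a minimal simulation (my own sketch; the 20-outcomes-per-study setup is a hypothetical, loosely in the spirit of the garden-of-forking-paths argument):

    # Pure noise: both groups come from the same distribution, so every
    # "significant" result below is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_outcomes, n_subjects = 1000, 20, 30

    hits = 0
    for _ in range(n_studies):
        p_values = [
            stats.ttest_ind(rng.normal(size=n_subjects),
                            rng.normal(size=n_subjects)).pvalue
            for _ in range(n_outcomes)
        ]
        if min(p_values) < 0.05:
            hits += 1

    # Roughly 1 - 0.95**20 ≈ 64% of pure-noise "studies" find something.
    print(f"{hits / n_studies:.0%} of studies had at least one p < 0.05")

Run that and nearly two thirds of the no-effect studies produce at least one publishable-looking p-value.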


