I’ve moved this discussion off to its own thread, and I think it’s a valuable discussion to have.
It’s a difficult discussion, but one that needs to be addressed. That said, it’s a rather hard thing to set a policy for …
Here is how I’m looking at it right now; your feedback is appreciated:
- Science to the Core: We are fundamentally a science-oriented site, looking to learn from the best research and clinical studies being done on the topic of longevity (and related topics), and from the best practitioners of the craft: the experts in the field (given that there are not enough hours in the day to read all the papers on a given topic).
- Details and Nuance are Key: We are a site that is focused on digging into the details and nuances of potential longevity therapeutics. References to the source papers are an important part of that process. It’s important here that we always keep in mind the hierarchy of scientific evidence. Personally, I like this iteration of that hierarchy:
- We are a Fringe Website: At the same time, by any definition, we are a “fringe” website; the percentage of people in the US or globally looking at and/or using rapamycin (or any other drug that may target aging) for longevity is likely a small fraction of one percent of the population. This doesn’t mean much by itself, because every new trend initially starts with a small dedicated group of early adopters, but it does mean we tend to have a lot of independent thinkers here who are trying many new things. It also means that we likely have a somewhat skewed risk/reward profile: we are less risk-averse, and less likely to “follow the crowd”, than most of the population. All of this just means that we have a lot more free-range thinkers here, and perhaps more diversity of opinion. I’ve worked in tech, biotech, and digital health startups in Silicon Valley my entire life, and in this environment you get many perspectives. Diversity of opinion can be good, as it exposes us to things outside of our “lane”. And ultimately, we’re not building a single “product” in these forums; we’re each seeking to figure out the optimal health program for our own unique situation and biology, so there will be many different versions of “right” for different people here. We won’t necessarily come to a consensus opinion on any given approach or therapeutic at any given time, and that’s OK.
- High Signal-to-Noise Ratio: As you have probably noticed, the stream of new posts and threads here is quite high now; it’s getting hard to find the time to follow all threads and posts in detail. So I think one of the core philosophies here has to be to keep the “signal-to-noise ratio” as high as possible. We all have only so many hours in the day, and we don’t want to visit a forum with a significant number of off-topic or poorly thought-out posts or links. This is a little like the “nazi bar” problem @KiSS mentioned earlier, though that is an even more extreme example, where the “noise” is repulsive in itself even at a low level. Avoid politics, and focus on science.
- No Assholes Rule: We’re all on this journey together, and we all have blind spots and make mistakes. Assume the best of people, and treat others how you want to be treated. Go hard on the science, but easy on the people.
- A Focus on the Practical / Translation: We are all about figuring out which therapeutic approaches have the best potential for longevity improvements, and then figuring out the best and most cost-effective way to implement them.
- Identify and Weed Out Pseudoscience and Junk Science: The world seems to have been taken over by “influencers” who want attention, and they get it by making outrageous claims that drive engagement. Let’s work together to sort out the junk and focus on good science.

With the help of ChatGPT Research AI, here are the things we want to watch for:
Common Mistakes in Junk Science
Junk science is misleading, poorly conducted, or deliberately manipulated research that misrepresents reality. It is often used to push agendas, sell products, or mislead the public. Below are some of the most common mistakes and red flags found in junk science.
1. Flawed Study Design
- Small Sample Sizes → Too few participants = results lack statistical power.
- No Control Group → Without a proper comparison, conclusions are meaningless.
- Selection Bias → Choosing participants in a way that favors a desired outcome.
- Lack of Blinding → If researchers or participants know the treatment, placebo effects or bias can occur.

Example: A supplement study with only 10 participants and no placebo group claims a new vitamin “cures” disease.
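To see why small samples are such a problem, here is a minimal simulation sketch in plain Python (the effect sizes and the cutoff are invented for illustration, not taken from any real study): a real but modest treatment effect is usually *missed* with 10 participants per group, while the same effect is detected reliably with 100 per group.

```python
import random
import statistics

random.seed(42)

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

def detection_rate(n, effect, trials=2000, t_crit=2.1):
    """Fraction of simulated studies that reach |t| > t_crit.

    t_crit = 2.1 roughly matches the 5% two-sided cutoff for small samples.
    """
    hits = 0
    for _ in range(trials):
        treated = [random.gauss(effect, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        if abs(two_sample_t(treated, control)) > t_crit:
            hits += 1
    return hits / trials

print(detection_rate(10, 0.5))   # n=10 per group: misses the real effect most of the time
print(detection_rate(100, 0.5))  # n=100 per group: detects the same effect far more often
```

The point: a small study that finds “no effect” is nearly uninformative, and a small study that finds a dramatic effect is quite likely a fluke.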
2. Misuse of Statistics
- P-Hacking → Running multiple statistical tests until a “significant” result appears.
- Cherry-Picking Data → Only reporting data that supports a conclusion, ignoring conflicting results.
- Misinterpreting Correlation vs. Causation → Just because two things occur together doesn’t mean one causes the other.
- Relative vs. Absolute Risk Misrepresentation → “Doubles cancer risk!” (relative risk) vs. “Risk increases from 0.01% to 0.02%” (absolute risk).

Example: “Eating chocolate reduces stress by 50%” → based on one subgroup of a study while ignoring the others.
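P-hacking is easy to demonstrate with a toy sketch in plain Python (every number here is pure noise, invented for illustration): measure 20 unrelated “outcomes” in a study where nothing real is happening, keep only the best p-value, and a “significant” finding appears most of the time.

```python
import math
import random
import statistics

random.seed(1)

def p_value(sample):
    # Two-sided z-test p-value for "mean = 0", assuming unit variance.
    z = statistics.mean(sample) * math.sqrt(len(sample))
    return math.erfc(abs(z) / math.sqrt(2))

def hacked_study(n_outcomes=20, n=50):
    # Measure many unrelated "outcomes" on pure noise; report only the best p.
    return min(p_value([random.gauss(0, 1) for _ in range(n)])
               for _ in range(n_outcomes))

# In theory, a "significant" (p < 0.05) result turns up in
# 1 - 0.95**20 ≈ 64% of such studies, even though every outcome is noise.
hits = sum(hacked_study() < 0.05 for _ in range(500))
print(hits / 500)
```

This is why “we tested many endpoints and one was significant” should raise a red flag unless the analysis was pre-registered or corrected for multiple comparisons.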
3. Lack of Replication & Peer Review
- Not Replicable → Results cannot be repeated by other scientists, meaning they may be due to random chance.
- Avoiding Peer Review → Studies published in predatory journals or only on preprint servers may lack proper vetting.
- Ignoring Conflicting Studies → Junk science often dismisses research that contradicts its claims.

Example: A study claims a new drug extends lifespan, but no other researchers can replicate it.
4. Industry Influence & Conflicts of Interest
- Funding Bias → Studies funded by companies with a vested interest often favor their product.
- Ghostwriting → Industry-backed research where authors don’t disclose corporate involvement.
- Conflicted Researchers → Scientists with financial or ideological ties to the subject.

Example: A soda company funds a study that finds “no link between sugar and obesity.”
5. Overhyped or Misleading Claims
- Sensationalized Language → “Revolutionary breakthrough!” “Scientists PROVE this works!”
- Oversimplification of Science → Complex issues (e.g., nutrition, climate, genetics) are reduced to soundbites.
- Overgeneralization → Findings from cell cultures or animal studies are extrapolated to humans.

Example: “This herb cures cancer” → based on a study where it killed cells in a petri dish (not in humans).
6. Ignoring Biological Plausibility
- Defies Basic Science → Claims contradict well-established biochemistry, physics, or medicine.
- Violates Laws of Thermodynamics → Many pseudoscience weight-loss products claim to “burn fat effortlessly”.

Example: “This detox tea removes toxins from your body” → but cannot name a single toxin it removes.
7. Misleading Graphs & Visuals
- Truncated Y-Axis → Graphs that zoom in on small differences to make them look dramatic.
- Unlabeled Axes & Scales → Charts without clear numbers or context.
- Omitting Key Data Points → Leaving out unfavorable results.

Example: A vaccine study graph exaggerates side effects by only showing a subset of total cases.
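The truncated-axis trick can be quantified with a few lines of arithmetic (a toy sketch; the values 100.0 and 100.5 are invented): a bar only looks dramatically taller because the axis starts just below the data.

```python
def apparent_ratio(a, b, y_min):
    # How many times "taller" the bar for b looks than the bar for a
    # when the y-axis starts at y_min instead of zero.
    return (b - y_min) / (a - y_min)

a, b = 100.0, 100.5  # a 0.5% real difference

print(apparent_ratio(a, b, 0.0))   # honest axis: bars look nearly identical (1.005)
print(apparent_ratio(a, b, 99.9))  # truncated axis: b looks ~6x taller
```

A quick sanity check when reading a chart: find where the y-axis starts, and mentally redraw the bars from zero.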
8. Appeal to Authority & Consensus Manipulation
- “A Doctor Said It, So It Must Be True” → Citing an “expert” instead of actual evidence.
- Fake Consensus → “Thousands of scientists agree!” (but they aren’t experts in that field).
- Using Outliers as Proof → Highlighting one or two contrarian scientists while ignoring the overwhelming majority.

Example: “Dr. X claims vaccines cause autism” → ignoring the thousands of studies proving otherwise.
9. Reversing the Burden of Proof
- “Prove Me Wrong” Fallacy → Making a claim and demanding others disprove it instead of providing evidence.
- Shifting Goalposts → Changing the claim when evidence debunks the original one.

Example: “No study proves 100% that EMFs don’t cause cancer, so they must be dangerous!”
10. Fake or Irrelevant Citations
- Using Low-Quality Sources → Citing blog posts, non-peer-reviewed papers, or outdated studies.
- Irrelevant References → Citing studies that don’t actually support the claim.
- Mistranslating Scientific Language → Misrepresenting what a study actually found.

Example: A diet study cites research on mice, but the headline claims it applies to humans.
Final Summary: How to Spot Junk Science
| Mistake | How to Identify It | Example |
| --- | --- | --- |
| Flawed Study Design | Small sample, no control group | “10 people took this supplement, and they all lost weight!” |
| Misuse of Statistics | Cherry-picking, p-hacking | “Eating bacon increases cancer risk by 100%!” (without absolute risk data) |
| Lack of Replication | No peer-reviewed confirmation | “One study found this, but no one else has replicated it.” |
| Industry Influence | Sponsored research, undisclosed conflicts | “New drug study funded by the company selling it.” |
| Overhyped Claims | Sensationalized headlines, no nuance | “This fruit CURES DIABETES!” |
| Biological Implausibility | Violates basic science | “Quantum energy patches heal your cells!” |
| Misleading Graphs | Truncated axes, cherry-picked data | “Look at this huge spike!” (zoomed-in graph with tiny actual difference) |
| Fake Consensus | Appeal to authority, ignoring majority view | “Dr. X says climate change is fake, so it must be!” |
| Burden of Proof Fallacy | Demanding others disprove nonsense | “Prove to me Bigfoot doesn’t exist!” |
| Fake Citations | Misrepresenting studies, citing irrelevant work | “This study on rats proves it works in humans!” |
Conclusion
Junk science is everywhere—in media, health trends, and even scientific journals. Recognizing common mistakes can help you separate real science from pseudoscience.