We live in a world where #FakeNews gets tossed around more casually than it probably should be, but there are plenty of examples of bad science and misrepresented information to keep us on our toes.
The traditional media has become more fractured than ever, and everywhere we look online there’s some group looking to push an agenda and shape a narrative using a data set and “facts” that they’ve massaged to influence us.
It’s critically important for you to keep both of your eyes peeled for bad science and facts that are misrepresented in an effort to manipulate and persuade you.
Spotting bad science isn’t exactly the easiest thing to pull off, but it isn’t going to take a lot of work to master, either. A little bit of due diligence, an open mind, and an ability to look at the “big picture” of the science and facts you come across will give you much more control over your decisions, your behavior, and how you are impacted by the media today.
Here are a handful of tactics you can use to keep yourself protected from bad science, tactics that work wonders regardless of the kind of science being peddled to you.
Sensationalized Headlines

Clickbait journalism is alive and well these days and shows absolutely no sign of slowing down anytime soon. Even though most of us have a pretty sensitive radar for picking up on clickbait, bad science headlines oftentimes oversimplify scientific research more subtly than we expect – slipping under our defenses and manipulating our response.
Pretty much any headline that starts off with “You won’t believe how these shocking…” is going to take you for a ride down a sensational journey.
Misinterpreted Results

News departments very frequently take the facts of a particular story and distort, manipulate, or misinterpret the information they have available to paint a specific portrait and control a narrative. This is where researching a specific story, topic, or bit of science from a number of different angles and analyzing the results will give you the truth – whatever it may be.
Every sensational story that linked MMR vaccinations to autism is a prime example of misinterpreted results. We are nearly 15 years past the height of that hysteria and are still watching as hundreds of thousands of lives are negatively impacted by those misinterpreted results.
Conflicts of Interest
Almost all of the super successful businesses on the planet today employ armies of researchers and scientists specifically tasked with creating (and publishing) seemingly independent research that really just supports whatever position, product, or service they are looking to profit from.
The cigarette industry in particular was famous for doing this, especially with advertisements claiming that “X brand is the only one recommended by medical professionals”.
Correlation and Causation
Science can be massaged and misinterpreted so that correlation and causation are conflated with one another, but it’s important that you do everything you can to avoid mixing these two up.
Misrepresented science will very often try to sell you a bill of goods, promoting one specific issue as the underlying condition behind a major problem – when we all know that the issues we face (even the most mundane issues) are always impacted by multifaceted influences.
If you stubbed your toe the moment you flipped a light switch, that doesn’t necessarily mean that flipping light switches causes toe injuries.
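The classic illustration is two things that rise together because a third factor drives them both. Here’s a minimal sketch (all numbers are invented purely for illustration): ice cream sales and drowning incidents both track temperature, so they correlate perfectly with each other even though neither causes the other.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    std_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (std_x * std_y)

# Both series are driven by temperature, not by each other.
temperatures = [10, 15, 20, 25, 30]          # degrees C (invented)
ice_cream_sales = [200, 300, 400, 500, 600]  # cones sold (invented)
drownings = [1, 2, 3, 4, 5]                  # incidents (invented)

print(pearson(ice_cream_sales, drownings))   # 1.0 -- perfect correlation, zero causation
```

A correlation of 1.0 here tells you nothing about ice cream causing drownings; it only tells you both numbers move with the weather.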
Unsupported Conclusions

Every great scientific discovery was initiated by an inquiring mind looking to chase down unsupported conclusions, but that doesn’t mean that every unsupported conclusion out there has any real merit without a factual basis supporting it.
Bad science very often will make a very outsized and controversial conclusion with next to no supporting material backing it up, using conjecture, hypothesis, and anything but hard science and experimentation to prove out the contention in the first place.
Speculative language is often used to more subtly promote unsupported conclusions. The authors of these kinds of arguments inevitably try to frame them in friendly language that is more than a little bit wishy-washy, and that’s how you know they aren’t backing up whatever they have to say with a true scientific backbone.
Whenever you come across conclusions that aren’t supported by hard facts or information, but are instead propped up by “we all feel” or “most people know”, you know you’re dealing with unsupported conclusions.
Problems with Sample Size
Professional researchers and scientists understand that sample size is everything, and that with a carefully sized group of test subjects you can control and massage the data in any way you see fit.
If you’re looking to control the outcome of a specific experiment, limiting the size of your sample is the best way to eliminate opportunities for your experiment to go wrong. If, on the other hand, you’re looking to water down an inconvenient result, you blow up the sample size so that things don’t look all that bad at all.
Remember the 2016 election polling controversies, and how far off base some of those polls were from the actual results that evening? That is a perfect example of the problems that come from pulling conclusions out of a minuscule sample size.
Unrepresentative Samples Used
To have any real legitimacy, human trials in science should involve participants that have been selected from as large a representation of the population as possible. You want to see real diversity here to make sure that your experiment is as inclusive as possible.
However, those that want to hijack the results and spin things in a specific direction will very often underrepresent specific groups or sections of the population to control things more effectively. This obviously creates quite a bit of bias towards a particular outcome, but if this bias isn’t announced (and there are ways to hide these kinds of control mechanisms) it’s easy for the final outcome to be taken at face value.
A periodical called the Literary Digest sent out mock ballots ahead of the 1936 presidential election, looking to better understand the pulse of the nation at that particular point in time. They sent the ballots to their subscription list, a list made up almost entirely of well-off individuals that owned telephones and automobiles – at that point a very small segment of the population.
They published these results and couldn’t have been farther off from the reality of that election, all because they used a sample group that was unrepresentative of the voting citizenry at the time.
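You can sketch the Literary Digest failure with a couple of made-up numbers. Suppose (purely for illustration) 40% of voters own telephones and automobiles and favor a candidate at 60%, while the other 60% of voters favor that candidate at only 30%. Polling only the owners badly overstates support:

```python
# Hypothetical electorate split into two strata (all numbers invented).
strata = [
    {"share": 0.4, "support": 0.60},  # owns a phone/car -- the magazine's mailing list
    {"share": 0.6, "support": 0.30},  # everyone else -- never sampled at all
]

true_support = sum(s["share"] * s["support"] for s in strata)
biased_poll = strata[0]["support"]  # what you get sampling only the first stratum

print(round(true_support, 2))  # 0.42 -- the real electorate
print(biased_poll)             # 0.6  -- what the skewed sample reports
```

The biased poll isn’t “wrong” about the people it asked; it’s wrong about everyone it never asked.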
No Control Group
Control groups are a fantastic way to prove out the effectiveness or ineffectiveness of a specific test: the control group is given none of the substance being tested on the other segment of the experiment.
Without a control group in place, it’s impossible to see what “baseline” results would have looked like, which gives researchers an opportunity to shape the final outcome in a way that is quite favorable to them – making the results look much larger or much smaller (depending upon the narrative) than they would look next to a control group.
You see this kind of approach in the weight loss market all the time. Bad science will claim people have the chance to lose 15, 20, or 30 pounds (or more) in 30 days using a specific pill or piece of exercise equipment – but it never tells you that the people who achieved these results would have enjoyed much the same results had they simply cleaned up their diet or exercised normally, the way a control group would have.
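In numbers (all invented for illustration), the control group is what separates the headline figure from the actual effect of the product:

```python
# Hypothetical 30-day weight-loss trial (all numbers invented).
treatment_losses = [6, 8, 7, 9, 10]  # pounds lost taking the pill plus diet advice
control_losses = [5, 7, 6, 6, 6]     # pounds lost with the same diet advice, no pill

def mean(values):
    return sum(values) / len(values)

headline_claim = mean(treatment_losses)                        # what the ad reports
actual_effect = mean(treatment_losses) - mean(control_losses)  # what the pill adds

print(headline_claim)  # 8.0 -- "lose 8 pounds in 30 days!"
print(actual_effect)   # 2.0 -- the portion attributable to the pill itself
```

Drop the control group and the ad gets to claim all eight pounds, even though six of them came from the diet everyone was on anyway.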
Zero Blind Testing
To really have an unbiased test, scientists and researchers should separate control groups from test groups by keeping everyone in the dark. Double-blind tests go a step further, hiding the test and control group assignments even from the researchers themselves until the experiment has concluded.
This helps guarantee that bias doesn’t creep into the examination and that everyone goes through the motions the same as everyone else. Blind tests and double-blind tests aren’t always feasible, however, and in certain situations they may not even be ethical.
Popular soft drink and automotive companies will run blind comparison tests against their competition all the time, featuring these tests prominently in their advertising – particularly when they come out on top. This is a perfect example of blind testing combined with selective reporting, as they obviously show only the results that give them the edge when it comes to winning new customers.
Selective Reporting of the Data
Selective reporting, or cherry picking, is when specific data sets from a test or examination are pulled out and separated from the test as a whole, with no intent other than to support the conclusion and contention that was put forward before the tests or examinations took place.
At the same time, any data sets that do not fall in line with this narrative or conclusion are ignored altogether – and sometimes the information is destroyed so that the narrative can continue moving forward unopposed.
This can sometimes be the most challenging of all the bad science strategies to overcome, if only because you have to have access to the entirety of the data to know whether or not you’re dealing with selective reporting in the first place.
Nearly every news media outlet takes advantage of selective reporting of data sets, particularly when it comes to political issues. Politicians will also use this approach to push forward their positions or legislative efforts, with groups hiring outside research firms to find the facts they need to present their case in the best light possible.
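Cherry picking is easy to sketch. Below, eight hypothetical trials (counts invented for illustration) of an ineffective treatment hover around a 50-per-100 success rate, but the cherry-picked headline quotes only the best run:

```python
# Hypothetical successes per 100 patients in eight trials of an ineffective treatment.
trial_successes = [48, 52, 47, 61, 50, 45, 55, 49]

honest_summary = sum(trial_successes) / len(trial_successes)
cherry_picked = max(trial_successes)  # report only the most favorable trial

print(honest_summary)  # 50.875 -- indistinguishable from chance
print(cherry_picked)   # 61 -- the number that makes the press release
```

Run enough trials of a useless intervention and one of them will look impressive by luck alone; reporting only that one is exactly the trick described above.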
Results That Cannot Be Replicated
The best, most concrete conclusions in the world of science are those that are clearly outlined with tests and examinations that can be easily repeated by independent researchers following the exact same steps and protocols.
Bad science will utilize anything but standardized testing and examination protocols to find ways to manifest the conclusions or hypotheses it has put forward in the first place. Oftentimes these extraordinary claims lack the extraordinary evidence necessary to back them up, and you’ll know these kinds of strategies are in play when zero independent researchers can come to the same conclusions.
Anytime you come across “the next big thing” in the world of business, or a lazy man’s pathway to infinite money, with the results all tied up in some mystical approach that the author can’t share with you right now – but will for just three easy payments of $99.99 – you’re likely dealing with a situation of unrepeatable results.
Material that Hasn’t Been Peer Reviewed
The peer review process is a major piece of the scientific puzzle and adds a lot of credibility and substance to even the most outlandish claims or breakthroughs, especially when peers have an opportunity to replicate the results and push the field of science even further into the future.
Scientists as a community are always willing to appraise and critique different studies before they are published, and can act not only as sounding boards and independent researchers but also as a stabilizing voice for scientists that are looking to move forward with a narrative that may not be entirely accurate.
Any research that hasn’t been peer reviewed should be looked at through a pretty suspicious lens. You never know exactly what you’re getting into with papers or research that have been pushed to the publishing floor without any extra eyeballs looking them over.
It was only a short decade ago that the scientific community had to confront an uncomfortable truth: the rate of retracted papers that had been published without peer review had skyrocketed to more than 10 times the baseline rate of just a few years prior.
The final figures still haven’t been released, but researchers who began to study this phenomenon were amazed to discover that almost 60% of the retractions were the result of outright fraud. One particular scientist, an anesthesiologist in Switzerland, had almost 90 retractions in the years between 2005 and 2008 – publishing information that had never been peer reviewed while promising that it had.
A classic example of bad science is the Law of Attraction.