A Rough Guide to Spotting Bad Science

A brief detour from chemistry, branching out into science in general today. This graphic looks at the different factors that can contribute towards ‘bad’ science – it was inspired by the research I carried out for the recent aluminium chlorohydrate graphic, where many articles linked the compound to breast cancer, referencing scientific research that drew questionable conclusions from its results.

The vast majority of people will get their science news from online news site articles, and rarely delve into the research that the article is based on. Personally, I think it’s therefore important that people are capable of spotting bad scientific methods, or realising when articles are being economical with the conclusions drawn from research, and that’s what this graphic aims to do. Note that this is not a comprehensive overview, nor is it implied that the presence of one of the points noted automatically means that the research should be disregarded. This is merely intended to provide a rough guide to things to be alert to when either reading science articles or evaluating research.

EDIT: Updated to version 2!

EDIT 2 (April 2015): Updated to version 3, taking into account a range of feedback and also sprucing up the design a little.

Support Compound Interest on Patreon for post previews and more!

The graphic in this article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Want to share it elsewhere? See the site’s content usage guidelines.

You can download the previous version as a PDF here.

Purchase in A2, A1 & A0 poster sizes here.

The second version of the graphic is also available in: Portuguese (courtesy of Marco Filipe); Russian (courtesy of orgchem.by); and Spanish (courtesy of Carolina Jiménez at nuakas.com).

222 replies on “A Rough Guide to Spotting Bad Science”

I think #5 should be replaced with nearly the opposite. Scientists recognize that any conclusions they make are tentative, and treat them as such. Theories are the best possible explanation based on available evidence and if they are impervious to being overturned by contrary evidence, they are not scientific. Certainty only exists in math and philosophy.

Science finds evidence that supports. Pseudoscience finds “facts” that “prove.”

The primary scientific literature and good pop science writing should not use words like “certain,” “absolute,” “definite,” “right,” “correct,” “always,” and their opposites when referring to observational or experimental science.

A fair comment – I guess I was trying to get more at the over-use of these. I think I’ll leave it as is for now (I literally just spent a few hours tweaking it), because I think it’s also important that people realise exactly what you say – that research is just theory, even when it’s not ‘bad’ science, and that they should be open to questioning it, rather than just taking it at face value.

I agree, Alex, and it’s something I push frequently when people claim science as having “the truth”. Science determines the most likely probability, no more than that.

Great. But #11 is a bit weak: “potentially replicable” is one thing; having been replicated by independent research is another. Then, ideally, it should be tested over a wide range of conditions to check for generalisability and see if the result is highly specific/fragile.

Thanks for the pointers – I’ll rephrase that one in the next version to improve it.

Along the same lines, I was definitely looking for some mention of peer review here.

I guess that more or less goes in #12… maybe I’m putting too much emphasis on the phrase “peer review”, since the process of peer review has been more or less described in #12.

Tweaked the description for #11 in this current version now anyway – hopefully a little clearer!

Point 6 – the issue of small sample sizes is relevant to all trials, not just those in humans. This is going up on the undergraduate common room wall.

Good point – I’m planning to update it a little following people’s feedback this evening, so I’d hold fire on printing it until then!

You might want to raise some stipulations with regard to sample size. Plenty of studies have a few subjects, but have high levels of control by observing them multiple times or creating highly controlled experimental environments. For example, how many different types of balls do you have to roll down a plane before you are convinced you have a relationship between angle and speed? Many behavioral studies in both humans and animals also utilize single-subject designs (the name just represents the experimental design; more than one subject is often used). Using within-subject designs often allows for greater levels of control, with each subject serving as their own control.

I think I’m going to add some qualifiers underneath the graphic in the post. This was only intended to be a ‘rough’ guide, which got a lot more attention than expected 🙂 Your points are worth including, and since there are probably several other points where exceptions can be made, it makes sense to include more detail outside the graphic rather than trying (and probably failing) to squeeze it all in!

Good work, the graphic includes many useful tips for sorting the wheat from the chaff.
One other guideline that might be worth a mention is that ‘extraordinary claims require extraordinary evidence’. Claims for breakthroughs, especially ones that contradict well-established ideas, should be treated with great caution unless the supporting evidence is extremely strong. Though there are notable exceptions, most such claims turn out to be unfounded, e.g. ‘cold fusion’, or the recent claim that cells can be converted into stem cells by treatment with acid.

Cheers! I’ll try and squeeze that one in somewhere – not sure there’s space for an extra point, but maybe I can fit it into one of the subsections – which do you think would fit it best?

One possibility would be to merge it into Point 11, by making the point that extraordinary claims should be regarded with particular skepticism until tested and reproduced by others.
Another option might be to merge Points 8 and 9, both of which are concerned with the need for good control experiments, and add the tip about the need for extra caution when evaluating extraordinary claims as a separate point.
These are just some very quick thoughts, the dilemma may get worse if other good guidelines are suggested. But do please persevere, this is really useful work.

Thanks, I think merging it with 11 seems like it’ll be the easiest to fit in concisely. Working on version 2 at the moment!

As a general science geek (albeit, an armchair one), I find myself often trying to explain precisely these key points in arguments on the value of numerous claims by sensationalists and info-spammers. This relates the important points succinctly; I don’t have to be a professional researcher to find value in it. Thank you very much for taking the time – it is appreciated. Expect many, many popular networking shares!

Thanks! Whilst the idea was to try and get it out there and persuade more people to consider these points when appraising science in the media, it’s certainly taken off a little more than I expected it to 🙂

#6: The language implies that larger sample sizes will reduce bias. This is a common misconception. If a sampling method is biased, larger sample sizes will not fix the bias unless the sample size gets so large as to approach the population size.

Thanks for the pointer – that wasn’t the intention when writing it, but clearly in trying to cram it into the shortest sentence I could fit, it could come across that way. Since non-representative sampling is mentioned separately, I didn’t mention it as part of that point. Any particular suggestions you’d make for how to reword it so it’s a little clearer?

To keep things simple, I’d just remove “when applied to the whole population”.
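
To make the sampling-bias point concrete, here is a minimal Python sketch (a hypothetical illustration with made-up numbers, not anything drawn from the studies discussed here): an estimate built from a biased sampling frame stays wrong however large the sample gets, while a simple random sample homes in on the true population mean.

```python
# A rough, hypothetical illustration: a biased sampling method stays biased
# as the sample grows; only random sampling converges on the true mean.
import random

random.seed(42)

# Simulated population of 100,000 "daily intake" values (arbitrary units):
# half the population sits around 1.0, the other half around 2.0.
population = ([random.gauss(1.0, 0.3) for _ in range(50_000)]
              + [random.gauss(2.0, 0.3) for _ in range(50_000)])
true_mean = sum(population) / len(population)

# A biased sampling frame: only individuals with intake above 1.5 can be reached.
biased_frame = [x for x in population if x > 1.5]

for n in (100, 1_000, 10_000):
    random_sample = random.sample(population, n)    # unbiased method
    biased_sample = random.sample(biased_frame, n)  # biased method
    print(f"n={n:>6}  random mean={sum(random_sample)/n:.2f}  "
          f"biased mean={sum(biased_sample)/n:.2f}  true mean={true_mean:.2f}")
```

Increasing n narrows the scatter of both estimates, but only the randomly drawn one narrows in on the right answer; the biased one just becomes more precisely wrong.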

In many cases these could be bad reporting of good science.

I’ve seen countless cases where properly done, carefully worded papers are spun into something much more exciting but factually wrong.

Then of course the internet is the ultimate “telephone game”, where the original paper is turned into an article for the science fan by a magazine, which gets picked up by a newspaper and reworded for the layman, which in turn gets picked up by my-favourite-conspiracy-theory.com and turned into something bizarre which somehow drags in chemtrails, the Illuminati and Area 51.

That falls a bit under #1 itself.

Yes, blinding makes for better science, but no blinding alone doesn’t make for bad science. There are plenty of situations in which blinding simply isn’t possible – whether I’m comparing an intravenous drug to an orally taken one, or comparing the use of two different surgical instruments or laboratory analyzers. Sometimes blinding of both the patient and the doctor is impossible; sometimes blinding of the treating physician isn’t possible. That’s simply a fact of life.

A smaller issue is the sample size. Stating that a smaller sample size lowers the confidence in the results is waffling a bit. It increases variance and, partly as a consequence, increases the probability of committing type II errors. If I nonetheless see a difference, it might very much speak for one existing, if I can see it despite the amount of noise. In any case, small sample sizes should only be a reason for suspicion when larger sample sizes would have been feasible and/or there was no pressing need to do the study to begin with. When dealing with orphan diseases, you’re pretty much stuck with small sample sizes. That can hardly mean that trying to find a cure, or at least a remedy for the worst symptoms, is “bad science”. It’s just difficult to do reliably.

Good science can do a lot of things, but it can’t shape reality to fit what’s desirable. In general, these are good points, but it misses out on the issue of feasibility. The main point should be to understand that bias can be introduced and what bias can be expected, so that if all else fails, efforts can be made to control for it.
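
To put a rough number on the type II error point above, here is a small, entirely hypothetical Python simulation (my own sketch using scipy’s standard two-sample t-test, not anything from an actual study): it counts how often a genuine 0.5-standard-deviation difference between two groups fails to reach p < 0.05 as the per-group sample size shrinks.

```python
# Hypothetical power simulation: a real effect exists, and we count how often
# a two-sample t-test misses it (a type II error) at different sample sizes.
import random
from scipy.stats import ttest_ind

random.seed(0)
TRIALS = 2_000
EFFECT = 0.5  # true difference between group means, in standard deviations

for n in (10, 30, 100):
    misses = 0
    for _ in range(TRIALS):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(EFFECT, 1.0) for _ in range(n)]
        _, p = ttest_ind(control, treated)
        if p >= 0.05:  # the real effect went undetected
            misses += 1
    print(f"n per group = {n:>3}: type II error rate ~ {misses / TRIALS:.0%}")
```

Small samples mostly inflate the chance of missing a real effect rather than of inventing one, which is exactly why a small study that still shows a clear difference can be informative, while a small study reporting ‘no effect’ says very little.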

Thanks for taking the time to comment!

The points you make are all inarguable. The goal with this graphic was to create a succinct list of points with a short, readable summary for each – in the process, it was necessary to slightly simplify one or two of the more complicated points in order to fit them into the available space. In particular, on points such as blinding, putting in a disclaimer that it’s not relevant in all trials, whilst absolutely correct, would have diminished the amount of text available for explaining the point itself, which I decided to prioritise.

I agree it’s an unhappy compromise, but if you’ve got any suggestions for modifications of the text within the small scope available, it’d be great to hear them! I’m currently making a few changes based on feedback so far, which’ll hopefully address a few issues that have been brought up.

Well stated. On top of some feasibility issues, it should also be noted that we need observational studies and small interventional trials before we can move forward with large well-designed trials. Calling these types of studies “bad science” lacks perspective.

In medicine, we usually move from observational studies (#3, #7) -> open-label trials (#8, #9) -> small RCTs (#6, #10) -> large RCTs. The observational studies, open-label trials and small RCTs are necessary components of the progression towards discovering something valuable in medicine. It doesn’t mean that these studies are bad science; it’s just that they are not the highest quality of evidence.

Perhaps a more apt title would be, “Lower quality science.”

Hi Brant, thanks for the comment. I completely agree that small trials are absolutely a part of science, and with this graphic I’m not trying to say they should be disregarded. As the title, ‘A Rough Guide’, suggests, the aim here was merely to flag up a few points that *may* contribute towards the science or conclusions of a study being unreliable. Too many people will read an article online and it won’t even occur to them to look past the substance of the article into how reliable the research it was based on is. I suppose to an extent they are the target audience of this graphic, though admittedly they’re probably the harder demographic to hit.

I’d say it’s perfectly acceptable for a study to hit one of the criteria in the graphic, say a small sample size, or no blinding, but for its conclusions to still be completely valid. In reality, it’s not something we can put a catch-all ’12 Tell-Tale Signs of Bad Science’ tag on – people need to consider each of these points on a case-by-case basis, and ultimately decide whether the conclusions of the study are realistic in light of this.

I have friends — smart people but not versed in evidence-based medicine — who often understand a few of these points you list and overuse them to analyze studies. The “correlation is not causation” argument is classic; people get very giddy to share this concept; it’s cute. However, dismissing studies altogether because they have inherent limitations — e.g. an observational study — is almost as short-sighted as not understanding potential limitations in the first place. So in addition to teaching about bad science, I’d love to see an infographic on the totem pole of evidence: http://www.cebm.net/?o=1025

And the same goes for teaching about medicine in general. I’d love to see more focus on teaching about evidence-based medicine rather than teaching about bad science (of course, they mostly go hand in hand).

Cheers, Brant

Something focusing more on the processes involved in research could be a good idea for a future post. It’d be great to give people a clearer idea of the steps involved in the research process, particularly in the field of medicine.

You can only teach the willing, though, Brant. Given that I once had a laboratory specialist (you would think these would be the number crunchers among physicians) tell me that he didn’t want to hear a maths lecture when I tried to explain statistical constraints on measurement precision, I sometimes have my doubts. And I’ve likewise seen it too often that industry had to stage statistics workshops to enable a meaningful discussion of the pros and cons of certain studies, because the necessary knowledge simply wasn’t there in parts of their customer base…

“The ‘correlation is not causation’ argument is classic; people get very giddy to share this concept; it’s cute.”

Almost as cute as your condescension and your straw man?

Few intelligent individuals, whether “versed” in the mysteries of the scientific priesthood or not, “dismiss” studies because they are muddy with respect to causation. Such commentators merely indicate that fact, as well as the fact that some studies (and reports thereof) do a better job of clarifying the relationship between correlation and causation than others.

Oh, and when urging us to learn about “evidence-based medicine”, don’t forget the evidence-based medicine that brought us such wonders as Vioxx.

And after the large RCTs we nowadays go back to observational trials again to observe real-world use… both in terms of adverse effects as well as adherence and ACTUAL clinical and economic benefit.

RCTs are the equivalent of a laboratory experiment: they are great for isolating certain factors and testing for them – but we should take great care not to jump to the conclusion that the result is actually relevant in a real-world context.

It’s actually ironic: years ago, payers were all demanding more RCTs, more RCTs, more RCTs – now they are all too often complaining that the results observed in RCTs don’t manifest in actual use.

On point 9 – some modification might be in order to say that in certain circumstances, such as in vaccine testing, a controlled randomized double-blind study against a placebo would be unethical (antivax folks love to pull that one as a way to “prove” vaccines have not been tested adequately for safety). We wouldn’t do that with car seats either, but other ways of testing efficacy prove them to be a valuable tool in helping to prevent serious injury in car crashes, for example. The Skeptical OB did a good blog post on this point.

Thanks, I’ll try and squeeze that in somewhere in the updated version.

Sure, I just know it’s graphics like these that can be grabbed by the antivax crowd and, yes, cherry-picked in such a way as to totally distort and supposedly support their POV. Maybe a simple “when appropriate ethically as well as feasibly, double blind studies…”. Others may have a better way to say it, but that is just one option. You could even put a * to point to more detail about that particular concern. Just a thought, and thanks!

Thanks for the rewording suggestion – managed to fit it in quite comfortably! The updated version will be up shortly.

It’s a great guide, especially for general use by people not trained in science. I was wondering if it’s OK to translate it into Portuguese? I’m part of a skepticism/science non-profit organization, and this is the kind of subject that we usually have to deal with. It would be great if it could be shared with non-English speakers. We will give you full credit and direct people here, of course.

Hi,

Thanks for the comment – if you’d like to translate the text to Portuguese, with a link back to the site, that’d be great! All I’d say is to hold fire until this evening, as I’m currently editing a few points based on feedback that’s been submitted, and then I’ll update the image/PDF.

Important to note that Single Subject Designs do not use random assignment, because we are interested in within-subjects variability. Most people who use single subject designs are interested in individuals on the tails of a normal distribution. We do “cherry pick,” so to speak – but we also include a detailed subject description and are cautious about over-stating the generality of our findings. So a well-designed, replicated Single Subject Design would not fit into Numbers #7, #8, or #9. (Participants serve as their own control by observing their performance over time.) I know that this is a general knowledge poster, but just wanted to point out those details for follow up.

Thanks for the tip – I think it’s probably getting to the stage where I need to start including a long list of disclaimers in the accompanying post, as they’re not all going to fit onto the graphic itself 🙂

This is fantastic, and I’m looking forward to seeing the revised version with others’ suggestions incorporated. While you’re updating, please correct #2: “misinterpreted” is misspelled.

Damn, just changed this to the newer version before noticing that. As soon as I get time I’ll fix it, cheers for the spot!

Here are some of my comments. Note that I am a social scientist doing human subject experiments and this list may not have been intended for this kind of research. However, it seemed to be positioned very generally so here’s my take:

#1 Not a fan of sensationalizing, but it’s not necessarily a sign of bad science. Unfortunately, this is also a direct outcome of the “publish or perish” system.
#5 With the exception of math and its applications, science will always use ‘may’, ‘could’, ‘might’. It’s a stochastic world, and that’s why we need/use statistics.
#7 Representativeness with regard to the variables of interest is what matters. Otherwise we’d need clones for replicates.
#8 Last sentence: there’s no way you can control “all” variables in human subject research. You control what you can and the rest will be included as noise. You just try to prevent systematic biases.
#9 I’d add “and/or random assignment”.
#10 May be too hard to detect, especially due to the file-drawer factor.
#12 Disagree with this one. No need for scientific elitism. Science, Nature, etc. have their own issues and biases. There are a ton of other quality journals that don’t have the same ‘wow’ factor. Also, citation counts may not be the best indicator of quality.

These may not be relevant to the original intention behind the list. If so, it would be useful if you could either specify the target audience/discipline or clarify some of the assumptions underlying the items you listed. Thanks for receiving comments!

Hi Suzi, thanks for your comment.

Obviously, the graphic was entitled ‘A Rough Guide to Spotting Bad Science’, with emphasis on the ‘rough’! Your comments seem to pertain to the original version of the post, rather than the updated version that’s here on the site, so you’ll find that several of your points may already have been addressed. In addition, I’ve also responded to some of the other comments, some of which made similar suggestions.

As far as the intent of the graphic goes, it is meant to be general, though since this is obviously a chemistry-related site, it was originally made with the natural sciences in mind. The idea was to point out some of the factors that may contribute towards ‘bad’ science, and I’ve tried to make it clear that the presence of some of them does not automatically denote that the research should be written off. It’s very much directed at a general audience, rather than for academics – I’m sure there are plenty more detailed articles out there for those seriously critiquing research methods, and this isn’t intended to be a comprehensive analysis. Obviously, dependent on the branch of science, some of these points may be less relevant, but it would take a little longer for me to make one for each branch of science 🙂

Hopefully this answers a few of your enquiries, and the latest version of the graphic addresses a few of your concerns.

Thank you and I apologize. I saw the updated version after having posted my comments. Yes you have addressed several of those comments already 🙂 Thank you!

No problem 🙂 Glad it’s corrected some of the ambiguities!

[…] A lot of the things I posted recently are predicated on the idea that people can assess critically certain claims that are made against or pro certain scientific findings: A Rough Guide to Spotting Bad Science. […]

[…] Once again I'd have to strongly disagree, you seem to have overlooked those compounds called amino acids which go into peptides to make proteins. If you're not exporting N in this form and only exporting carbohydrates (C, H, O) then you really are growing some shitty produce. As for many of your other claims, some of my (and others') concerns with them can be summarised at the link below A Rough Guide to Spotting Bad Science | Compound Interest […]

Good job. This applies to pretty much any intellectual endeavor. I will recommend it to my history students.

Sadly, the majority of dietary scientific studies fall victim to 2, 3, 4, 9, 10 and 11, which is why you keep reading conflicting studies all the time. So while a lot of science is done well in many disciplines, not all disciplines follow scientific principles equally.

I rarely comment directly, but this is something that bothers me a lot! So, here are two more I typically find in junk “science”: unpublished data. Often just the results of statistical analysis are given without any access to the data, which leads to my next one: “models” used without explanation. Without the data, and the math for the model, no one can even attempt to replicate the results, and it’s difficult to see how they manipulated the numbers to get their sensational headlines. PLOS ONE has even now started requiring all data to be public!

Hello.

I think this graphic is great!
Would you allow me to translate it into Japanese?
I’m running a blog about bad science and anti-vaccination rumors in Japanese, and trying to introduce helpful information for spotting bad science by translating it from English to Japanese.
I’d like to share this great graphic with Japanese people.

Sure! If you like, translate the text then send it over, and I can plug it into the original graphic to preserve the formatting and font. Drop a message to my email through the about page and we can discuss it further!

Thank you!

It will be a great help for Japanese people.
Well, I’ll post my translation on my blog and have some discussions with my friends to refine the sentences.
Then I will send it to you.

The first thing to grab my attention is the incorrect pluralisation of “3. Conflict of Interests” – this should be “3. Conflicts of Interest”. Please make this right! 🙂
Everything else is great, so helpful to the masses, let’s hope they read it!

Hi Andi, thanks for the comment! Regarding ‘Conflict of Interests’, this is in fact grammatically correct – I even went and double checked to be sure when making the graphic, and it’s used on the European Research Council’s site. Obviously, ‘conflicts of interest’ is correct too, but I’m going to stick with the one already on the graphic for the sake of convenience 🙂

I would be a little careful about “speculative language.” There are times a conclusion is tentative because it is necessarily tentative. Medicine, in particular, often requires multiple studies to reach a real conclusion. It’s more a problem when the speculation or hedging is taken as gospel truth.

As a social scientist, I’d say this is applicable to “normal science” and positivist science, but much less so to the post-positivist science that many of us engage in.

Could you explain that? What is “normal science” vs post-positivist science for you? As I understand it, post-positivism embraces a probabilistic concept of knowledge, which is very much supported by these points, especially those that touch on statistical aspects such as sample size.

Perhaps in human medicine, where trials are expensive, retrospective research compares treatments, but two other glaring issues arise here. SELECTION: why did each group get assigned to treatment A or B? And FOLLOW-UP: when one looks for late effects of a new treatment, they may not appear for MANY years, even decades. Reporting events at 3 years when the worst effect may occur at 10 or 15 years was just seen in the Journal of Clinical Oncology, with a shorter, less expensive treatment looking just a little worse at three years, when the peak effects – urinary retention, urethral fibrosis – may not emerge for 5 or 10 years.

[…] reactions we come across on a day-to-day basis. It’s very charming. I particularly liked this infographic on how to spot bad science. Of course it’s the nature of research work that conflicts of interest arise, that results […]

To be scientifically correct, critics of “bad science theories” should apply the principles they recommend to bring proof of what they denigrate.
Misinformation, ridicule and censorship are not scientific arguments.

[…] Guia Rápido para Detectar Ciência Malfeita (A Rough Guide to Spotting Bad Science): a poster (PDF, in English) containing 12 tips for guarding against malicious bad science. Cherry-picked results, vague wording, sensationalist headlines, missing control groups, and other questionable tactics are laid bare in this quick guide for the Good Scientist. […]

I’m not sure this was meant for the average Joe or Jane, and this is just my own interpretation, but this graphic helps clarify a lot of what I find grossly flawed with the teacher evaluation systems that are running rampant through the public school systems today. A little attention to the scientific method would be appreciated. With your permission, I will post it in my middle school Health class for our future thoughtful citizens to ponder.

[…] First, a journalist’s instinct is to question. With careful thought, it is not difficult to spot flaws in data. Two resources are recommended for inspiration. Being a Data Skeptic – a free ebook published by O’Reilly, which mainly discusses a problem data scientists care about: quantifying a model is as important as describing it accurately. The second is A Rough Guide to Spotting Bad Science – on the flawed data analysis found in the worst scientific research. […]

[…] It attacks trees by feeding on sap and (in classic invading alien style) harms them further by excreting lots of a fluid, coating leaves and stems. According to the Pennsylvania Department of Agriculture the spotted lanternfly “has the potential to greatly impact the grape, fruit tree and logging industries.” Lots more info here, including what to do if you see eggs, catch adults, or discover a major invasion site. Top image: Lawrence Barringer, Pennsylvania Department of Agriculture, bottom two images: Holly Raguza, Pennsylvania Department of Agriculture. A Rough Guide to Spotting Bad Science. […]

Funny thing… as I was discussing vaccinations with someone, I came upon this piece. This, in a nutshell, describes Jenner’s research on smallpox and the vaccine… almost exactly. Amazing. Oh, and the current state of some ‘clinical trials’, especially the ‘cherry-picked’ results.

Andy, thanks for this. I just found it recently, and I shared a link to this page on Google+ with the Science on Google+ community.

Thanks for sharing! It’s great that, even a year on, this post is still proving popular and generating discussion. Whether people agree with every individual point or not is largely immaterial, but it’s good that it’s getting people to consider and debate what constitutes ‘bad science’.

[…] Notice: This information should not be used to make a decision regarding vaccinations. It is imperative to consult with a qualified physician who will understand your (or your child’s) medical conditions, while taking into account any concerns you have regarding the safety of vaccinations. When conducting personal research about vaccinations, the source of information is of critical importance. Be very cautious of information obtained online. Anyone can write anything they want online, whether it is proven, disproven, or even totally false. Personal blogs, general magazine articles, and biased websites (sites that strongly lean towards either side) should be excluded. This guide is a great starting point for spotting ‘bad science’.  […]

[…] Both Lila and Carly felt strongly enough about the recent events to write about them. And what they wrote inspired me to also say something here. As another ‘health’ blogger, I have always felt that it is important to be clear with my readers about the information I provide. You’ll find paragraphs peppered through my writing, reminding you that every person with Dysautonomia is different. You’ll have heard me urge you to seek the advice of your own medical professionals. But I am also aware that most of the progress I have made with medication and treatment has come about because, like you, I am a reader of information. I have spent years searching for the piece of the puzzle that might help. I found my piece, and I recognise that sometimes it can just be a sentence, somewhere, that mentions that one word you might need to send you searching on a new tangent. I sincerely hope you are able to find your puzzle piece. I hope that if the piece you need is a similar shape to mine, you’ll find easy-to-understand words all about it, right here. If not, keep on looking, keep on seeking. And most importantly, keep on verifying what you read. […]

This article was written on the money. The net is a spawning ground for bad science. Let’s put it this way: science is tadpoles, frogs, mosquitos, asthma, molecular science, cures for disease, inventions, space travel. If it says “gay men make more money”, that is just pure lies. That isn’t science. Good job.

[…] Bruno Latour: description of Controversies (course, Mines-ParisTech). Tools for mapping controversies. How can you tell if scientific evidence is strong or weak? – 8 ways to be a more savvy science reader. A Rough Guide to Spotting Bad Science. […]

Missing:

– Original data not available
– Negative results not reported
– No independent confirmation in a peer-reviewed journal
– Use of arbitrary frequentist stats, rather than Bayesian stats
– (closely related) Abuse of p-values
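
On the last two points, here is a short, hypothetical Python sketch (again just an illustration, using scipy’s standard two-sample t-test) of one common form of p-value abuse, multiple comparisons: test enough unrelated outcomes on pure noise and roughly one in twenty will come out ‘significant’ at p < 0.05 by chance alone.

```python
# Hypothetical multiple-comparisons demo: 20 outcomes per study, no real
# effects anywhere, yet "significant" results still appear by chance.
import random
from scipy.stats import ttest_ind

random.seed(1)
OUTCOMES_PER_STUDY = 20
STUDIES = 500

false_positives = 0
for _ in range(STUDIES):
    for _ in range(OUTCOMES_PER_STUDY):
        group_a = [random.gauss(0, 1) for _ in range(30)]
        group_b = [random.gauss(0, 1) for _ in range(30)]  # same population
        _, p = ttest_ind(group_a, group_b)
        if p < 0.05:
            false_positives += 1

print(f"'Significant' findings per study with zero real effects: "
      f"{false_positives / STUDIES:.2f}")
```

Reporting only the outcomes that happened to cross the threshold, without correcting for how many were tested, is the kind of abuse the comment above is pointing at.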

I would like to share this with some Mozambican colleagues – is there a Portuguese version available or, could I translate it?

Another reason why access to all original research should be free… The saddest part about pay walls is that the journals don’t put the profit back into science funding, they’re just paying for some fat guy to sit on a beach.

[…] It’s a topic that I’ve heard brought up among academics across disciplines at scientific conferences, at department-sponsored social events, and over coffee among the closest of colleagues. Bad science. Those two words conjure images in the minds of researchers of flawed methodologies, poorly constructed statistical tests, misinterpretations of data (both out of ignorance and out of arrogance), and sensationalized work featuring dubious conclusions that mislead the general public in their understanding of what it is that we do, how we do it, and what our data are actually capable of telling us. Most scholars can immediately think of a paper, press release, or in the worst of circumstances a particular colleague that exemplifies the meaning of “bad science” within their field. (Just for fun, here’s a nice infographic called A Rough Guide to Spotting Bad Science.) […]

[…] Here is my second-most favourite infographic of all time. It’s a Rough Guide to Spotting Bad Science. It includes a few points on how to spot bad media coverage (see #1), as well as several on how to spot if the study itself wasn’t awesome. […]

Comments are closed.
