Can you spot a women's health fake? Interpreting and understanding clinical data
Article #3 of Raising our voices, even louder! A women's health long-read content series
Welcome back to 'Raising our voices, even louder' – a content series that takes a deep dive into core women's health topics.
I've previously covered brain health and cycle-synced fitness. Now I'm turning my attention to misinformation.
Hang on, how is this a women's health issue, I hear you ask?
In recent years, we've seen a boom in products, solutions and therapies aimed at enhancing healthcare for women. And while the majority of this innovation has positive intentions, it's fair to say some newcomers to the market have spotted a lucrative opportunity.
As FemTech goes mainstream, we're being bombarded by bold claims around 'evidence-based' science and user-led testing. But how can we distinguish quality science from clever marketing in this increasingly busy market?
As I'm sure you know, there is a huge gender health gap in medicine. It's only relatively recently that NIH-sponsored clinical trials have been legally required to include women, and many female-focused conditions are chronically under-researched. We've made good progress in the last thirty years, but there is still an awful lot we don't know about women's bodies. Many things can't yet be treated or solved with a pill or supplement – despite what some of the 'menowashing' brands might try to tell you.
The media also has a terrible habit of over-egging and oversimplifying research. Clickbait headlines can lead us to believe one thing when the data tells us something else entirely.
Meanwhile, social media is awash with unqualified opinions and misreported science.
I believe what's needed is more clinical and research literacy so that we can all better understand how to spot ‘bad’ science.
This article is here to help demystify clinical reporting and give you the critical thinking tools to question whether a product or solution is the real deal. In the era of fake news, we all need this type of education.
I've teamed up with Elena Mills, a medical affairs expert from The Salve Health Communications, to help cut through the jargon and separate fact from fiction.
Understanding the basics
Elena, let's start right at the beginning. Can you outline what we mean by a clinical trial and how they work?
"A clinical trial is a research study conducted with human participants to evaluate the effectiveness and tolerability (safety) of new medical treatments, drugs, or devices. Clinical trials follow a rigorous design (known as the trial protocol) and are typically conducted in phases, starting with (sometimes) animals and small groups of healthy human participants to assess safety and how a drug works and is removed from the body. This is followed by larger groups of people with the disease under investigation to determine drug/device efficacy and to monitor side effects.
Clinical trials are essential for advancing medical knowledge and improving patient care, providing critical data that helps regulatory authorities decide whether to approve new therapies and make them available for prescription and sale.
The gold standard of clinical trial design is the randomised controlled trial, or 'RCT'. In this type of study, participants are randomly assigned to either the treatment under investigation or the control treatment. Preferably, trials should also be 'double-blinded', meaning neither the participants nor the researchers know which group (treatment or control) each participant has been assigned to."
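If you're curious what randomisation actually looks like in practice, here's a toy illustration in Python, using 200 imaginary participants rather than any real trial: people are shuffled and split evenly between two coded arms, and the key linking each code to treatment or control is kept hidden until the trial is over.

import random

# Toy illustration only: 200 imaginary participants, shuffled and split
# evenly between two blinded arms coded 'A' and 'B'. The key that says
# which code is the real treatment is held separately, so neither the
# participants nor the researchers analysing the data know who got what.
participants = [f"P{i:03d}" for i in range(1, 201)]
random.shuffle(participants)
allocation = {pid: ("A" if i < 100 else "B") for i, pid in enumerate(participants)}
print(allocation["P001"])   # e.g. 'B' -- but is 'B' treatment or control? Only the unblinding key knows.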
Ok, so if a solution has used an RCT, we know we'll be looking at quality research. What about when research is published? Is there anything we can look for to know if it's had the scientific seal of approval?
"Importantly, clinical trials results are published in scientific journals and those publications are peer-reviewed. The peer review process involves experts (or 'peers') evaluating a research paper's quality, accuracy, and significance. This process ensures the study meets scientific standards and is credible before it gets published in a medical journal for public access."
What about the type of research conducted or which journals publish the research? Is there a hierarchy?
"There absolutely is. The most basic kind of study is a case study, looking at perhaps just 1 individual. It can highlight unusual or interesting cases in medicine but one (what we call n=1) isn't enough to draw any meaningful conclusions from. It's also why, when someone gives you their 'advice' that can often be unhelpful as it's just one person's (n=1) opinion, which might be quite biased or just plain wrong. I find, asking different people the same question can be helpful for evaluating something if you need to make a decision.
At the top of the hierarchy is the 'systematic review and meta-analysis', in which similar RCTs are identified through a systematic literature search and then analysed together (that's the meta-analysis bit!) to critically appraise the evidence on a specific topic. Cochrane Reviews are internationally recognised as the highest standard in evidence-based healthcare research, and they are published online in the Cochrane Library."
Bad news
Unfortunately, the media isn't great at reporting the hard facts of science. Headlines use clickbait to lure people in, and this often leads to misinformation. One striking example was a recent front-page headline claiming that HIIT-based exercise could lead to weight gain. On digging further into the science, it was clear the findings came from animal models, and specifically only from male mice. The reporting exaggerated the results on many levels.
What's the first thing we should consider when reading about a clinical trial, Elena?
"So, it depends whether you're reading the actual clinical trial publication itself or a summary that a journalist, for example, has written about it. In theory, it's possible that members of the public might end up reading the clinical trial publication, but they are pretty technical and written for a healthcare professional audience and are often only available behind a paywall or discoverable on platforms that are not well known to the public (such as PubMed). That said there are now often Plain Language Summaries (or PLS) that are summaries of the clinical trial written for the public so look out for those.
It's much more likely you'll have heard about a clinical trial or its results through secondary reporting, such as on the news, in a magazine, blog article, or via an influencer. This is always going to be just a summary of the original article. This is where it can really help to recognise the good, the bad and the ugly of the science reporting world.
Not all clinical trials are the same, though. A clinical trial can have many participants or few, and it might be well designed or poorly designed; similarly, the reporting of clinical trial results in the media can be done well or badly. Very often, results are cherry-picked or reported with no context or information about the study itself, and they can be distorted by intentional or unintentional 'spin', or by over-emphasis on beneficial (or indeed negative) effects.
However, there are some key features of well-designed clinical trials that can help separate science fact from science spin. One of the first things to look for when critiquing results is the number of people who were in the trial. Particularly in the beauty and nutrition space, you'll often hear about study results that sound amazing, but when you dig a bit deeper you realise it was a study conducted in, say, 20 people.
In reality, the absolute bare minimum number of participants you'd want to see in any study to be able to say anything clinically meaningful is 100-200.
For context, most clinical trials for new drugs need to show beneficial results with several thousand people before they're approved for use."
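To put those numbers in context, here's a rough, purely illustrative sketch in Python (using the statsmodels library and hypothetical planning values, not any real trial) of the kind of sample-size calculation statisticians run before a two-arm trial even starts:

from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: a 'moderate' standardised effect size (0.5),
# a 5% significance level and 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))       # roughly 64 people per group
print(round(n_per_group) * 2)   # roughly 128 in total, i.e. in that 100-200 range

Even under fairly generous assumptions, reliably detecting a moderate effect needs well over a hundred participants, which is why tiny studies rarely tell us much.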
Ok, so knowing all this, what's the best way to tell if the reporting is credible?
"Reporting of a clinical trial in the lay media is considered credible when it adheres to several key received principles and practices. I'd be looking for things like:
Context: Has the report offered sufficient background information on the clinical trial, including why it's important, its purpose, methodology, sample size, duration and so on? Also, does it add to what we already know?
Expert opinion: Does the report contain quotes from experts, such as doctors specialising in relevant disease areas and from a reputable institution, such as a well-known hospital or university?
Peer review: Was the original article published in a peer-reviewed journal?
Balance: Does the report present both the potential benefits and risks or limitations of a treatment or study and does it avoid sensationalising the results?
Transparency: Who funded the study, and are there any potential conflicts of interest? In other words, who stands to benefit from the results?"
Bad science
Despite the 'tech' in its name, the FemTech sector has seen a flurry of 'direct-to-consumer' lifestyle products such as supplements and beauty items. As women's health goes mainstream, we'll likely see more brands loosely positioning their products under this umbrella. But how can we be sure of the quality of these solutions? How can we spot those simply making a fast buck?
Can we do anything to better understand the efficacy of these non-medical products like vitamins?
"Medicines that need official regulatory authority approval have to have been assessed to meet stringent standards, but products like vitamins or beauty products, aren't usually subject to such rigorous testing but will often use very scientific-sounding health claims in their marketing.
Going back to the principles of good reporting and good trial design, the biggest red flag for me is the number of participants. If the number of people in a study is less than 100 (as it very often is), you can safely ignore it and consider it a marketing exercise rather than a robust, reliable clinical study or trial.
Also, if the study doesn't have a control or comparison group, I'd pretty much ignore it."
So, there we have it: the beginner's guide to navigating scientific data and clinical trials. To finish, Elena, can you summarise the three things people need to check when evaluating clinical trial data?
1. Study design: better study designs mean more reliable results, so look for whether the study is randomised, the presence (or absence!) of a control or comparator group, whether there has been blinding (double-blind is the gold standard) and the number of patients (more than 100, and preferably many more!).
2. Significance: statistical analysis is crucial for determining whether the results are likely due to the treatment or just random chance. Look for p-values below 0.05 and, beyond significance, look at the effect size (how large the difference is between the groups studied) and the confidence intervals (which give a range within which the true effect size is likely to fall). Narrower confidence intervals suggest more precise, and therefore more reliable, results (there's a short worked example after this list).
3. Source of funding: check who funded or sponsored the study and think about whose interests the results serve, i.e. who stands to benefit from them. Also check whether the study has been published in a reputable, peer-reviewed journal.
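To make the 'significance' point a bit more concrete, here's a short, purely illustrative example in Python. The scores are made up for a hypothetical two-arm trial; the point is simply to show what a p-value, an effect size and a 95% confidence interval look like when you calculate them.

import numpy as np
from scipy import stats

# Made-up outcome scores (e.g. symptom improvement) for a hypothetical
# two-arm trial with 120 people per group.
rng = np.random.default_rng(42)
treatment = rng.normal(loc=5.0, scale=2.0, size=120)
control = rng.normal(loc=4.0, scale=2.0, size=120)

# p-value: how likely is a difference this big if the treatment did nothing?
p_value = stats.ttest_ind(treatment, control).pvalue

# Effect size (Cohen's d): how large is the difference between the groups?
diff = treatment.mean() - control.mean()
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd

# 95% confidence interval for the difference in means.
se = np.sqrt(treatment.var(ddof=1) / 120 + control.var(ddof=1) / 120)
margin = stats.t.ppf(0.975, df=238) * se   # 238 = 120 + 120 - 2 degrees of freedom
ci_low, ci_high = diff - margin, diff + margin

print(f"p-value: {p_value:.4f}")
print(f"Cohen's d: {cohens_d:.2f}")
print(f"95% CI for the difference: {ci_low:.2f} to {ci_high:.2f}")

Reading the output: a p-value below 0.05 means the difference is unlikely to be pure chance, the effect size tells you whether that difference is big enough to matter in practice, and a narrow confidence interval tells you the estimate is reasonably precise.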
The reality of women's health is that we've been starved of solutions for so long that we're ecstatic when companies say they're ready to solve our problems. Plenty of brilliant founders and brands in this space are doing things right. But we need to be wary of throwing our money at products that are all style and no substance.
About The Salve
The Salve is a women-founded medical affairs consultancy and a Certified B Corporation™ designed to create positive impact in healthcare through remarkable and purposeful communications. The Salve work with global pharmaceutical and biotechnology companies to support a variety of scientific communication functions, delivering traditional medical writing services such as publication planning, meetings and events, and digital medical education programmes. 10% of their work is carried out pro bono for not-for-profit organisations. The Salve are also specialist strategic consultants, working particularly in the areas of women's health, climate, health and equity, and healthcare transformation. Follow their journey at www.thesalvehealth.com
Further reading and resources:
The Patient Information Forum: https://piftick.org.uk/finding-trusted-health-information/tips-and-guides/false-health-information-the-warning-signs/
National Academy of Medicine - Identifying Credible Sources of Health Information on Digital Platforms: https://nam.edu/programs/principles-for-defining-and-verifying-the-authority-of-online-providers-of-health-information/
YouTube Health’s commitment to credible health information: https://www.youtube.com/intl/ALL_uk/howyoutubeworks/product-features/health-information/