from The Academic Health Economists’ Blog at http://bit.ly/2ZtsFoX on September 2, 2019 at 12:03PM
Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ [PubMed] Published 28th August 2019
RCTs are the gold-standard primary study design for estimating treatment effects, but they are often far from perfect. The question is the extent to which their flaws make a difference to the results. Well, RoB 2 is your new best friend to help answer this question.
Developed by a star-studded team, RoB 2 updates the Cochrane Collaboration’s original risk of bias tool. Bias is assessed per outcome, rather than for the RCT as a whole. For me, this makes sense. For example, the primary outcome may be well reported, while a secondary outcome, which may be the outcome of interest for a cost-effectiveness model, is much less so.
Bias is considered across five domains, with the overall risk of bias usually corresponding to the worst risk of bias in any single domain. This overall judgement is then reflected in the evidence synthesis, for example via a stratified meta-analysis.
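The "worst domain wins" rule can be sketched in a few lines. This is an illustrative simplification, not the official algorithm (RoB 2 also allows several "some concerns" judgements to be elevated to high risk), and the domain labels are paraphrased:

```python
# Illustrative sketch of RoB 2's usual overall judgement rule:
# the overall risk of bias corresponds to the worst judgement
# across the five domains. Not the official tool's full logic.

RANK = {"low": 0, "some concerns": 1, "high": 2}

def overall_risk_of_bias(domain_judgements):
    """Return the worst judgement across the per-domain assessments."""
    return max(domain_judgements.values(), key=lambda j: RANK[j])

# Hypothetical assessment of one outcome (domain names paraphrased):
judgements = {
    "randomisation process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "high",
    "selection of the reported result": "low",
}
print(overall_risk_of_bias(judgements))  # high
```

Stratifying a meta-analysis by this overall judgement then amounts to grouping study results by the value this function returns.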
The paper is a great read! Jonathan Sterne and colleagues explain the reasons for the update and the process that was followed. Clearly, quite a lot of thought went into the types of bias and into developing the signalling questions that help reviewers assess them. The only downside is that the tool may take more time to apply, given that it is used per outcome. Still, I think that’s a price worth paying for more reliable results. Looking forward to seeing it in use!
Characteristics and methods of incorporating randomised and nonrandomised evidence in network meta-analyses: a scoping review. Journal of Clinical Epidemiology [PubMed] Published 3rd May 2019
In keeping with the evidence synthesis theme, this paper by Kathryn Zhang and colleagues reviews how the applied literature has been combining randomised and non-randomised evidence. The headline findings are that combining these two types of study designs is rare and, when it does happen, naïve pooling is the most common method.
I imagine that the limited use of non-randomised evidence is due to its risk of bias. After all, it is difficult to ensure that the measure of association from a non-randomised study is an estimate of a causal effect. Hence, it is worrying that the majority of network meta-analyses that did combine non-randomised studies did so with naïve pooling.
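To make "naïve pooling" concrete: it simply combines randomised and non-randomised estimates in one analysis with no down-weighting or bias adjustment, as in a fixed-effect inverse-variance meta-analysis. A minimal sketch, with entirely hypothetical numbers:

```python
# Minimal sketch of naive pooling: a fixed-effect inverse-variance
# meta-analysis that treats randomised and non-randomised estimates
# identically. All numbers below are hypothetical.

def naive_pool(estimates, std_errors):
    """Inverse-variance weighted average of effect estimates."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Two RCTs and one observational study (log odds ratios, say).
# The last estimate is non-randomised, yet gets the largest weight
# because it is the most precise -- the core worry with naive pooling.
estimates = [-0.30, -0.25, -0.60]
std_errors = [0.10, 0.12, 0.08]
pooled, se = naive_pool(estimates, std_errors)
print(round(pooled, 3), round(se, 3))
```

A precise but biased non-randomised study can dominate the pooled estimate, which is exactly why unadjusted pooling is worrying.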
This scoping review may kick start some discussions in the evidence synthesis world. When should we combine randomised and non-randomised evidence? How best to do so? And how to make sure that the right methods are used in practice? As a cost-effectiveness modeller, with limited knowledge of evidence synthesis, I’ve grappled with these questions myself. Do get in touch if you have any thoughts.
A cost-effectiveness analysis of shortened direct-acting antiviral treatment in genotype 1 noncirrhotic treatment-naive patients with chronic hepatitis C virus. Value in Health [PubMed] Published 17th May 2019
Rarely do we see a cost-effectiveness paper where the proposed intervention is both less costly and less effective, that is, in the controversial southwest quadrant of the cost-effectiveness plane. This paper by Christopher Fawsitt and colleagues is a welcome exception!
Christopher and colleagues looked at the cost-effectiveness of shorter treatment durations for chronic hepatitis C. Compared with the standard duration, the shorter treatment is less effective and hence yields fewer QALYs. But it is much cheaper to treat patients over the shorter duration and re-treat those who are not cured than to treat everyone for the standard duration. Hence, in the base case and in most scenarios, the shorter treatment is cost-effective.
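The decision logic in the southwest quadrant can be made concrete with net monetary benefit (NMB): a less costly, less effective option is cost-effective when the savings per QALY forgone exceed the threshold. The figures below are hypothetical and are not taken from the paper:

```python
# Hedged sketch of the southwest-quadrant decision rule via net
# monetary benefit (NMB = QALYs * threshold - cost). All figures
# are hypothetical, not from Fawsitt and colleagues.

def net_monetary_benefit(qalys, cost, threshold):
    return qalys * threshold - cost

threshold = 20_000  # e.g. GBP per QALY

standard = {"qalys": 12.00, "cost": 40_000}
shorter  = {"qalys": 11.95, "cost": 25_000}  # fewer QALYs, much cheaper

nmb_standard = net_monetary_benefit(standard["qalys"], standard["cost"], threshold)
nmb_shorter = net_monetary_benefit(shorter["qalys"], shorter["cost"], threshold)

# Equivalently: the ICER of standard care relative to the shorter
# course is the extra cost per extra QALY of staying with standard.
icer = (standard["cost"] - shorter["cost"]) / (standard["qalys"] - shorter["qalys"])

print(nmb_shorter > nmb_standard)  # True: savings outweigh the QALY loss
print(round(icer))                 # 300000, far above the threshold
```

Here sticking with standard care would cost £300,000 per QALY gained, so the shorter course is the cost-effective choice despite producing fewer QALYs.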
I’m sure that labelling a less effective, less costly option as cost-effective will have been controversial in some quarters. Some may argue that it is unethical to offer a worse treatment than the standard, even if it saves a lot of money. In my view, it is no different from funding better and more costly treatments, given that the extra spending is borne by other patients, who will necessarily have access to fewer resources.
The paper is beautifully written and is another example of an outstanding cost-effectiveness analysis with important implications for policy and practice. The extensive sensitivity analysis should provide reassurance to the sceptics. And the discussion is clever in arguing for the value of a shorter treatment duration in resource-constrained settings and for hard-to-reach populations. A must read!