Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Quantifying how diagnostic test accuracy depends on threshold in a meta-analysis. Statistics in Medicine [PubMed] Published 30th September 2019
A diagnostic test is often based on a continuous measure, e.g. cholesterol, which is dichotomised at a certain threshold to classify people as ‘test positive’, who should be treated, or ‘test negative’, who should not. In an economic evaluation, we may wish to compare the costs and benefits of using the test at different thresholds: for example, the cost-effectiveness of offering lipid-lowering therapy to people with cholesterol over 7 mmol/L versus over 5 mmol/L. This is straightforward if we have access to a large dataset comparing the test to its gold standard, from which we can estimate its sensitivity and specificity at various thresholds. It is far more challenging if we only have aggregate data from multiple publications.
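For intuition, here is a minimal sketch (my own toy Python example with simulated data, not from the paper) of how individual-level data let us read off sensitivity and specificity at whichever threshold we like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical individual-level data: a continuous marker (e.g. cholesterol, mmol/L)
# and the gold-standard disease status for each person.
n = 1000
diseased = rng.random(n) < 0.3
marker = np.where(diseased,
                  rng.normal(6.5, 1.0, n),   # diseased people tend to have higher values
                  rng.normal(5.0, 1.0, n))

def accuracy_at_threshold(marker, diseased, threshold):
    """Dichotomise the marker at `threshold` and compute sensitivity and specificity."""
    test_positive = marker >= threshold
    sensitivity = np.mean(test_positive[diseased])     # P(test+ | diseased)
    specificity = np.mean(~test_positive[~diseased])   # P(test- | not diseased)
    return sensitivity, specificity

for t in (5.0, 7.0):
    sens, spec = accuracy_at_threshold(marker, diseased, t)
    print(f"threshold {t} mmol/L: sensitivity={sens:.2f}, specificity={spec:.2f}")
```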
In this brilliant paper, Hayley Jones and colleagues report on a new method to synthesise diagnostic accuracy data from multiple studies. It consists of a multinomial meta-analysis model that can estimate how accuracy depends on the diagnostic threshold. This method produces estimates that can be used to parameterise an economic model.
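As a rough illustration of the multinomial idea (again my own sketch, not the model in the paper): within a study, patients can be counted into the ‘bins’ between consecutive reported thresholds, and accuracy at any threshold is a cumulative sum of those bin probabilities, which is what links estimates across thresholds. Assuming, purely for illustration, a normal distribution of the marker in diseased patients:

```python
import numpy as np
from scipy.stats import norm

# Toy illustration only: assume the marker in diseased patients is normally
# distributed; the reported thresholds cut it into multinomial categories.
thresholds = np.array([5.0, 6.0, 7.0])   # cut-offs reported by a study (mmol/L)
mu, sigma = 6.5, 1.0                     # assumed marker distribution in the diseased

# Probability of falling below each threshold, padded with 0 and 1 at the ends.
cdf = np.concatenate(([0.0], norm.cdf(thresholds, mu, sigma), [1.0]))
category_probs = np.diff(cdf)            # multinomial cell probabilities, sum to 1

# Sensitivity at each threshold is the probability of exceeding it.
sensitivity = 1.0 - norm.cdf(thresholds, mu, sigma)
print(dict(zip(thresholds.tolist(), np.round(sensitivity, 2))))
```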
These new developments in evidence synthesis are very exciting and really important for improving the data going into economic models. My only concern is that the model is implemented in WinBUGS, which few applied analysts use. Would it be possible to have a tutorial or, even better, to include this method among the online tools available on the Complex Reviews Support Unit website?
Early economic evaluation of diagnostic technologies: experiences of the NIHR Diagnostic Evidence Co-operatives. Medical Decision Making [PubMed] Published 26th September 2019
Keeping with the diagnostic theme, this paper by Lucy Abel and colleagues reports on the experience of the Diagnostic Evidence Co-operatives in conducting early modelling of diagnostic tests. These were established in 2013 to help developers of diagnostic tests link up with clinical and academic experts.
The paper discusses eight projects where economic modelling was conducted at an early stage of project development. It was fascinating to read about the collaboration between academics and test developers. One of the positive aspects was the buy-in of the developers; a less positive one was the pressure to produce evidence quickly, and evidence that supported the product.
The paper is excellent in discussing the strengths and challenges of these projects. Of note, there were challenges in mapping out a clinical pathway, selecting the appropriate comparators, and establishing the consequences of testing. Furthermore, they found that the parameters for treatment effectiveness were the key drivers of cost-effectiveness in many of the evaluations. This is not surprising, given that the benefits of a test usually lie in better informing management decisions rather than in its direct costs and benefits. It definitely resonates with my own experience of conducting economic evaluations of diagnostic tests (see, for example, here).
Following on from the challenges, the authors suggest areas for methodological research: mapping the clinical pathway, ensuring model transparency, and modelling sequential tests. They finish with advice for researchers doing early modelling of tests, although I’d say that it would be applicable to any economic evaluation. I completely agree that we need better methods for economic evaluation of diagnostic tests. This paper is a useful first step in setting up a research agenda.
A second chance to get causal inference right: a classification of data science tasks. Chance [arXiv] Published 14th March 2019
This impressive paper by Miguel Hernan, John Hsu and Brian Healy is an essential read for all researchers, analysts and scientists. Miguel and colleagues classify data science tasks into description, prediction and counterfactual prediction. Description is using data to quantitatively summarise some features of the world. Prediction is using the data to know some features of the world given our knowledge about other features. Counterfactual prediction is using the data to know what some features of the world would have been if something hadn’t happened; that is, causal inference.
I found the explanation of the difference between prediction and causal inference quite enlightening. It is not about the amount of data or the statistical/econometric techniques. The key difference is in the role of expert knowledge. Prediction requires expert knowledge to specify the research question, the inputs, the outputs and the data sources. Causal inference requires expert knowledge “also to describe the causal structure of the system under study”. This causal knowledge is reflected in the assumptions, in the approach to the data analysis, and in the interpretation of the results.
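To make the distinction concrete, here is a small simulated example of my own (not from the paper): a variable that predicts the outcome very well but has no causal effect on it, because both are driven by a common cause. A purely predictive model will happily use it; only causal knowledge about the confounding tells us that intervening on it would achieve nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured common cause, e.g. underlying disease severity.
severity = rng.normal(size=n)

# A biomarker driven by severity, with NO causal effect on the outcome.
biomarker = severity + rng.normal(scale=0.5, size=n)

# The outcome is caused by severity only (true causal effect of the biomarker = 0).
outcome = 2.0 * severity + rng.normal(scale=0.5, size=n)

# For prediction, the biomarker is very useful: it is strongly associated with
# the outcome, and a naive regression suggests a large 'effect'.
slope = np.polyfit(biomarker, outcome, 1)[0]
print(f"naive regression slope of outcome on biomarker: {slope:.2f}")  # ~1.6, not 0

# Only causal (expert) knowledge -- here, that severity confounds the association --
# tells us that intervening on the biomarker would leave the outcome unchanged.
```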
The section on implications for decision-making makes some important points. First, the goal of data science is to help people make better decisions. Second, predictive algorithms can tell us that decisions need to be made, but not which decision is most beneficial – for that, we need causal inference. Third, many of us work on complex systems about which we don’t know everything (the human body is a great example). Because we don’t know everything, it is impossible to predict with certainty, from routine health records alone, what the consequences of an intervention would be for a specific individual. At most, we can estimate the average causal effect, and even that requires assumptions. The relevance to the latest developments in data science is obvious, given all the hype around real-world data, artificial intelligence and machine learning.
I absolutely loved reading this paper and wholeheartedly recommend it to any health economist. It’s a must-read!