
Macdonald G, Livingstone N, Hanratty J, et al. The effectiveness, acceptability and cost-effectiveness of psychosocial interventions for maltreated children and adolescents: an evidence synthesis. Southampton (UK): NIHR Journals Library; 2016 Sep. (Health Technology Assessment, No. 20.69.)


Appendix 8 Checklist: quality of data within economic evaluations

Drummond’s checklist for the critical appraisal of economic evaluations

1 Was a well-defined question posed in answerable form?

1.1 Did the study examine both costs and effects of the service(s) or programme(s)?

1.2 Did the study involve a comparison of alternatives?

1.3 Was a viewpoint for the analysis stated and was the study placed in any particular decision-making context?

2 Was a comprehensive description of the competing alternatives given (i.e. can you tell who, did what, to whom, where and how often)?

2.1 Were any important alternatives omitted?

2.2 Was a do-nothing alternative considered, or should one have been?

3 Was the effectiveness of the programmes or services established?

3.1 Was this done through a randomised controlled clinical trial? If so, did the study protocol reflect what would happen in regular practice?

3.2 Was effectiveness established through an overview of clinical studies (systematic review/meta-analysis)?

3.3 Were observational data or assumptions used to establish effectiveness? If so, what are the potential biases in results?

4 Were all the important and relevant costs and consequences for each alternative identified?

4.1 Was the range wide enough for the research question at hand?

4.2 Did it cover all relevant viewpoints? (possible viewpoints include the community or social viewpoint, and those of patients and third-party payers)

4.3 Were capital costs, as well as operating costs, included?

5 Were costs and consequences measured accurately in appropriate physical units (e.g. hours of nursing time, number of physician visits, lost work days, gained life-years)?

5.1 Were any of the identified items omitted from measurement? If so, does this mean that they carried no weight in the subsequent analysis?

5.2 Were there any special circumstances (e.g. joint use of resources) that made measurement difficult? Were these circumstances handled appropriately?

6 Were costs and consequences valued credibly?

6.1 Were the sources of all values clearly identified? (possible sources include market values, patient preferences and views, policy-makers’ views and health professionals’ judgements)

6.2 Were market values employed for changes involving resources gained or depleted?

6.3 Where market values were absent (e.g. volunteer labour), or market values did not reflect actual values (such as clinic space donated at a reduced rate), were adjustments made to approximate market values?

6.4 Was the valuation of consequences appropriate for the question posed (i.e. has the appropriate type or types of analysis – cost-effectiveness, cost–benefit, cost–utility – been selected)?

7 Were costs and consequences adjusted for differential timing?

7.1 Were costs and consequences that occur in the future ‘discounted’ to their present values?

7.2 Was any justification given for the discount rate used?

8 Was an incremental analysis of costs and consequences of alternatives performed?

8.1 Were the additional (incremental) costs generated by one alternative over another compared with the additional effects, benefits, or utilities generated?

9 Was allowance made for uncertainty in the estimates of costs and consequences?

9.1 If data on costs or consequences were stochastic, were appropriate statistical analyses performed?

9.2 If a sensitivity analysis was employed, was justification provided for the ranges of values (for key study parameters)?

9.3 Were study results sensitive to changes in the values (within the assumed range for sensitivity analysis, or within the confidence interval around the ratio of costs to outcomes)?

10 Did the presentation and discussion of study results include all issues of concern to users?

10.1 Were the conclusions of the analysis based on some overall index or ratio of costs to consequences (e.g. cost-effectiveness ratio)? If so, was the index interpreted intelligently or in a mechanistic fashion?

10.2 Were the results compared with those of others who have investigated the same question? If so, were allowances made for potential differences in study methodology?

10.3 Did the study discuss the generalisability (external validity) of the results to other settings and patient/client groups?

10.4 Did the study allude to, or take account of, other important factors in the choice or decision under consideration (e.g. distribution of costs and consequences, or ethical issues)?

10.5 Did the study discuss issues of implementation, such as the feasibility of adopting the ‘preferred’ programme given existing financial or other constraints, and whether any freed resources could be redeployed to other worthwhile programmes?
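
Illustrative notes on items 7 and 8

Item 7.1 asks whether future costs and consequences were discounted to present values. As a minimal sketch, using notation introduced here purely for illustration, a stream of costs C_t incurred t years in the future is converted to a present value PV using the chosen discount rate r:

    PV = \sum_{t=0}^{T} \frac{C_t}{(1 + r)^{t}}

For example, assuming an illustrative rate of r = 3.5% (often used as the reference-case rate in UK appraisals, although the rate actually adopted should be justified, as item 7.2 asks), a cost of £1000 incurred 2 years from now has a present value of approximately £1000/1.035^2 ≈ £933.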
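
Item 8.1 concerns incremental analysis. A minimal sketch of the comparison it describes, again with notation introduced only for illustration, is the incremental cost-effectiveness ratio (ICER) of an intervention (subscript 1) against its comparator (subscript 0):

    ICER = \frac{C_1 - C_0}{E_1 - E_0} = \frac{\Delta C}{\Delta E}

where C denotes cost and E the chosen measure of effect (e.g. quality-adjusted life-years in a cost–utility analysis). The sensitivity analyses addressed by items 9.1–9.3 typically re-evaluate such a ratio across the assumed ranges of key parameters (including the discount rate above) to establish whether the study's conclusions are robust.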

Copyright © Queen’s Printer and Controller of HMSO 2016. This work was produced by Macdonald et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.