
Critical appraisal and reporting 

  • Appraisal, reporting and dissemination of findings are important aspects of IPD meta-analysis projects.


  • It is important to examine the robustness of IPD meta-analysis results to potential biases.

  • Reporting guidelines should be followed, but dissemination needs to be broader than just a journal publication.

  • Existing IPD meta-analysis projects should be appraised for their quality.


Examining potential bias in IPD meta-analysis results


  • Publication-related biases hide relevant trials and data, often those with ‘negative’ findings (e.g. statistically non-significant results). As for any type of review, this could lead to IPD meta-analysis results being biased toward favourable effects.


  • Availability bias is a concern if IPD are obtained from only a subset of the trials from which they were requested, and the provision of IPD is linked to trial findings. This may also make the IPD meta-analysis results biased, although the direction of bias is hard to predict.


  • Selective outcome availability may also occur if trials that provide their IPD do not supply all the outcomes that were actually recorded, potentially leading to biased IPD meta-analysis results for some outcomes. Multivariate meta-analysis may help reduce this issue, as outcomes that are available can contribute information ('borrow strength') toward correlated outcomes that are missing (see the sketch below).
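
As a rough illustration of the multivariate idea, the following Python sketch (hypothetical numbers throughout, and a deliberately simple fixed-effect model rather than the random-effects models usually preferred in practice) jointly pools two correlated outcomes; the trials that made only outcome 1 available still contribute information toward the pooled estimate for outcome 2 through the assumed within-trial correlation.

```python
import numpy as np

# Hypothetical trial-level results (e.g. log hazard ratios) for two correlated
# outcomes; trials 4 and 5 made only outcome 1 available.
#          y1     y2    var1  var2  within-trial correlation
trials = [(-0.20, -0.15, 0.04, 0.05, 0.7),
          (-0.35, -0.30, 0.03, 0.04, 0.7),
          (-0.10, -0.05, 0.05, 0.06, 0.7),
          (-0.25,  None, 0.02, None, None),
          (-0.30,  None, 0.03, None, None)]

# Fixed-effect multivariate meta-analysis by generalised least squares:
# beta = (sum Xi' Si^-1 Xi)^-1 (sum Xi' Si^-1 yi), where Xi selects the
# outcomes that trial i actually reports.
W = np.zeros((2, 2))
z = np.zeros(2)
for y1, y2, v1, v2, rho in trials:
    if y2 is None:  # outcome 2 unavailable for this trial
        X, y, S = np.array([[1.0, 0.0]]), np.array([y1]), np.array([[v1]])
    else:           # both outcomes available, with within-trial covariance
        cov = rho * np.sqrt(v1 * v2)
        X, y, S = np.eye(2), np.array([y1, y2]), np.array([[v1, cov], [cov, v2]])
    Sinv = np.linalg.inv(S)
    W += X.T @ Sinv @ X
    z += X.T @ Sinv @ y

beta = np.linalg.solve(W, z)             # pooled effects for outcomes 1 and 2
se = np.sqrt(np.diag(np.linalg.inv(W)))  # corresponding standard errors
print("Pooled estimates:", beta.round(3), "SEs:", se.round(3))
```

In a real analysis the within-trial correlations would need to be obtained or estimated (one advantage of having IPD), and a random-effects multivariate model would usually be fitted in dedicated software.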


  • These issues may lead to small-study effects in the IPD meta-analysis, where smaller trials exhibit different (often greater) effect estimates than larger trials.


  • Small-study effects may be examined visually using a funnel plot, which displays study effect estimates (x-axis) against some measure of their precision (y-axis), e.g. the standard error of the treatment effect. Small-study effects are revealed by asymmetry in the plot (see the sketch below).

  • Small-study effects may also be due to factors that cause between-trial heterogeneity; for example, if smaller trials are conducted in populations and settings that genuinely lead to larger effect estimates. So evidence of asymmetry does not prove that publication, availability and/or selective reporting biases exist.
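
For illustration only, here is a minimal Python (matplotlib) sketch of a funnel plot using hypothetical trial effect estimates and standard errors; the grey lines are the pseudo 95% confidence limits around the pooled estimate that give the plot its funnel shape.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical trial treatment effect estimates (e.g. log odds ratios) and SEs.
effects = np.array([-0.45, -0.60, -0.20, -0.15, -0.35, -0.05, -0.55, -0.10])
ses     = np.array([ 0.30,  0.35,  0.10,  0.08,  0.20,  0.05,  0.40,  0.12])

# Fixed-effect (inverse-variance) pooled estimate, used as the reference line.
w = 1 / ses**2
pooled = np.sum(w * effects) / np.sum(w)

fig, ax = plt.subplots()
ax.scatter(effects, ses)
ax.axvline(pooled, linestyle="--", label="pooled estimate")

# Pseudo 95% confidence limits around the pooled estimate form the funnel.
se_grid = np.linspace(0.001, ses.max() * 1.1, 100)
ax.plot(pooled - 1.96 * se_grid, se_grid, color="grey")
ax.plot(pooled + 1.96 * se_grid, se_grid, color="grey")

ax.invert_yaxis()  # most precise (largest) trials appear at the top
ax.set_xlabel("Treatment effect estimate")
ax.set_ylabel("Standard error")
ax.legend()
plt.show()
```

Asymmetry can also be examined with regression-based tests (such as Egger's test), but, as noted above, asymmetry alone does not identify which mechanism caused it.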


  • The impact of availability bias can be investigated by utilising aggregate data from non-IPD trials in sensitivity analyses. The IPD and aggregate data are then combined, and the results compared with the main IPD meta-analysis results (as illustrated in the sketch below).

  • However, obtaining suitable aggregate data may be problematic, and the process may simply reinforce why IPD were required in the first place.
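
As a rough sketch of this kind of sensitivity analysis (hypothetical numbers throughout, and a simple common-effect inverse-variance model purely for illustration), trial-level estimates from the IPD trials are pooled with and without published aggregate results from the non-IPD trials, and the two answers compared:

```python
import numpy as np

def pool(effects, ses):
    """Common-effect (inverse-variance) pooled estimate and standard error."""
    w = 1 / np.asarray(ses) ** 2
    return np.sum(w * np.asarray(effects)) / np.sum(w), np.sqrt(1 / np.sum(w))

# Trial-level treatment effects (e.g. log odds ratios) and standard errors from
# the first stage of a two-stage IPD meta-analysis (hypothetical).
ipd_effects, ipd_ses = [-0.40, -0.25, -0.35, -0.30], [0.10, 0.12, 0.15, 0.09]

# Published aggregate results from trials that did not provide IPD (hypothetical).
agg_effects, agg_ses = [-0.05, -0.10], [0.20, 0.25]

main = pool(ipd_effects, ipd_ses)
sensitivity = pool(ipd_effects + agg_effects, ipd_ses + agg_ses)

print("IPD-only result:        %.3f (SE %.3f)" % main)
print("IPD + aggregate result: %.3f (SE %.3f)" % sensitivity)
# A marked shift between the two results would raise concern that availability
# bias may be influencing the main IPD meta-analysis.
```

In practice a random-effects model would often be used, and the published aggregate data may not align with the IPD analysis (e.g. different outcome definitions or adjustment), which is itself part of why IPD were sought.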


  • Other sensitivity analyses may be needed to examine bias concerns. In particular, analyses restricted to trials at low risk of bias investigate whether the meta-analysis conclusions are influenced by trial quality.


 Reporting and dissemination of IPD meta-analyses


  • To maximise uptake, impact and usefulness, IPD meta-analysis projects must be carried out and reported well.


  • Reporting and dissemination activity should be planned from the outset, with a broad range of potential stakeholders in mind, including patients and policy-makers.


  • Owing to the size and complexity of an IPD meta-analysis project, communicating its methods, results and implications can be challenging.

  • Methods and results should be reported transparently and in sufficient detail to allow readers to judge the research quality, and consequently the credibility of findings.


  • PRISMA-IPD, with its associated checklist and flow diagram, provides a framework to help authors describe essential elements of IPD meta-analysis design, conduct and findings in their journal article.


  • Disseminating and mobilising the knowledge gained from IPD meta-analysis can benefit from additional activity over and above publishing in academic journals.


  • Different types of outputs can be prepared and tailored for different audiences, including practitioners, guideline developers, policy makers, patient advocacy and support groups, and members of the public.


  • Effective use of social media, blogs, videos, press releases and lay summaries can enhance communication and reach a broader audience.


 Critical appraisal of existing IPD meta-analysis projects


  • Evidence suggests that not all published IPD meta-analysis projects are completed to the same standard.


  • Collecting, checking and synthesising IPD is more complex than for conventional meta-analyses that use existing aggregate data, with a myriad of potential choices and issues to address, including those relating to data collection and integrity, bias assessment and statistical modelling.


  • This complexity can make it challenging for researchers, clinicians, patients, policy makers, funders and publishers to judge the quality of planned or completed IPD meta-analysis projects.


  • CheckMAP is a tool for critically appraising a completed IPD meta-analysis project evaluating the effects of a treatment.

  • The tool enables stakeholders to recognise a high-quality IPD meta-analysis project, helping to ensure that the most robust IPD meta-analysis findings are used to inform policy, practice and subsequent research.
