Supplementary table: TRIPOD-Cluster checklist of items to include when reporting a study developing or validating a multivariable prediction model using clustered data


#   Description   Page #
Title and abstract
1 Identify the study as developing and/or validating a multivariable prediction model, the target population,
and the outcome to be predicted.
2 Provide a summary of research objectives, setting, participants, data source, sample size, predictors,
outcome, statistical analysis, results, and conclusions.*
Introduction
3a Explain the medical context (including whether diagnostic or prognostic) and rationale for developing or
validating the prediction model, including references to existing models, and the advantages of the study
design.*
3b Specify the objectives, including whether the study describes the development or validation of the
model.*
Methods
4a Describe eligibility criteria for participants and datasets.*
4b Describe the origin of the data, and how the data were identified, requested, and collected.
5 Explain how the sample size was arrived at.*
6a Define the outcome that is predicted by the model, including how and when assessed.*
6b Define all predictors used in developing or validating the model, including how and when measured.*
7a Describe how the data were prepared for analysis, including any cleaning, harmonisation, linkage, and
quality checks.
7b Describe the method for assessing risk of bias and applicability in the individual clusters (eg, using
PROBAST).
7c For validation, identify any differences in definition and measurement from the development data (eg,
setting, eligibility criteria, outcome, predictors).*
7d Describe how missing data were handled.*
8a Describe how predictors were handled in the analyses.
8b Specify the type of model, all model-building procedures (eg, any predictor selection and penalisation),
and method for validation.*
8c Describe how any heterogeneity across clusters (eg, studies or settings) in model parameter values was
handled.
8d For validation, describe how the predictions were calculated.
8e Specify all measures used to assess model performance (eg, calibration, discrimination, and decision
curve analysis) and, if relevant, to compare multiple models.
8f Describe how any heterogeneity across clusters (eg, studies or settings) in model performance was
handled and quantified.
8g Describe any model updating (eg, recalibration) arising from the validation, either overall or for
particular populations or settings.*
9 Describe any planned subgroup or sensitivity analysis (eg, assessing performance according to sources of
bias, participant characteristics, setting).
Results
10a Describe the number of clusters and participants from data identified through to data analysed. A flow
chart may be helpful.*
10b Report the characteristics overall and where applicable for each data source or setting, including the key
dates, predictors, treatments received, sample size, number of outcome events, follow-up time, and
amount of missing data.*
10c For validation, show a comparison with the development data of the distribution of important variables
(demographics, predictors, and outcome).
11 Report the results of the risk of bias assessment in the individual clusters.
12a Report the results of any across-cluster heterogeneity assessments that led to subsequent actions during
the model’s development (eg, inclusion or exclusion of particular predictors or clusters).
12b Present the final prediction model (ie, all regression coefficients, and model intercept or baseline estimate
of the outcome at a given time point) and explain how to use it for predictions in new individuals.*
13a Report performance measures (with uncertainty intervals) for the prediction model, overall and for each
cluster.

13b Report results of any heterogeneity across clusters in model performance.
14 Report the results from any model updating (including the updated model equation and subsequent
performance), overall and for each cluster.*
15 Report results from any subgroup or sensitivity analysis.
Discussion
16a Give an overall interpretation of the main results, including heterogeneity across clusters in model
performance, in the context of the objectives and previous studies.*
16b For validation, discuss the results with reference to the model performance in the development data, and
in any previous validations.
16c Discuss the strengths of the study and any limitations (eg, missing or incomplete data, non-representativeness, data harmonisation problems).*
17 Discuss the potential use of the model and implications for future research, with specific view to
generalisability and applicability of the model across different settings or (sub)populations.*
Other information
18 Provide information about the availability of supplementary resources (eg, study protocol, analysis code,
datasets).*
19 Give the source of funding and the role of the funders for the present study.
This checklist is taken from Debray TPA, Collins GS, Riley RD, et al. Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist. BMJ 2022;378:e071018. doi:10.1136/bmj-2022-071018.
PROBAST=prediction model risk-of-bias assessment tool.
*Item text is an adaptation of one or more existing items from the original TRIPOD (transparent reporting of a multivariable
prediction model for individual prognosis or diagnosis) checklist.
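As a hypothetical sketch of what items 8c and 12b ask for (the model form, symbols, and predictors below are illustrative assumptions, not taken from the checklist or the TRIPOD-Cluster paper), a model developed on clustered data is often presented as a random-intercept regression. For a binary outcome $Y$ and two predictors $x_1$ and $x_2$:

$$\operatorname{logit}\{\Pr(Y_{ij}=1)\} = \alpha_j + \beta_1 x_{1ij} + \beta_2 x_{2ij}, \qquad \alpha_j \sim \mathcal{N}(\alpha, \tau^2),$$

where $\alpha_j$ is the intercept of cluster $j$ and $\tau^2$ quantifies between-cluster heterogeneity (item 8c). A predicted risk for a new individual in a setting not represented in the development data would then use the average intercept $\hat{\alpha}$ (or a recalibrated intercept, item 8g):

$$\hat{p} = \frac{1}{1 + \exp\{-(\hat{\alpha} + \hat{\beta}_1 x_{1} + \hat{\beta}_2 x_{2})\}}.$$

Reporting $\hat{\alpha}$, the $\hat{\beta}$ coefficients, and $\hat{\tau}^2$ together is one way to present the final model and explain its use in new individuals (item 12b).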
