
Psychometric performance of the Mental Health Implementation Science Tools (mhIST) across six low- and middle-income countries

  • Luke R. Aldridge
  • Christopher G. Kemp
  • Judith K. Bass
  • Kristen Danforth
  • Jeremy C. Kane
  • Syed U. Hamdani
  • Lisa A. Marsch
  • José M. Uribe-Restrepo
  • Amanda J. Nguyen
  • Paul A. Bolton
  • Laura K. Murray
  • Emily E. Haroz

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

Background: Existing implementation measures developed in high-income countries may have limited appropriateness for use within low- and middle-income countries (LMIC). In response, researchers at Johns Hopkins University began developing the Mental Health Implementation Science Tools (mhIST) in 2013 to assess priority implementation determinants and outcomes across four key stakeholder groups (consumers, providers, organization leaders, and policy makers), with a dedicated version of the scales for each group. These were field tested and refined in several contexts, and criterion validity was established in Ukraine. The Consumer and Provider mhIST have since grown in popularity in mental health research, outpacing psychometric evaluation. Our objective was to establish the cross-context psychometric properties of these versions and inform future revisions.

Methods: We compiled secondary data from seven studies across six LMIC (Colombia, Myanmar, Pakistan, Thailand, Ukraine, and Zambia) to evaluate the psychometric performance of the Consumer and Provider mhIST. We used exploratory factor analysis to identify dimensionality, factor structure, and item loadings for each scale within each stakeholder version. We also used alignment analysis (i.e., multi-group confirmatory factor analysis) to estimate measurement invariance and differential item functioning of the Consumer scales across the six countries.

Results: All but one scale within the Provider and Consumer versions had Cronbach's alpha greater than 0.8. Exploratory factor analysis indicated that most scales were multidimensional, with factors generally aligning with a priori subscales for the Provider version; the Consumer version has no predefined subscales. Alignment analysis of the Consumer mhIST indicated a range of measurement invariance for scales across settings (R² = 0.46 to 0.77). Several items were identified for potential revision due to participant nonresponse or low or cross-factor loadings. Only one item, which asked consumers whether their intervention provider was available when needed, showed differential item functioning in both intercept and loading.

Conclusion: We provide evidence that the Consumer and Provider versions of the mhIST are internally valid and reliable across diverse contexts and stakeholder groups for mental health research in LMIC. We recommend that the instrument be revised based on these analyses and that future research examine its utility by linking measurement to other outcomes of interest.
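The internal-consistency results above are reported as Cronbach's alpha. As a point of reference for that statistic (not code from the paper), the sketch below shows the standard formula applied to an item-score matrix; the function name and toy response data are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: five respondents answering a 4-item Likert-type scale
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 1, 2],
    [4, 5, 4, 4],
])
alpha = cronbach_alpha(scores)  # ≈ 0.95 for this consistent toy data
```

Values above 0.8, as reported for nearly all mhIST scales, are conventionally read as good internal consistency.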

Original language: English
Article number: 54
Journal: Implementation Science Communications
Volume: 3
Issue number: 1
State: Published - Dec 2022

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Keywords

  • Implementation measurement
  • Low- and middle-income countries
  • Mental health
  • Psychometrics
