I provide consultation in my areas of expertise to individuals, research teams, and organizations.  This expertise includes:

  • Career exploration (how to help people figure out what occupation and education/training is right for them)
  • Help-seeking (treatment utilization, stigma, barriers to care, healthcare access, psychosocial factors)
  • Factor analysis (EFA, CFA, bifactor)
  • Structural equation modeling (SEM, measurement equivalence/invariance testing)
  • Scale development, refinement, shortening, and validation
  • Psychometrics (reliability, validity, utility)
  • Survey design and administration (study advertising and data collection via the internet, ResearchMatch.org, Mechanical Turk, SONA; use of REDCap and Qualtrics survey platforms)

I am particularly interested in providing this expertise in the context of interprofessional, extramurally-funded research.  Contact me and we can discuss the possibilities.

Below, I provide additional detail about three high-demand consultation topics: career exploration, instrument development, and bifactor analysis.

Career Exploration

Most of us are never taught how to systematically explore career paths so that we can pick the one that fits our interests, talents, and life situation.  My profession of counseling psychology has been a historical and modern leader in using evidence-based strategies for helping people figure out what occupation and education/training is the best fit for them.  My professional training and experience have culminated in the development of the Systematic Career Exploration Approach (SCEA)©.  The SCEA is a self-guided version of the professionally-guided approach that I’ve used to help hundreds of people explore career paths.  I bring this experience to bear by providing consultation on how to facilitate career exploration.

Instrument Development

It is easy to create a set of items, call it a scale, and add the scores of all items together to create a total score.  It is much harder to create an instrument that provides reliable and valid scores that truly measure the thing you are trying to measure.

This is why there is a whole field called Psychometrics, which is dedicated to the objective measurement of knowledge, abilities, attitudes, personality traits, educational achievement, and other psychological constructs.  My program of research draws upon best practices in psychometrics to develop and refine instruments measuring everything from stigma to spirituality.  Having learned a good deal about the do’s and don’ts of measurement, I now offer consultation in this area.

Here is a list of situations in which you may find it helpful to enlist my expertise.  You are trying to…

  • determine whether existing instruments are “valid and reliable enough” to use, or whether a new, better instrument should be created
  • develop a new multi-item instrument grounded in extant research and theory
  • develop a robust item pool that measures all aspects of the construct’s content domain
  • pick a response format (e.g., dichotomous, Likert, semantic differential) and the optimal number of response options
  • solicit expert feedback to establish content evidence of validity
  • pilot test the instrument with the population of interest to ensure respondents can clearly comprehend the instructions, format, and item content
  • figure out how many factors the instrument has (see the parallel analysis sketch following this list)
  • determine whether the construct measured by the instrument is truly multidimensional, or just multifaceted and “essentially unidimensional” (in which case bifactor methods are needed; see Bifactor Analysis section below)
  • select the best items to measure each factor, while balancing reliability, validity, and brevity considerations
  • investigate the internal consistency (e.g., Cronbach’s alpha, omega) and test-retest reliability of the scores (see the reliability sketch following this list)
  • provide evidence that your subscale scores are uniquely reliable and provide added value beyond the total score
  • use extant literature to articulate a nomological net for the construct the instrument is designed to measure and to generate testable hypotheses regarding the instrument scores’ relationship with theoretically-related variables
  • provide concurrent, predictive, incremental, convergent, or divergent evidence of validity for the instrument’s scores
  • justify scale development decisions in response to reviewer criticism
  • create a “short form” of an instrument that appropriately measures the construct, but with fewer items to reduce participant response burden
  • explore the possibility of developing a unidimensional short form of an originally-multidimensional instrument
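
To make the factor-enumeration step above concrete, here is a minimal sketch of Horn’s parallel analysis in Python.  It assumes only numpy; the data matrix X (respondents by items), the function name, and the default settings are illustrative placeholders, and in practice I tailor the number of replications and the comparison percentile to the dataset at hand.

  import numpy as np

  def parallel_analysis(X, n_iter=100, percentile=95, seed=0):
      # X: (n_respondents, n_items) matrix of item responses (placeholder).
      rng = np.random.default_rng(seed)
      n, k = X.shape
      # Eigenvalues of the observed inter-item correlation matrix, descending.
      observed = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
      # Eigenvalues from random normal data of the same dimensions.
      random_eigs = np.empty((n_iter, k))
      for i in range(n_iter):
          R = rng.standard_normal((n, k))
          random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
      threshold = np.percentile(random_eigs, percentile, axis=0)
      # Retain factors only while each observed eigenvalue exceeds its
      # random-data counterpart.
      n_factors = 0
      for obs, thr in zip(observed, threshold):
          if obs <= thr:
              break
          n_factors += 1
      return n_factors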
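Likewise, for the reliability step, here is a hedged sketch of Cronbach’s alpha computed from raw item responses and coefficient omega computed from a unidimensional factor solution.  The loadings and uniquenesses would come from whatever factor-analytic software is in use (e.g., Mplus output); the function and variable names are my own placeholders.

  import numpy as np

  def cronbach_alpha(X):
      # X: (n_respondents, n_items) matrix of item responses.
      k = X.shape[1]
      item_vars = X.var(axis=0, ddof=1)
      total_var = X.sum(axis=1).var(ddof=1)
      return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

  def omega_total(loadings, uniquenesses):
      # McDonald's omega for a unidimensional solution:
      # (sum of loadings)^2 / [(sum of loadings)^2 + sum of uniquenesses]
      common = loadings.sum() ** 2
      return common / (common + uniquenesses.sum())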

Bifactor Analysis

Bifactor analysis is a versatile tool that allows researchers to answer a host of crucial questions about the social scientific instruments they use in their research.  Watch the video of my invited presentation on Bifactor Analysis in Mplus to learn more.

Most of my current consultation centers on the use of bifactor analysis and ancillary bifactor measures (e.g., Explained Common Variance, Omegas) to develop, shorten, and improve psychological scales (i.e., instruments, tests, questionnaires).  Here’s an example of how I can provide bifactor consultation.

Suppose a group of researchers is attempting to develop a self-report scale to measure a psychological construct.  Often, researchers and clinicians want multidimensional scales that have multiple subscales measuring facets of the larger construct, as well as a total scale score that measures the overall construct.  However, modern standards in measurement (see the updated 2014 edition of The Standards for Educational and Psychological Testing for an excellent overview) require scale developers to provide evidence for the reliability and validity of both subscale scores and total scores.  In my role as consultant, I investigate the dimensionality (i.e., factor structure, internal structure) of the instrument, determine the reliability and validity of the various scores, and offer suggestions about how to strengthen the instrument.  To do so, I use a combination of the following techniques:

  • Exploratory Factor Analysis (both traditional and bifactor), guided by Parallel Analysis
  • Confirmatory Factor Analysis, which involves comparing unidimensional, correlated factors (e.g., two-factor oblique), second-order (i.e., higher-order), and bifactor models to determine which solution best represents the covariation across the scale items.  If the instrument best conforms to a bifactor structure, I proceed to draw upon the ancillary bifactor measures (see next).
  • Explained Common Variance (ECV), Percent of Uncontaminated Correlations (PUC), Individual Explained Common Variance (IECV), general vs. specific loadings, and Average Relative Parameter Bias (ARPB), to help clarify the unidimensionality versus multidimensionality of the instrument (a computational sketch of ECV, IECV, and PUC follows this list).  These indices influence which factor structure is appropriate for modeling the instrument and can foreshadow the potential utility of total and subscale scores.
  • Omega (ω), Omega Hierarchical (ωH), Omega Hierarchical Subscale (ωHS), and Percentage of Reliable Variance (PRV), to help determine the model-based reliability of the general and group factors (also sketched after this list), which has a direct bearing on whether it is appropriate to calculate and interpret raw subscale and/or total scores for the instrument.  Model-based reliability statistics offer distinct advantages over traditional internal consistency coefficients such as Cronbach’s alpha, particularly when multidimensional instruments are involved.
  • Concurrent/convergent/incremental evidence of validity analyses: by modeling the instrument within a bifactor CFA framework and examining the unique relationships between the general and specific latent factors and various criterion variables, I can further determine which latent factors account for unique variance in theoretically-related criterion variables, which has a direct bearing on the validity/utility of these factors.  For example, if a particular specific factor (akin to a subscale score) is unable to account for variance in theoretically-related variables beyond the variance accounted for by the general factor (akin to a total score), is that specific factor worth studying or targeting with interventions?
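
To illustrate the ancillary bifactor measures named above, here is a minimal Python sketch of ECV, IECV, and PUC computed from a standardized bifactor loading pattern.  The loadings would be taken from the fitted bifactor model (e.g., Mplus output); the function and argument names are placeholders of my own, not a published API.

  import numpy as np

  def bifactor_indices(gen, spec, groups):
      # gen:    (k,) item loadings on the general factor
      # spec:   (k,) each item's loading on its own specific factor
      # groups: (k,) integer label of the specific factor each item loads on
      g2, s2 = gen ** 2, spec ** 2
      # ECV: share of common variance explained by the general factor.
      ecv = g2.sum() / (g2.sum() + s2.sum())
      # IECV: the same ratio, computed item by item.
      iecv = g2 / (g2 + s2)
      # PUC: proportion of unique item correlations that reflect only the
      # general factor (i.e., pairs of items from different specific factors).
      k = len(gen)
      total_pairs = k * (k - 1) / 2
      within = sum(np.sum(groups == g) * (np.sum(groups == g) - 1) / 2
                   for g in np.unique(groups))
      puc = (total_pairs - within) / total_pairs
      return ecv, iecv, puc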
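In the same spirit, here is a hedged sketch of the model-based reliability indices, again computed from standardized bifactor loadings plus the item uniquenesses supplied by the fitted model:

  import numpy as np

  def omega_indices(gen, spec, groups, theta):
      # theta: (k,) item uniquenesses (error variances) from the solution.
      g_common = gen.sum() ** 2
      s_common = sum(spec[groups == g].sum() ** 2 for g in np.unique(groups))
      denom = g_common + s_common + theta.sum()
      omega = (g_common + s_common) / denom   # omega: total-score reliability
      omega_h = g_common / denom              # omega hierarchical (general factor)
      prv_general = omega_h / omega           # Percentage of Reliable Variance
      omega_hs = {}
      for g in np.unique(groups):
          m = groups == g
          og, osub = gen[m].sum() ** 2, spec[m].sum() ** 2
          # omega hierarchical subscale: subscale reliability after
          # controlling for the general factor.
          omega_hs[g] = osub / (og + osub + theta[m].sum())
      return omega, omega_h, prv_general, omega_hs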

Through the combined use of these techniques, I generate and describe initial evidence for the internal structure of the instrument and the reliability and validity of the total score.  In this example, I determine that the subscales offer little in the way of incremental utility over the total score, so I proceed to develop a short form version of the instrument (using IECVs; see the sketch below) that retains coverage of the content domain (i.e., content validity) while significantly reducing participant response burden (i.e., the time it takes to complete an instrument), which is essential for feasible use in clinical practice.  When examined via factor analysis, the short form demonstrates a cleanly unidimensional structure.
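
As a concrete but hypothetical illustration of that IECV-driven short-form step, the sketch below keeps the most general-factor-saturated items within each content facet.  The facet labels, the two-items-per-facet rule, and the .80 cutoff are assumptions for illustration, not fixed recommendations:

  import numpy as np

  def select_short_form(iecv, facets, n_per_facet=2, min_iecv=0.80):
      # iecv:   (k,) item-level ECV values from the bifactor solution
      # facets: (k,) content-facet label for each item
      keep = []
      for f in np.unique(facets):
          idx = np.where(facets == f)[0]
          ranked = idx[np.argsort(iecv[idx])[::-1]]  # highest IECV first
          keep.extend(int(i) for i in ranked[:n_per_facet]
                      if iecv[i] >= min_iecv)
      return sorted(keep)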

My use of bifactor modeling and ancillary bifactor measures is demonstrated in these peer-reviewed publications:

Hammer, J. H., McDermott, R. C., Levant, R. F., & McKelvey, D. K. (in press). Dimensionality, reliability, and validity of the Gender Role Conflict Scale-Short Form (GRCS-SF). Psychology of Men and Masculinity. DOI forthcoming. *PDF*

Hammer, J. H., & Lazar, A. (in press). Internal structure and criterion relationships for long and brief versions of the Intratextual Fundamentalism Scale (IFS) among Israeli Jews. Psychology of Religion and Spirituality. DOI forthcoming. *PDF*

Hammer, J. H., & Brenner, R. E. (in press). Disentangling gratitude: A theoretical and psychometric examination of the Gratitude Resentment and Appreciation Test-Revised Short (GRAT-RS). Journal of Personality Assessment. doi: 10.1080/00223891.2017.1344986 *PDF*

McDermott, R. C., Levant, R. F., Hammer, J. H., Hall, R., McKelvey, D. K., & Jones, Z. (in press). Further examination of the factor structure of the Male Role Norms Inventory-Short Form (MRNI-SF): Measurement considerations for women, men of color, and gay men. Journal of Counseling Psychology. doi: 10.1037/cou0000225 *PDF*

Hammer, J. H., & Toland, M. D. (in press). Internal structure and reliability of the Internalized Stigma of Mental Illness Scale (ISMI-29) and brief versions (ISMI-10, ISMI-9) among Americans with depression. Stigma and Health. doi: 10.1037/sah0000049 *PDF*

Hammer, J. H., & Vogel, D. L. (in press). Development of the Help-Seeker Stereotype Scale. Stigma and Health. doi: 10.1037/sah0000048 *PDF*

Brewster, M. E., Hammer, J. H., Sawyer, J., Eklund, A., & Palamar, J. (in press). Perceived experiences of atheist discrimination: Instrument development and evaluation. Journal of Counseling Psychology. doi: 10.1037/cou0000156 *PDF*