### What is Bifactor Analysis?

Bifactor analysis is a versatile tool that allows researchers to answer a host of crucial questions about the social scientific instruments they use in their research.

On November 1, 2016 at the University of Kentucky, I presented a 50-minute talk with Dr. Michael Toland on **Bifactor Analysis in *M*plus**. We walked the audience through what a bifactor model is, what follow-up ancillary bifactor measures are, what questions can be answered by using these techniques, how to run everything using *M*plus, how to interpret results from these techniques, and what conclusions you can draw from these results.

We used a real-life example from our 2016 *Stigma and Health* paper on the bifactor analysis of the Internalized Stigma of Mental Illness Scale. You can download the PowerPoint slideshow as well as the sample data and *M*plus syntax we used for this talk, so that you can replicate our results. You are encouraged to adapt this syntax for your own confirmatory bifactor analyses and ancillary bifactor measure calculations, but please cite us:

Hammer, J. H., & Toland, M. D. (2016, November). *Bifactor analysis in Mplus*. [Video file]. Retrieved from http://sites.education.uky.edu/apslab/upcoming-events/

Here’s the **video of our talk**, which included Q&A at the end:

### Bifactor-Related Calculators

Here are some useful bifactor-related calculators and resources. I discuss the PUC and ARPB Calculators in my Bifactor Analysis in *M*plus talk.

**Bifactor Indices Calculator** – This comprehensive Microsoft Excel-based calculator can be used to calculate IECV, Relative Parameter Bias, Absolute Relative Parameter Bias, ECV, Omega, OmegaH, OmegaHS, Relative Omega (PRV), and H, as well as model-level PUC, ECV, and Average Relative Parameter Bias. I recommend double-checking results from this calculator against results calculated via *M*plus syntax and/or the specific calculators below.
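As one way to double-check the calculator's output, the two most commonly reported indices, ECV and OmegaH, can be computed directly from standardized bifactor loadings. Here is a minimal Python sketch (a hypothetical helper, not the Excel calculator itself), assuming each item loads on the general factor and exactly one specific factor:

```python
def ecv_omega_h(general, specifics):
    """Compute ECV and omegaH from standardized bifactor loadings.

    general   -- general-factor loadings, one per item
    specifics -- list of lists: the loadings on each specific factor
                 (each item loads on exactly one specific factor)
    """
    gen_var = sum(l ** 2 for l in general)
    spec_var = sum(l ** 2 for fac in specifics for l in fac)
    # ECV: share of common variance explained by the general factor
    ecv = gen_var / (gen_var + spec_var)
    # omegaH: proportion of total-score variance attributable to the
    # general factor; with standardized loadings, each item's residual
    # variance is 1 - lambda_general^2 - lambda_specific^2
    resid = len(general) - gen_var - spec_var
    omega_h = sum(general) ** 2 / (
        sum(general) ** 2 + sum(sum(fac) ** 2 for fac in specifics) + resid
    )
    return ecv, omega_h
```

For example, six items with general loadings of .60 split across two specific factors with loadings of .40 yield ECV ≈ .69, a value you could compare against the Excel calculator's output for the same loadings.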

**Construct Replicability (H index) Calculator** – This Microsoft Excel-based calculator can be used to calculate the H index. Construct replicability, as measured by the H index of Hancock and Mueller (2001), is a statistical approach to evaluating how well a set of items represents a latent variable. It provides information on whether the SEM measurement model is suitable and replicable across studies. High H values (>.80) suggest a well-defined latent variable, which is more likely to be stable across studies. (Description adapted from Rodriguez et al., 2016, p. 230.)
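The H index itself is straightforward to compute from a factor's standardized loadings. A minimal Python sketch of Hancock and Mueller's (2001) formula, which you could use to verify the calculator:

```python
def construct_replicability(loadings):
    """Hancock & Mueller's (2001) H index from standardized loadings:
    H = 1 / (1 + 1 / sum(l^2 / (1 - l^2)))."""
    s = sum(l ** 2 / (1 - l ** 2) for l in loadings)
    return 1 / (1 + 1 / s)
```

For instance, six items loading .70 on a factor give H ≈ .85, just above the .80 benchmark; stronger loadings push H higher.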

**Percent of Uncontaminated Correlations (PUC) Calculator** – This Microsoft Excel-based calculator can be used to calculate the PUC statistic. “A concern researchers often have pertains to the biasing effects of forcing a unidimensional model to multidimensional data. When this is at issue, important diagnostic information can be derived from examining both ECV and PUC. Reise et al., (2013) and Bonifay et al. (2015) demonstrated that parameter bias is directly related to ECV, which, in turn, is moderated by the PUC… Simply stated, the research cited earlier indicates that as PUC increases, the magnitude of the ECV value becomes less and less important in determining the potential for bias when a unidimensional model is fit to multidimensional data.” (Rodriguez et al., 2016, pp. 231-232). When PUC values are higher than .80, general ECV values are less important in predicting bias; when PUC values are lower than .80, general ECV values greater than .60 and ωH > .70 suggest that the presence of some multidimensionality is not severe enough to disqualify the interpretation of the instrument as primarily unidimensional (Reise, Scheines, Widaman, & Haviland, 2013, p. 22). Rodriguez et al. (2016) say on page 232 that “when ECV is > .70 and PUC > .70, relative bias will be slight, and the common variance can be regarded as essentially unidimensional”.
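PUC is a purely structural quantity: the proportion of unique item correlations that are not “contaminated” by shared specific-factor variance, so it depends only on how many items load on each group factor. A minimal Python sketch (a hypothetical helper, assuming each item belongs to exactly one group factor):

```python
def puc(group_sizes):
    """Percent of Uncontaminated Correlations for a bifactor structure.

    group_sizes -- number of items loading on each specific (group) factor
    """
    k = sum(group_sizes)                  # total number of items
    total = k * (k - 1) / 2               # unique item-pair correlations
    # pairs of items sharing a group factor are "contaminated"
    contaminated = sum(n * (n - 1) / 2 for n in group_sizes)
    return (total - contaminated) / total
```

For example, 12 items split evenly across three group factors give PUC = 48/66 ≈ .73; spreading the same items across more, smaller group factors raises PUC.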

**Average Relative Parameter Bias (ARPB) Calculator** – This Microsoft Excel-based calculator can be used to calculate the ARPB statistic. Rodriguez, Reise, & Haviland (2016) say the following about the ARPB: “We then computed the relative parameter bias as the difference between an item’s loading in the unidimensional solution and its general factor loading in the bifactor (i.e., the truer model), divided by the general factor loading in the bifactor. We found that the average relative bias across items was 2%. According to Muthén, Kaplan, and Hollis (1987), parameter bias less than 10-15% is acceptable and poses no serious concern.” (p. 145).
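The computation described in that quote can be sketched in a few lines of Python (again a hypothetical helper, not the Excel calculator): take each item's loading from the unidimensional model and its general-factor loading from the bifactor model, compute the relative difference per item, and average the absolute values.

```python
def average_relative_parameter_bias(uni_loadings, general_loadings):
    """ARPB per Rodriguez et al. (2016): mean of |(uni - general) / general|
    across items, comparing unidimensional loadings to bifactor
    general-factor loadings."""
    biases = [abs((u - g) / g) for u, g in zip(uni_loadings, general_loadings)]
    return sum(biases) / len(biases)
```

A value below the 10-15% range cited from Muthén, Kaplan, and Hollis (1987) would suggest the unidimensional loadings are not seriously distorted.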

### Key Bifactor References

Reise, S. P. (2012). The rediscovery of bifactor measurement models. *Multivariate Behavioral Research, 47*, 667–696. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3773879/

Reise, S. P., Moore, T. N., & Haviland, M. G. (2010). Bifactor models and rotations: Exploring the extent to which multidimensional data yield univocal scale scores. *Journal of Personality Assessment, 92*, 544–559. https://www.ncbi.nlm.nih.gov/pubmed/20954056

Reise, S. P., Scheines, R., Widaman, K. F., & Haviland, M. G. (2013). Multidimensionality and structural coefficient bias in structural equation modeling: A bifactor perspective. *Educational and Psychological Measurement, 73*(1), 5–26. http://epm.sagepub.com/content/early/2012/07/16/0013164412449831

Reise, S. P., Bonifay, W. E., & Haviland, M. G. (2013). Scoring and modeling psychological measures in the presence of multidimensionality. *Journal of Personality Assessment, 95*, 129–140. https://www.ncbi.nlm.nih.gov/pubmed/23030794

Rodriguez, A., Reise, S. P., & Haviland, M. G. (2016). Evaluating bifactor models: Calculating and interpreting statistical indices. *Psychological Methods, 21*, 137–150. doi: 10.1037/met0000045 https://www.ncbi.nlm.nih.gov/pubmed/26523435

Rodriguez, A., Reise, S. P., & Haviland, M. G. (2016). Applying bifactor statistical indices in the evaluation of psychological measures. *Journal of Personality Assessment, 98*, 223–237. doi: 10.1080/00223891.2015.1089249 https://www.ncbi.nlm.nih.gov/pubmed/26514921

Stucky, B. D., Edelen, M. O., Vaughan, C. A., Tucker, J. S., & Butler, J. (2014). The psychometric development and initial validation of the DCI-A short form for adolescent therapeutic community treatment process. *Journal of Substance Abuse Treatment, 46*, 516–521. https://www.ncbi.nlm.nih.gov/pubmed/24462245

Stucky, B. D., & Edelen, M. O. (2014). Using hierarchical IRT models to create unidimensional measures from multidimensional data. In S. P. Reise & D. A. Revicki (Eds.), *Handbook of item response theory modeling: Applications to typical performance assessment* (pp. 183–206). London, UK: Taylor & Francis. https://www.routledgehandbooks.com/doi/10.4324/9781315736013.ch9

### Bifactor Consultation Services

Visit my consultation page to learn more about the consultation services I provide related to bifactor analysis and scale development.