## Factor Analysis

**Exploratory Factor Analysis (EFA)**

EFA was conducted in IBM SPSS to reduce the initial survey items to an underlying simple structure that best defines the capacity for interprofessional collaboration. An initial EFA with maximum likelihood extraction and oblimin rotation was conducted to identify the most suitable process for developing a measurement model. Factor correlations greater than .32 indicated that the data would benefit from an oblique solution, so promax rotation was adopted for the subsequent analyses.

Next, the dataset was assessed for suitability for factor analysis. **Kaiser-Meyer-Olkin (KMO)** values between 0.7 and 0.8 are considered good (Hair et al., 2010); the computed KMO of 0.770 indicated that the sampling was adequate. Bartlett's Test of Sphericity (χ² [210] = 4209.42, p < .001) likewise indicated that factor analysis was suitable for this dataset. Several iterations using maximum likelihood extraction, promax rotation, and the Kaiser criterion of eigenvalues > 1 produced a model of six factors comprising 21 survey items.
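Although the analysis was run in SPSS, Bartlett's test is straightforward to reproduce from the item correlation matrix. A minimal sketch in plain Python; the correlation matrix and sample size passed in below would come from the researcher's own data, not this study's:

```python
import math

def determinant(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]  # work on a copy so the caller's matrix is untouched
    n = len(m)
    det = 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(m[r][i]))
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= factor * m[i][c]
    return det

def bartlett_sphericity(R, n):
    """Bartlett's Test of Sphericity: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|,
    with df = p(p - 1)/2. A significant chi2 rejects the hypothesis that the
    correlation matrix R is an identity matrix, i.e., the p items share
    enough correlation to warrant factoring."""
    p = len(R)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * math.log(determinant(R))
    df = p * (p - 1) // 2
    return chi2, df
```

For example, `bartlett_sphericity(R, n)` on a 21-item correlation matrix would yield the 210 degrees of freedom reported above, since 21 × 20 / 2 = 210.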

The **Total Variance Explained** comprises three sources of variance: common, unique, and error variance (Hair et al., 2010). The survey results produced a model that accounted for 66.28% of this variance across six factors.
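The explained-variance figure follows directly from the eigenvalues of the correlation matrix. A sketch of the computation under the Kaiser criterion used above (the eigenvalues in the test are illustrative only, not the study's values):

```python
def variance_explained(eigenvalues):
    """Share of total variance captured by the factors retained under the
    Kaiser criterion (eigenvalue > 1). For a correlation matrix the total
    variance equals the number of items, because each standardized item
    contributes a variance of 1."""
    total = len(eigenvalues)
    retained = [ev for ev in eigenvalues if ev > 1.0]
    return sum(retained) / total, len(retained)
```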

For social science research, typical communality values range from .40 to .70 (Costello & Osborne, 2005). The lowest item in the table was removed initially during EFA; however, it was later retained because it helped stabilize one of the resulting factors. Question items with a communality of less than .20 were removed (Child, 2006). While the margin for the lowest remaining communality (.257) is thin, it does exceed this lower limit and, together with the large sample size, provides adequate structural integrity (Bollen, 2002; Hair et al., 2010).
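An item's communality is the variance it shares with all extracted factors, computed as the sum of its squared loadings. A sketch of the retention rule described above (the loadings here are hypothetical, for illustration only):

```python
def communality(loadings):
    """Communality (h^2): the proportion of an item's variance explained by
    the factors, i.e., the sum of its squared factor loadings."""
    return sum(l * l for l in loadings)

# hypothetical loadings of a single item on six factors
item_loadings = [0.48, 0.10, 0.05, 0.12, 0.02, 0.08]
h2 = communality(item_loadings)
keep = h2 >= 0.20  # retention floor from Child (2006)
```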

**Cronbach's alpha** values between .60 and .70 mark the lower limit of the acceptable range, and .90 is the upper threshold (Hair et al., 2010; Streiner, 2003). The Cronbach's alpha for this instrument is .736, which falls within the acceptable range.
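Cronbach's alpha relates the sum of the item variances to the variance of the total score. A minimal sketch of the standard formula, α = k/(k−1) × (1 − Σσ²ᵢ/σ²ₜ), using hypothetical response data rather than the survey's:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale. `items` is a list of per-item score
    lists, all covering the same respondents in the same order."""
    k = len(items)

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across all k items
    total_scores = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(total_scores))
```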

**Confirmatory Factor Analysis (CFA)**

CFA was conducted to verify whether the simple structure that emerged from the EFA is a valid measure of the constructs of interprofessional collaboration (Brown, 2015; Hair et al., 2010). In addition, CFA aims to confirm the relationships between the observed variables and the latent constructs identified in the exploratory analysis (Brown, 2015).

**Construct validity** comprises both convergent validity and discriminant validity (Abell et al., 2009). Table 1 lists these components, which, viewed together, indicate that construct validity was achieved.

**Convergent validity** is achieved when a factor is represented by its question items more than by their errors (Hair et al., 2010). The **Average Variance Extracted (AVE)** measures how much variance a construct captures relative to measurement error variance (Fornell & Larcker, 1981); to achieve convergent validity, the AVE must be at least .50 and lower than the **Composite Reliability (CR)** (Fornell & Larcker, 1981; Hair et al., 2010). The CR is a measure of internal consistency that gauges the reliability of the latent construct, with a threshold floor of 0.70 (Fornell & Larcker, 1981; Hair et al., 2010). These computations were completed for each of the six factors: the AVE values ranged from 0.502 to 0.727 and the CR values from 0.739 to 0.851, so both criteria were met and convergent validity was satisfied. The structure matrix produced the best factor organization and validity of the model fit; this selection works well with the maximum likelihood extraction method used for **Structural Equation Modeling (SEM)**, which supports the model's validity.
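Both statistics are simple functions of the standardized factor loadings. A sketch of the Fornell and Larcker (1981) formulas, where the loadings supplied would be a factor's standardized loadings from the CFA (the test values are hypothetical):

```python
def ave(loadings):
    """Average Variance Extracted: the mean squared standardized loading.
    Values >= .50 indicate the construct captures more variance than
    measurement error (Fornell & Larcker, 1981)."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite Reliability: (sum of loadings)^2 divided by itself plus the
    summed error variances (1 - loading^2 per item); floor of 0.70."""
    s = sum(loadings)
    errors = sum(1 - l * l for l in loadings)
    return s * s / (s * s + errors)
```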

**Discriminant validity** is reached when each factor is distinct from the other factors (Hair et al., 2010). Practically, this is accomplished when the **Maximum Shared Variance (MSV)** and the **Average Shared Squared Variance (ASV)** are both lower than the AVE for all factors (Hair et al., 2010). The results in Table 1 confirm construct validity as measured by both convergent and discriminant validity.
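The MSV and ASV for a factor are the maximum and mean of its squared correlations with the other factors. A sketch of the comparison against AVE described above (the factor names, AVE values, and correlations below are hypothetical):

```python
def discriminant_validity(ave_by_factor, factor_corrs):
    """For each factor, compute MSV (largest squared inter-factor
    correlation) and ASV (average of those squared correlations) and check
    that both fall below that factor's AVE (Hair et al., 2010)."""
    results = {}
    for factor, ave in ave_by_factor.items():
        squared = [r * r for pair, r in factor_corrs.items() if factor in pair]
        msv, asv = max(squared), sum(squared) / len(squared)
        results[factor] = msv < ave and asv < ave
    return results
```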

**Structural Equation Modeling (SEM)**

SEM provides the means to test theoretical relationships with empirical data and confirm the model fit (Blunch, 2012; Hair et al., 2010). SPSS Analysis of Moment Structures (AMOS) was used to facilitate SEM and the assessment of model fit (Blunch, 2012).

A range of indices, including the **Chi-Squared/Degrees of Freedom** ratio, the **Comparative Fit Index (CFI)**, and the **Root Mean Square Error of Approximation (RMSEA)**, was used to assess model fit (Hair et al., 2010; Jaccard & Wan, 1996; Marsh et al., 1996). The CHIC scale demonstrated good fit, χ²/df = 2.145, below the suggested cutoff of 3 for adequate fit (Kline, 1998). CFI values > .90 indicate acceptable fit (Bentler, 1990; Hair et al., 2010), with values > .95 preferred (Bentler, 1990; Bollen, 1990); the CFI computed for this model was 0.952, demonstrating that the CHIC scale fits and represents the data well. Whereas the CFI is a goodness-of-fit measure, the RMSEA measures badness of fit, with values between .03 and .08 considered adequate (Hair et al., 2010); the CHIC scale returned an RMSEA of 0.046.
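AMOS reports these indices directly, but they can be reproduced from the chi-square output of the fitted and the null (independence) models. A sketch using the standard formulas; the statistics in the test are illustrative, not the CHIC scale's values:

```python
import math

def fit_indices(chi2, df, chi2_null, df_null, n):
    """Chi2/df ratio, CFI, and RMSEA from the fitted model's and the null
    (independence) model's chi-square statistics, for sample size n."""
    ratio = chi2 / df  # < 3 suggests adequate fit (Kline, 1998)
    d_model = max(chi2 - df, 0.0)  # noncentrality of the fitted model
    d_null = max(chi2_null - df_null, 0.0)  # noncentrality of the null model
    # CFI: improvement over the null model; > .90 acceptable, > .95 preferred
    cfi = 1.0 - d_model / max(d_null, d_model, 1e-12)  # guard avoids 0/0
    # RMSEA: misfit per degree of freedom; .03-.08 considered adequate
    rmsea = math.sqrt(d_model / (df * (n - 1)))
    return ratio, cfi, rmsea
```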