An investigation into the measurement invariance and measurement equivalence of the South African personality inventory across gender groups in South Africa
(Stellenbosch : Stellenbosch University, 2019-12) Van der Bank, Sonja; Theron, Callie C.; Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Industrial Psychology.

ENGLISH SUMMARY : Personality assessments are commonly used as predictor measures in employment selection because substantial empirical evidence shows that personality constructs explain and predict employee performance and behaviour in organisational settings. Before it can be concluded that inter-group differences in observed scores are caused by valid cross-group differences in the latent personality variables being assessed, the possibility that measurement bias is the cause must be ruled out. Measurement bias refers to group-related error in the measurement of a specific construct carrying a specific constitutive definition. In this sense measurement bias raises two hierarchically related questions, namely (a) whether the same construct, carrying a specific constitutive definition, is measured across groups, and if so, (b) whether the same construct is measured in the same way across groups (i.e. whether a specific standing on the latent variable being assessed is associated with the same expected observed score, or the same probability of achieving a specific observed score, across groups). Measurement bias comprises method bias, construct bias and item bias. The current study utilised a stringent definition of item bias: item bias occurs if the regression of observed item responses on the underlying latent dimension an item is designated to reflect differs across groups in terms of intercept (uniform bias), slope (non-uniform bias) and/or error variance (error variance bias).
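The item-bias definition above can be sketched in common MACS notation (standard symbols for a linear measurement model, not taken verbatim from the thesis): the response of person i to item j in group g is regressed on the latent dimension the item reflects, and bias is a group difference in any of the three regression parameters.

```latex
% Regression of observed item response on the latent dimension \xi in group g
X_{ijg} = \tau_{jg} + \lambda_{jg}\,\xi_{ig} + \varepsilon_{ijg},
\qquad \operatorname{Var}(\varepsilon_{ijg}) = \theta_{jg}
% Uniform bias:         \tau_{jg} differs across groups g (intercept)
% Non-uniform bias:     \lambda_{jg} differs across groups g (slope)
% Error variance bias:  \theta_{jg} differs across groups g (residual variance)
```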
When measurement bias is conceptualised from the perspective of mean and covariance structure (MACS) analysis, the terms measurement invariance and measurement equivalence are typically used. Both pertain to the question whether the slope, intercept or error variance of the regression of the item responses on the latent personality dimensions being measured differs across groups. Dunbar et al. (2011) proposed a clear distinction between the two. Measurement invariance investigates whether a multigroup measurement model in which the factor structure (i.e. the number of personality factors and the items' loading pattern on the factors) is constrained to be identical across multiple groups, and in which either (a) no parameters or (b) some parameters are constrained to be equal across the groups, fits the data obtained from two or more samples closely (Dunbar et al., 2011). The five hierarchical levels of measurement invariance are configural invariance, weak invariance, strong invariance, strict invariance and complete invariance (Dunbar et al., 2011). Measurement equivalence investigates whether a multigroup measurement model in which the structure, but no parameters, is constrained to be equal across groups fits the data of multiple groups significantly better than a multigroup measurement model in which both the structure and specific parameters are constrained to be equal across groups. Dunbar et al. (2011) also proposed four hierarchical levels of measurement equivalence, namely metric equivalence, scalar equivalence, conditional probability equivalence and full equivalence.

The current study investigates the measurement invariance and measurement equivalence of the South African Personality Inventory (SAPI) across gender groups in South Africa. The SAPI demonstrated a lack of construct bias and a lack of non-uniform bias.
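The equivalence question above is conventionally answered by comparing nested multigroup models with a chi-square difference (likelihood ratio) test. A minimal sketch of that comparison follows; the fit statistics are hypothetical illustrations, not results from this study.

```python
# Chi-square difference test between two nested multigroup measurement models:
# a less constrained model (e.g. configural: structure equal, parameters free)
# versus a more constrained model (e.g. metric: loadings also equal).
from scipy.stats import chi2


def chi_square_difference(chisq_constrained, df_constrained,
                          chisq_free, df_free):
    """Return (delta chi-square, delta df, p-value) for nested models."""
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value


# Hypothetical fit statistics for illustration only:
d_chisq, d_df, p = chi_square_difference(
    chisq_constrained=312.4, df_constrained=250,  # metric model
    chisq_free=298.1, df_free=240,                # configural model
)
# A non-significant p (> .05) would indicate that constraining the loadings
# did not significantly worsen fit, supporting metric equivalence.
```

The same comparison is repeated at each level of the hierarchy (metric, scalar, conditional probability, full), each time adding constraints to the previous model.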
The SAPI measured the same construct across the two gender groups, but the content of some items was perceived and interpreted differently between the groups. Metric, partial scalar and partial conditional probability equivalence was demonstrated. Implications of the study findings, and recommendations for the test developers and human resource practitioners, are discussed.