Video Games as Stealth Assessments
Examining the effectiveness of game-based models as a basis for emerging psychoeducational assessments.
The current pilot study examined how well a reflective moral-choice video game predicted rating-scale scores for two types of aggression. To begin, the authors used a coding system to examine in-game proactive and reactive behaviors, yielding a tallied score for each construct. These game-based scores were then entered into regression models to examine how well within-game behaviors predicted scores on a pre-existing rating scale of proactive and reactive aggression. Findings indicated that game-based proactive scores were not predictive of proactive aggression ratings; however, game-based reactive scores were predictive of reactive aggression ratings. Implications of these findings are discussed.
Play & Learning in Online Arena Battle Games
Predictive modeling of performance outcomes in League of Legends.
Researchers have called for additional empirical studies of video games. However, every game is novel in terms of mechanics, content, context, and agency, so known variables may be operationalized differently depending on the game involved. It is incumbent upon researchers to leverage or create the best tools for extracting data from games. This article models the development of a scale intended to catalog users’ actions in a novel context (i.e., the game League of Legends) using a mixed-methods approach. Specifically, this work outlines data collection and validation strategies using an online social news aggregation, rating, and discussion platform (i.e., Reddit), involving multiple cycles of expert input and queries to generate consensus among expert gamers, followed by analysis of responses and an exploratory factor analysis for scale construction. Reddit served three unique functions in this process: (a) access to experts who informed the development of scale items, (b) a social space and mechanism for validating scale items, and (c) the opportunity to capture the data necessary to establish the psychometric properties of the instrument. Findings associated with scale development (i.e., item generation, theoretical and psychometric analyses) are presented. Overall, implications for instrument development in continually evolving contexts are discussed.
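As one illustration of the kind of psychometric check an instrument like this requires, the sketch below computes Cronbach's alpha, a standard internal-consistency estimate for a newly constructed scale. The item responses are invented, and this minimal standard-library example does not reproduce the study's actual expert-review cycles or exploratory factor analysis:

```python
# Minimal sketch of one scale-psychometrics step: Cronbach's alpha.
# All responses are invented for illustration.
from statistics import variance

# Rows are scale items, columns are respondents (illustrative 1-5 ratings).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]

k = len(items)
totals = [sum(col) for col in zip(*items)]  # per-respondent total score
item_var_sum = sum(variance(item) for item in items)

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - item_var_sum / variance(totals))
print(f"alpha={alpha:.2f}")
```

Reliability estimates like this complement, rather than replace, the factor-analytic evidence the abstract describes.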
Psychometrics of Technology-Based Assessments
Comparative analysis of paper-and-pencil versus computerized assessment.
Pearson now uses a technology‐based testing platform, Q‐Interactive, to administer tests previously available in paper versions. The same norms are used for both versions, and Pearson’s in‐house equivalency studies indicated that the two versions are equated. The goal of the current study was to evaluate those equivalency findings independently. Equivalency was measured using the three‐part test set forth by the American Psychological Association in 1986: the researchers examined rank-order similarity, then mean-score similarity, and finally score-distribution similarity. One of these equivalency standards (rank-order similarity) was not met, and a second was debatable (mean-score similarity); therefore, the authors noted concerns about the use of the Peabody Picture Vocabulary Test, Fourth Edition on Q‐Interactive with preschoolers. New normative data should be collected.