Research Design and Evaluation

Drs. Deborah Hecht (co-PI) and Bert Flugman at CASE will undertake research and evaluation activities for SMTE. They will attend all meetings and workshops, building on years of collaboration with the co-PIs on NSF projects. CASE was integrally involved in the proposal’s development, helping to ensure logical linkages between activities and expected outcomes. The team will use varied methods for assessing Project and student outcomes, including assessments of student content knowledge, tracking of screen activity, expert reviews, teacher and student feedback, self-efficacy and attitudinal surveys, observations, interviews, and focus groups.

Formative Evaluation. A logic model mapping inputs, processes, outputs, and outcomes will guide optimization of Project activities, outcomes, and products. The evaluation team will document and evaluate the procedures used to create the storyboards, 3-D simulations, KSBs, and design tasks, and will ensure that these are aligned with pedagogical goals. To assess whether materials effectively promote learning, link to STL, and provide adequate teacher and student support, developers will be surveyed and interviewed. All materials will be examined by teachers, students, and relevant experts for utility, clarity, and relevance to middle school education. As simulations and design tasks are developed, development-team teachers and their students will microtest and pilot them to improve their efficacy and appeal. Focus groups held throughout the Project will further examine specific questions as they arise.

Research and Summative Evaluation. The research program, field testing, and the summative evaluation are isomorphic, and student assessment data will be collected for three research conditions: 1) the experimental (hybrid) condition, in which students participate in the game, the simulation, and the physical modeling task; 2) the gaming-and-simulation-only condition, in which students complete the virtual tasks only; and 3) the business-as-usual condition, in which students complete only the physical modeling task (enriched by KSBs, so that students acquire content knowledge).

Research will investigate the adaptability of the Project hybrid model; examine whether opportunities to explore design under the low-risk conditions of simulation and gaming lead to concentration, enjoyment, persistence, and goal focus (as predicted by PCT and flow theory); and assess whether these opportunities increase self-efficacy (self-confidence) and lead to enhanced achievement.

Research questions will focus on operational design; on exploration of relevant theoretical questions connecting gaming/simulation to motivational theories related to self-efficacy; and on adaptability, as follows:

1) Does the Project hybrid model lead to greater enhancement of content knowledge, design products, and self-efficacy/attitudes related to technology and group work than the business-as-usual or simulation-only models?

2) Is there differential impact on learning across the three conditions related to student background characteristics (e.g., gender, disability, prior academic achievement, and prior exposure to computer gaming/simulation)?

3) Does the gaming and simulation condition satisfy flow theory and perceptual control theory criteria concerning concentration and enjoyment?

4) If so, how are student task engagement, concentration, enjoyment, and perceived goal-driven outcomes (key characteristics of flow theory and perceptual control theory) related to student learning in the gaming and simulation tasks?

5) What are the linear and nonlinear relationships between student self-efficacy and engagement during the simulation experience? (One way such relationships might be probed is sketched after this list.)

6) Can teachers adapt the prototypical materials to other curriculum areas and contexts using the instructor design interface while maintaining student engagement and learning?
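As an illustration of how question 5 might be examined, the sketch below fits and compares linear and quadratic regressions of engagement on self-efficacy. The synthetic data, column names, and quadratic form are assumptions made for illustration, not Project specifications.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: per-student self-efficacy (survey scale) and
    # engagement (e.g., minutes of logged simulation activity).
    rng = np.random.default_rng(0)
    efficacy = rng.uniform(1, 5, size=240)
    engagement = 10 + 8 * efficacy - 0.9 * efficacy**2 + rng.normal(0, 2, 240)
    df = pd.DataFrame({"efficacy": efficacy, "engagement": engagement})

    # Linear model: engagement as a straight-line function of self-efficacy.
    linear = smf.ols("engagement ~ efficacy", data=df).fit()

    # Nonlinear (quadratic) model: allows engagement to rise and then
    # level off or fall at high self-efficacy.
    quadratic = smf.ols("engagement ~ efficacy + I(efficacy**2)", data=df).fit()

    # A significant quadratic term and a higher adjusted R^2 would
    # suggest a nonlinear relationship.
    print(linear.rsquared_adj, quadratic.rsquared_adj)
    print(quadratic.params)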

The experimental-condition testing will involve 12 teachers with an average of 20 students each (n = 240 students). The other two conditions will have four teachers each, a sample size large enough to detect significant differences. Attempts will be made to assign students to conditions randomly; when random assignment is not possible, the groups will be statistically equated. For each research condition, data will be collected in six assessment domains:

1) student variables (gender, age, prior academic achievement, prior experiences with gaming and simulation);

2) teacher/school context variables (hardware availability; teacher experience with, and attitudes about, simulations and serious gaming);

3) content knowledge (pre-post assessments of KSB knowledge, assessed through multiple-choice questions and questions requiring explanations, as per Linn et al. [30]);

4) embedded assessments (design products, projects, and design reports);

5) affective assessments (pre-post self-efficacy and attitudinal ratings about the technological tasks, using the computer as a learning tool, doing KSBs online, and working in teams); and

6) process measures of engagement/attention (participation time and levels within the virtual and physical design tasks, as measured by logging software and wikis that track individual student work; teacher and student ratings; degree of collaboration; and conflict resolution).
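For context, the adequacy of such sample sizes can be checked with a conventional power calculation. The sketch below assumes a one-way ANOVA across the three conditions, a "medium" effect size (Cohen's f = 0.25), alpha = .05, and roughly balanced groups; these figures are illustrative assumptions, not stated Project parameters.

    from statsmodels.stats.power import FTestAnovaPower

    # Assumed totals: 240 students (hybrid) plus roughly 80 in each of
    # the two comparison conditions. The calculation treats the design
    # as balanced, which is a simplification.
    total_n = 240 + 80 + 80
    power = FTestAnovaPower().solve_power(
        effect_size=0.25,  # Cohen's f; 0.25 is a conventional "medium" effect
        nobs=total_n,      # total sample size across all groups
        alpha=0.05,
        k_groups=3,
    )
    print(f"Power to detect f = 0.25 with n = {total_n}: {power:.2f}")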

Data will be analyzed using a variety of multivariate statistical analyses. The research and summative evaluation will identify not only how effective the three conditions are in promoting student learning and affective changes but also for which students and under what conditions they produce the strongest outcomes. These results will inform the new curriculum and contribute new knowledge related to the use of computer simulation and gaming as educational tools.
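The specific techniques will be determined by the evaluation team; as a rough sketch of one form such a multivariate analysis could take (the variable names, model forms, and synthetic data below are illustrative assumptions, not the Project's analysis plan), an omnibus MANOVA across conditions could be followed by an interaction model addressing the "for which students" question:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.multivariate.manova import MANOVA

    # Hypothetical per-student records; columns and values are invented.
    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "condition": rng.choice(["hybrid", "sim_only", "usual"], size=n),
        "gender": rng.choice(["F", "M"], size=n),
        "pre_knowledge": rng.normal(50, 10, size=n),
    })
    df["post_knowledge"] = df["pre_knowledge"] + rng.normal(8, 5, size=n)
    df["self_efficacy"] = rng.normal(3.5, 0.6, size=n)
    df["engagement"] = rng.normal(30, 8, size=n)

    # Omnibus MANOVA: do the three conditions differ jointly on the
    # knowledge, self-efficacy, and engagement outcomes?
    manova = MANOVA.from_formula(
        "post_knowledge + self_efficacy + engagement ~ condition", data=df
    )
    print(manova.mv_test())

    # Follow-up regression: does the condition effect vary with a student
    # background characteristic (here, gender), controlling for prior
    # knowledge?
    interaction = smf.ols(
        "post_knowledge ~ pre_knowledge + C(condition) * C(gender)", data=df
    ).fit()
    print(interaction.summary())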

Automatic Logging. The Project will use automatic computer-based tracking technology developed by the Concord Consortium. The technology can track students’ solution paths, log software use, and yield valuable data on utilization and dissemination. This is an important innovation for program assessment as it addresses the difficult problem of tracking time on task and level of engagement. The logs will be far more accurate and credible than self-reports.
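The Concord Consortium's technology itself is not described here; as a generic illustration of event logging of this kind (all class and field names below are invented for illustration), a minimal logger might timestamp student actions and derive time on task from the gaps between events rather than from self-report:

    import json
    import time
    from dataclasses import dataclass, field

    @dataclass
    class ActivityLog:
        """Minimal per-student event log: records timestamped actions
        and derives time on task from gaps between consecutive events."""
        student_id: str
        events: list = field(default_factory=list)

        def record(self, action: str, detail: str = "") -> None:
            self.events.append({"t": time.time(), "action": action, "detail": detail})

        def time_on_task(self, idle_cutoff: float = 120.0) -> float:
            """Sum gaps between consecutive events, ignoring gaps longer
            than idle_cutoff seconds (treated as the student being idle)."""
            times = [e["t"] for e in self.events]
            gaps = [b - a for a, b in zip(times, times[1:])]
            return sum(g for g in gaps if g <= idle_cutoff)

        def dump(self) -> str:
            return json.dumps({"student": self.student_id, "events": self.events})

    # Example: log a short simulated session.
    log = ActivityLog("s042")
    log.record("task_start", "design simulation")
    log.record("place_object", "desk at (3, 1)")
    log.record("task_end")
    print(log.time_on_task(), log.dump())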