Afterschool Universe is a nationally recognized hands-on astronomy program targeted at middle school children in out-of-school-time settings. The rigorously evaluated curriculum explores basic astronomy concepts through engaging hands-on activities and focuses on the Universe outside the solar system.
In 2007, the team contracted with Magnolia Consulting, LLC, an independent research and evaluation consulting firm, to conduct an evaluation of the Beyond Einstein Explorer’s Program (BEEP, renamed Afterschool Universe after the pilot). The purpose of the study was to assess the effectiveness of the BEEP training component and session materials and activities. Specifically, this evaluation study addressed the following overarching questions:
• How do program leaders perceive the quality and utility of the BEEP training, session manual, and educational resources?
• In what ways can the BEEP program be improved to more effectively meet the needs of out-of-school providers and their students?
• What impact does the BEEP program have on students’ attitudes toward science?
A research design involving qualitative and quantitative methods was employed to examine the effectiveness of the BEEP program for use with middle-school students. Data collected in the study included a training feedback survey, a student attitude survey, online session logs, and a final leader survey. The findings from this evaluation of the pilot were used to further refine the curriculum and professional development when the program formally launched as a nationwide effort under the Afterschool Universe name in 2008. The findings are discussed in the next section, and the final evaluation report is available below.
The next phase of the program moved to a model of training trainers rather than front-line implementers. The efficacy and accountability of this model needed to be assessed, and Cornerstone Evaluation Associates, LLC, conducted this evaluation effort. The overarching evaluative questions addressed included, but were not limited to, the following:
• Can a train-the-trainer model work for disseminating the Afterschool Universe program?
• Is the train-the-trainer model effective at all tiers – trainers, implementers, and students?
• How much confidence can be placed in trainers who are not NASA-based program leaders? That is, does a train-the-trainer model work as well as a direct training model in which NASA leaders instruct front-line implementers?
• Where does the train-the-trainer dissemination model work best?
• Where are the challenges?
• How can program leaders improve the weak areas of distribution?
To answer these questions, both quantitative and qualitative information was collected over approximately one and a half years from the three groups participating in this study: trainers, front-line implementers, and students. Trainers were instructed directly by NASA personnel in using the Afterschool Universe curriculum. The trainers then conducted workshops to teach front-line implementers how to use the curriculum. Students in various afterschool environments then received instruction from these implementers. By focusing the evaluative lens on each of these tiers, data could be triangulated to reveal answers to the research questions. This research and analysis are of benefit both to NASA EPO providers and to afterschool program providers outside the Afterschool Universe program who wish to strategically reach participants through a train-the-trainer model.
Beyond these periods of formal evaluation, the AU team continues to internally evaluate training workshops through the collection of surveys. In addition, some implementation sites voluntarily submit data from their implementers and participants. This feedback continues to direct the refinement and future direction of the program.
In an effort to assess changes in students’ attitudes toward science as a result of participating in the Afterschool Universe pilot program in 2007, students were given a survey in the first and last week of program participation. Students were asked to respond to 15 items on a 4-point scale (1 = really don’t agree, 2 = don’t agree, 3 = agree, 4 = really agree). An overall score was calculated for each student. Scores had a possible range of 15 (if a student rated all items 1) to 60 (if a student rated all items 4). There was no change in mean scores from the first to the second administration of the attitude survey. Students began with a somewhat positive attitude toward science and astronomy, and ended with virtually the same measured attitude.
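The scoring described above can be sketched as follows. This is a hypothetical illustration only; the actual survey items and student data are not reproduced here.

```python
# Illustrative sketch of the pilot attitude-survey scoring: each student
# rates 15 items on a 4-point scale (1 = really don't agree ... 4 = really
# agree), and the overall score is the sum, ranging from 15 to 60.

def overall_score(ratings):
    """Sum a student's 15 item ratings (each 1-4) into an overall score."""
    assert len(ratings) == 15, "survey has 15 items"
    assert all(1 <= r <= 4 for r in ratings), "ratings are on a 1-4 scale"
    return sum(ratings)

# Example: a hypothetical student who rates every item "agree" (3).
print(overall_score([3] * 15))  # -> 45
```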
A second way to view the student attitude results was to examine the percentage of positive responses to the survey items. This was determined by calculating the frequency of positive responses (either agree or really agree) for all items combined. In this analysis, a value of 50% would indicate that students were neutral in their attitudes toward science. Examining the student data from this perspective again indicates that students began with a fairly positive attitude and ended with essentially no change, maintaining that positive attitude throughout the program.
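The percent-positive analysis can be sketched in the same way, again with hypothetical data: the share of all responses, pooled across students and items, that fall in the agree (3) or really agree (4) categories.

```python
# Illustrative sketch of the percent-positive analysis: pool every item
# response from every student and compute the share that is "agree" (3)
# or "really agree" (4). A value of 50% would indicate a neutral attitude.

def percent_positive(responses):
    """responses: one list of item ratings (1-4) per student."""
    flat = [r for student in responses for r in student]
    return 100.0 * sum(1 for r in flat if r >= 3) / len(flat)

# Example with two hypothetical students: 4 of the 6 responses are positive.
print(round(percent_positive([[3, 4, 2], [1, 3, 4]]), 1))  # -> 66.7
```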
Eight additional items were included in the post survey to gauge changes in students’ perceptions of their astronomy knowledge. Students were asked to rate their level of agreement with statements concerning how much they learned about various aspects of astronomy through participating in the sessions. These items were rated on the same 4-point scale (1 = really don’t agree, 2 = don’t agree, 3 = agree, 4 = really agree). All statements elicited a positive response (agree or really agree) from a high percentage of students, indicating that they perceived their learning over the course of the program to be significant. In addition, the statement “I know a lot about how scientists study stars and planets” was included in both the pre and post surveys. Positive responses to this item increased significantly (from 48% to 70%) as a result of participation in the pilot program.
In addition, feedback from program implementers and students alike provided valuable insight into their experiences and attitudes:
• Overall perceptions of the BEEP program were very positive. The majority of leaders indicated they would be happy to lead the program again.
• Leaders found the BEEP manual to be clearly organized, detailed and easy to follow. Background material was especially helpful in building a knowledge base for both leaders and students.
• Leaders felt that the BEEP program met their expectations including exposing students to science and astronomy, fostering interest for these subjects in their students, and teaching their students about basic astronomy concepts.
• Several leaders mentioned how much they had learned as a result of leading the BEEP sessions. Since the majority of leaders had little science background, this is an important finding.
• BEEP activities that were most successful were those that allowed for hands-on exploration and the chance for students to work in groups, such as the creation of models of the Universe.
• Students particularly liked activities that allowed them to create, explore and “experiment.”
The information obtained through these pilot evaluations drove a number of program refinements, leading to a high level of confidence in the quality of the program and in the training currently offered to front-line implementers by our group.
Beginning in 2010, the evaluation of the train-the-trainer model included pre- and post-assessments of participants at all three tiers of the effort: trainers (Tier 1), front-line implementers (Tier 2), and students (Tier 3). Of utmost concern was the model’s effectiveness in conveying AU content knowledge throughout its three tiers. Growth in knowledge was measured in two ways: 1) the trainers’ and implementers’ perceptions of their own knowledge, and 2) pre-post knowledge assessments given to participants at all three tiers.
To gather information about the trainers’ and implementers’ perceptions of their knowledge, each group was asked to rate statements on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). Ratings were collected from both trainers and implementers at the end of their workshops and in online follow-up surveys. The mean ratings for Tier 1 and Tier 2 reveal average initial gains (from before the workshop to its end) of 1.7 and 1.5, respectively. Both tiers retained these gains in knowledge following their workshop experiences.
Additional evidence supplied by the pre-post tests gives weight to the assertion that knowledge was successfully conveyed throughout all three tiers. A critical piece of the evaluation was the series of questions (multiple choice, fill-in-the-blank, ranking, and open-ended) designed to assess the AU-related content knowledge of trainers and implementers both before and after their workshops. A separate assessment was developed for students. In both cases, questions were designed to reflect the content of the Afterschool Universe curriculum and ‘test’ common misconceptions about AU content that the program seeks to dispel.
AU program participants in each tier demonstrated a positive change in average percentage of correct answers ranging from 26% to 32%. These positive gains confirm the perceptions of trainers and implementers that they not only gained a good deal of AU-related content knowledge during their workshops, but also retained this knowledge over the subsequent year. For those implementers who taught the curriculum, their students also achieved a positive gain in knowledge. It can therefore be asserted that knowledge was successfully transferred throughout the train-the-trainer model.
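The gain computation above can be sketched as the change in a tier's average percent-correct between pre- and post-assessments. The assessment length and scores below are hypothetical, chosen only to illustrate the calculation; they are not the evaluation's actual data.

```python
# Illustrative sketch of a pre-post knowledge gain: the difference in a
# group's average percentage of correct answers between the pre-test and
# the post-test. All numbers here are hypothetical.

def mean_percent_correct(scores, n_items):
    """Average percent correct for a group, given each person's raw correct count."""
    return 100.0 * sum(scores) / (len(scores) * n_items)

def gain(pre_scores, post_scores, n_items):
    """Change in the group's average percent correct, pre to post."""
    return mean_percent_correct(post_scores, n_items) - mean_percent_correct(pre_scores, n_items)

# Example: a hypothetical 20-question assessment, three participants.
print(round(gain([8, 10, 9], [14, 16, 13], 20), 1))  # -> 26.7
```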
Employing the ‘train-the-trainer’ model, the AU program also sought to ensure that implementers—ultimately bearing the responsibility for delivering the program to their network youth—were not only knowledgeable, but also felt comfortable with the AU curriculum and had confidence in their ability to present it. Findings show that the trainers emerged from their workshop exhibiting high levels of comfort and confidence. More importantly, they were then able to instill the same high levels of these qualities in the implementers for whom they conducted workshops.
At the end of their workshop at NASA Goddard, trainers rated statements about their confidence in understanding AU content and in their ability to present it. Trainers’ mean rating for ‘confidence in understanding AU content’ was 3.6 on a 4-point scale, while their mean rating for ‘confidence in presenting AU content’ was 4.0 on a 5-point scale. Clearly, trainers left their workshop feeling comfortable and confident as they set out to conduct their own workshops.
Rating scales were used again to measure comfort and confidence in the follow-up surveys for both trainers and implementers. That is, once trainers had had an opportunity to conduct workshops and implementers had had time to instruct students in their afterschool networks, they were asked how their respective workshops and instructional sessions had gone, how they were received, how comfortable they felt conveying AU content, and whether they saw evidence that their audiences would be able to use what they had been taught. These data were remarkably consistent: trainers and implementers rated their comfort and confidence at the same high level, 4.2 and 4.3 respectively (both on a 5-point scale).
Trainers and implementers were also asked to rate a series of questions about their sense of preparedness for the tasks at hand: conducting workshops for the trainers and instructing youth for the implementers. They were presented with the same questions both at the end of their respective trainings and in the follow-up surveys. These questions asked how sufficient they believed their training to be and whether they thought they had enough content and pedagogical knowledge to teach AU content.
The data show that both trainers and implementers report a high sense of preparedness once they are trained, and that they maintain these high levels once they go out into the ‘real world’ to teach the AU program to their respective audiences. For trainers, a mean cluster rating of 4.4 after training holds steady after conducting workshops, while the implementers’ mean cluster rating of 4.2 after training rises slightly to 4.3 after instructing their students.
The multi-level evaluation of the train-the-trainer dissemination model addressed questions about the program’s accountability as well as its outcomes. The train-the-trainer model has clearly demonstrated its efficacy in delivering content knowledge throughout all three tiers of the model. Furthermore, the evidence indicates that the ‘expert’ leaders can trust trainers not only to maintain high quality in transferring AU content knowledge to implementers, but also to instill in the implementers the same levels of comfort, confidence, and preparedness that the ‘experts’ imparted to them.
The final evaluation report prepared by Cornerstone Evaluation Associates for the train-the-trainer effort conducted between 2010 and 2012 is available below (AU_Evaluation.pdf).