Afterschool Universe

Overview
Program Element / Activity Title:
Afterschool Universe
E/PO Project Name:
NASA Goddard Astrophysics Science Division
Program Element / Activity Website:
Short Description:

Afterschool Universe is a nationally recognized hands-on astronomy program targeted at middle school children in out-of-school-time settings. The rigorously evaluated curriculum explores basic astronomy concepts through engaging activities and focuses on the Universe outside the solar system.

Awards Received:
NASA Honor Award for Public Service Group Achievement (2009)
Program Element / Activity Status
Please share any additional updates or information about your program element / activity:
Afterschool Universe is listed in the Consumers Guide to Afterschool Science Resources, which is operated in cooperation with SEDL, the National Partnership for Quality Afterschool Learning, and the U.S. Department of Education. The curriculum has also been evaluated and recommended by Great Science for Girls as one of seven curricula on their site that successfully stimulate girls’ interest in STEM subjects and instill self-confidence in their abilities. The Afterschool Universe program was highlighted in a November 2014 report from the White House Council on Women and Girls entitled "Women and Girls of Color: Addressing Challenges and Expanding Opportunity," which focuses on attracting underrepresented girls to science.
Please upload any program element / activity documents (reports, publications, or other) to share with the Public, Community, and SMD:
Audience Metrics
Who is the primary audience of your program element / activity?:
Who is the secondary audience of your program element / activity?:
Evaluation
National Priorities and Coordination Approaches as Articulated in CoSTEM:
  • Increase and Sustain Youth and Public Engagement in STEM
What are the goals and objectives of your program element / activity?:
The overarching goals of the Afterschool Universe (AU) program are to increase participant comfort and engagement with STEM, and to increase awareness and understanding of concepts in astronomy. Additional goals and objectives have been set for specific efforts during the course of program development and dissemination.
What is the design of the evaluation process for your program element / activity?:
Pilot training for front-line implementers was assessed by the Out-of-School Time Resource Center (OSTRC) at the University of Pennsylvania in the first year of the pilot (2006). Evaluation of the program implementation and impact on students was conducted by the NASA EPO team in this first year. Each subsequent phase of the Afterschool Universe effort – curriculum development and refinement, pilot implementation, and dissemination – has been evaluated by an external evaluator. The program has three separate evaluation plans for different phases: an initial evaluation (by Magnolia Consulting) of program development and pilot implementation in 2007, an evaluation of the train-the-trainer effort from 2010 to 2013 (by Cornerstone Evaluation Associates), and an evaluation of the elementary school adaptation to be completed during FY2014 (also by Cornerstone Evaluation Associates).

In 2007, the team contracted with Magnolia Consulting, LLC, an independent research and evaluation consulting firm, to conduct an evaluation of the Beyond Einstein Explorer’s Program (BEEP, renamed Afterschool Universe after the pilot). The purpose of the study was to assess the effectiveness of the BEEP training component and session materials and activities. Specifically, this evaluation study addressed the following overarching questions:
• How do program leaders perceive the quality and utility of the BEEP training, session manual, and educational resources?
• In what ways can the BEEP program be improved to more effectively meet the needs of out-of-school providers and their students?
• What impact does the BEEP program have on students’ attitudes toward science?

A research design involving qualitative and quantitative methods was employed to examine the effectiveness of the BEEP program for use with middle-school students. Data collected in the study included a training feedback survey, a student attitude survey, online session logs, and a final leader survey. The findings from this evaluation of pilot efforts were used to further refine the curriculum and professional development for the program when it formally launched as a nationwide effort under the Afterschool Universe name in 2008. The findings of this evaluation are discussed in the next section, and the final evaluation report is attached below.

The next phase of the program moved to a model of training trainers rather than front-line implementers. The efficacy and accountability of this model needed to be assessed, and Cornerstone Evaluation Associates, LLC, conducted this evaluation effort. The overarching evaluative questions addressed included, but were not limited to, the following:

• Can a train-the-trainer model work for disseminating the Afterschool Universe program?
• Is the train-the-trainer model effective at all tiers – trainers, implementers, and students?
• How much confidence can be placed in trainers who are not NASA-based program leaders? That is, does a train-the-trainer model work as well as a direct training model in which NASA leaders instruct front-line implementers?
• Where does the train-the-trainer dissemination model work best?
• Where are the challenges?
• How can program leaders improve the weak areas of dissemination?

In order to answer these questions, both quantitative and qualitative information were collected over a period of approximately one and a half years from the three groups participating in this study – trainers, front-line implementers, and students. Trainers were instructed directly by NASA personnel in using the Afterschool Universe curriculum. The trainers then conducted workshops to teach front-line implementers how to use this curriculum. Students in various afterschool environments then received instruction from these implementers. By focusing the evaluative lens on each of these tiers, data could be ‘triangulated’ to reveal answers to the research questions. This research and analysis benefits both NASA EPO providers and afterschool program providers outside the Afterschool Universe program who wish to reach participants strategically through a train-the-trainer model.

Beyond these periods of formal evaluation, the AU team continues to evaluate training workshops internally through the collection of surveys. In addition, some implementation sites voluntarily submit data from their implementers and participants. This feedback continues to inform the refinement and future direction of the program.
What are the main impacts of your effort to date and how do they correlate to the project's goals and objectives:
Evaluation during the development and initial dissemination of the Afterschool Universe program, as well as of the train-the-trainer model of dissemination, sought to assess the impacts on participant knowledge and attitudes. Evaluation findings showed positive outcomes for each of these impacts.

In an effort to assess changes in students’ attitudes toward science as a result of participating in the Afterschool Universe pilot program in 2007, students were given a survey in the first and last week of program participation. Students were asked to respond to 15 items on a 4-point scale (1 = really don’t agree, 2 = don’t agree, 3 = agree, 4 = really agree). An overall score was calculated for each student. Scores had a possible range of 15 (if a student rated all items 1) to 60 (if a student rated all items 4). There was no change in mean scores from the first to the second administration of the attitude survey. Students began with a somewhat positive attitude toward science and astronomy, and ended with virtually the same measured attitude.
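
To make the scoring scheme concrete, the calculation can be sketched in a few lines of Python. The ratings below are invented for illustration; they are not drawn from the actual survey data.

```python
# Each student answers 15 attitude items on a 4-point scale (1-4).
# These ratings are hypothetical, for illustration only.
pre_ratings = [
    [3, 3, 2, 4, 3, 3, 2, 3, 4, 3, 3, 2, 3, 3, 3],  # student A
    [2, 3, 3, 3, 2, 4, 3, 3, 3, 2, 3, 3, 4, 3, 2],  # student B
]

def overall_score(item_ratings):
    """Sum the 15 item ratings; the possible range is 15 (all 1s) to 60 (all 4s)."""
    assert len(item_ratings) == 15 and all(1 <= r <= 4 for r in item_ratings)
    return sum(item_ratings)

scores = [overall_score(s) for s in pre_ratings]
print(scores, sum(scores) / len(scores))  # [44, 43] 43.5
```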

A second way to view the student attitude results was to examine the percentage of positive responses to the survey items. This was determined by calculating the frequency of positive responses (either agree or really agree) across all items combined. In this analysis, a value of 50% would indicate that students were neutral in their attitudes toward science. Examining the student data from this perspective again indicates that students began with a fairly positive attitude and maintained it, essentially unchanged, through the end of the program.
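
The percent-positive view of the same data amounts to a simple frequency count, sketched below with invented responses.

```python
# All item responses pooled across students and items (hypothetical values).
responses = [3, 4, 2, 3, 3, 1, 4, 3, 2, 3]

def percent_positive(all_responses):
    """Share of responses that are positive (agree = 3 or really agree = 4).
    A value of 50% would indicate an overall neutral attitude."""
    positive = sum(1 for r in all_responses if r >= 3)
    return 100 * positive / len(all_responses)

print(f"{percent_positive(responses):.0f}% positive")  # 70% positive
```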

Eight additional items were included in the post-survey to gauge changes in students’ perceptions of their astronomy knowledge. Students were asked to rate their level of agreement with statements about how much they had learned about various aspects of astronomy through participating in the sessions. These items were rated on the same 4-point scale (1 = really don’t agree, 2 = don’t agree, 3 = agree, 4 = really agree). All statements elicited a positive response (agree or really agree) from a high percentage of the students, indicating that they perceived their learning over the course of the program to be significant. In addition, the statement "I know a lot about how scientists study stars and planets" was included in both the pre- and post-surveys. Positive responses to this item increased significantly (from 48% to 70%) as a result of participation in the pilot program.

In addition, feedback from program implementers and students alike provided valuable insight into their experiences and attitudes:
• Overall perceptions of the BEEP program were very positive. The majority of leaders indicated they would be happy to lead the program again.
• Leaders found the BEEP manual to be clearly organized, detailed and easy to follow. Background material was especially helpful in building a knowledge base for both leaders and students.
• Leaders felt that the BEEP program met their expectations including exposing students to science and astronomy, fostering interest for these subjects in their students, and teaching their students about basic astronomy concepts.
• Several leaders mentioned how much they had learned as a result of leading the BEEP sessions. Since the majority of leaders had little science background, this is an important finding.
• BEEP activities that were most successful were those that allowed for hands-on exploration and the chance for students to work in groups, such as the creation of models of the Universe.
• Students particularly liked activities that allowed them to create, explore and “experiment.”

The information obtained through these pilot evaluations drove a number of program refinements, leading to a high level of confidence in the quality of the program and in the training currently offered to front-line implementers by our group.

Beginning in 2010, the evaluation of the train-the-trainer model included pre- and post-assessments of participants at all three tiers of the effort – trainers (Tier 1), front-line implementers (Tier 2), and students (Tier 3). Of utmost concern was the effectiveness of the model in ensuring the conveyance of AU content knowledge throughout its three tiers. Growth in knowledge was measured in two ways: (1) the trainers’ and implementers’ perceptions of their own knowledge, and (2) the pre-post knowledge assessments given to participants at all three tiers.

In order to gather information about the trainers’ and implementers’ perceptions of their knowledge, each group was asked to rate statements on a 5-point scale (1 = strongly disagree through 5 = strongly agree). Ratings were collected from trainers and implementers alike at the end of their workshops and in online follow-up surveys. The mean ratings for Tier 1 and Tier 2 reveal an average initial gain (from prior to the workshop to its end) of 1.7 and 1.5, respectively. Both tiers retained these gains in knowledge following their workshop experiences.
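
The gain computation behind these figures is straightforward; a minimal sketch with invented ratings follows (the report gives only the resulting mean gains of 1.7 and 1.5, not the underlying data).

```python
# Hypothetical paired pre/post self-ratings on the 5-point scale for one tier.
pre  = [2.0, 3.0, 2.5, 2.0]
post = [4.0, 4.5, 4.0, 3.5]

# Average initial gain = mean(post) - mean(pre); the evaluation reports
# gains of 1.7 for trainers (Tier 1) and 1.5 for implementers (Tier 2).
mean_gain = sum(post) / len(post) - sum(pre) / len(pre)
print(f"mean gain: {mean_gain:.1f}")  # mean gain: 1.6
```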

Additional evidence supplied by the pre-post tests gives weight to the assertion that knowledge was successfully conveyed throughout all three tiers. A critical piece of the evaluation was the series of questions (multiple choice, fill-in-the-blank, ranking, and open-ended) designed to assess the AU-related content knowledge of trainers and implementers both before and after their workshops. A separate assessment was developed for students. In both cases, questions were designed to reflect the content of the Afterschool Universe curriculum and to ‘test’ common misconceptions about AU content that the program seeks to dispel.

AU program participants in each tier demonstrated a positive change in the average percentage of correct answers, with gains ranging from 26% to 32%. These gains confirm the perceptions of trainers and implementers that they not only acquired a good deal of AU-related content knowledge during their workshops, but also retained this knowledge over the subsequent year. For those implementers who taught the curriculum, their students also achieved a positive gain in knowledge. It can therefore be asserted that the train-the-trainer model successfully transferred knowledge across all three tiers.
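
Similarly, the pre-post test gains can be expressed as a change in average percent correct; the numbers below are invented, since the raw test data are not reproduced in this report.

```python
# Hypothetical fraction of test questions answered correctly, per participant.
pre_correct  = [0.40, 0.36, 0.50, 0.42]
post_correct = [0.72, 0.66, 0.80, 0.74]

pre_avg  = 100 * sum(pre_correct) / len(pre_correct)    # 42.0
post_avg = 100 * sum(post_correct) / len(post_correct)  # 73.0
# The evaluation cites gains of 26% to 32% across the three tiers.
print(f"average correct: {pre_avg:.0f}% -> {post_avg:.0f}% (gain: {post_avg - pre_avg:.0f} points)")
```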

Employing the ‘train-the-trainer’ model, the AU program also sought to ensure that implementers, who ultimately bear the responsibility for delivering the program to youth in their networks, were not only knowledgeable but also felt comfortable with the AU curriculum and confident in their ability to present it. Findings show that the trainers emerged from their workshop exhibiting high levels of comfort and confidence. More importantly, they were then able to instill the same high levels of these qualities in the implementers for whom they conducted workshops.

At the end of their workshop at NASA Goddard, trainers rated statements about their ‘confidence in understanding AU content’ and their ‘ability to present it’. Trainers’ mean rating for confidence in understanding AU content was 3.6 on a 4-point scale, while their mean rating for confidence in presenting AU content was 4.0 on a 5-point scale. Trainers thus left their workshop feeling comfortable and confident about conducting workshops of their own.

Rating scales were used again to measure comfort and confidence in the follow-up surveys for both trainers and implementers. That is, once trainers had had an opportunity to conduct workshops and implementers had had time to instruct students in their afterschool networks, they were asked how their respective workshops and instructional sessions had gone, how well they had been received, how comfortable they felt conveying AU content, and whether they saw evidence that their audiences would be able to use what they had been taught. These data were remarkably consistent between trainers and implementers: both groups rated their comfort and confidence at similarly high levels, trainers at 4.2 and implementers at 4.3 (both on a 5-point scale).

Trainers and implementers were also asked to rate a series of questions about their sense of preparedness for the tasks at hand: conducting workshops, for the trainers, and instructing youth, for the implementers. They were presented with the same questions both at the end of their respective trainings and in the follow-up surveys. These questions asked how sufficient they believed their training to be and whether they thought they had enough content knowledge and pedagogical knowledge to teach AU content.

The data show that both trainers and implementers report a strong sense of preparedness once they are trained, and that they maintain these high levels once they go out into the ‘real world’ to teach the AU program to their respective audiences. For trainers, a mean cluster rating of 4.4 after training held steady after they conducted workshops, while the implementers’ mean cluster rating of 4.2 after training rose slightly to 4.3 after they instructed their students.

The multi-level evaluation of the train-the-trainer dissemination model addressed questions about the program’s accountability as well as its outcomes. The train-the-trainer model has clearly demonstrated its efficacy in delivering content knowledge throughout all three tiers. Furthermore, the evidence indicates that the ‘expert’ leaders can trust trainers not only to maintain high quality in transferring AU content knowledge to implementers, but also to instill in the implementers the same levels of comfort, confidence, and preparedness that the ‘experts’ imparted to them.
Have your evaluation findings / impacts been published? If so, where?:
The final evaluation report prepared by Magnolia Consulting for the pilot training and implementation in 2007 is available below (BEEP_Evaluation.pdf).

The final evaluation report prepared by Cornerstone Evaluation Associates for the train-the-trainer effort conducted between 2010 and 2012 is available below (AU_Evaluation.pdf).
Please upload program element / activity evaluation documents (logic models, tools, reports, publications, IRB documentation, or other documents) to share with the Public, the Community, and SMD: